Bertrand Schneider and Jenelle Wallace

For more details on the project (goal, learning theories, expected outcomes and so on), please consult this document.

Evolution of the project

Day 1 (Saturday): learning a lot of Processing (the basics)


Day 2 (Sunday): still learning a lot of Processing (libraries: TUIO and a physics library)


Day 3 (Monday): first trial with tags and TUIO. Software side: basic springs between tags using the physics library. Brainstorming on how to mount the brain on the supports.
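The spring behavior the physics library gives us can be sketched as a damped Hooke's-law force between each pair of linked tags. This is an illustrative Python sketch with made-up constants, not our actual Processing code:

```python
# Sketch of the spring model between two tracked tags: the force pulls
# the tags' on-screen anchors toward a rest distance (Hooke's law).
# rest_length and k are illustrative values, not the library's defaults.

def spring_force(p1, p2, rest_length=100.0, k=0.05):
    """Return the (fx, fy) force acting on p1 from a spring to p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0:
        return (0.0, 0.0)
    # Force magnitude is proportional to how far the spring is stretched
    # (positive = attraction, negative = repulsion).
    magnitude = k * (dist - rest_length)
    return (magnitude * dx / dist, magnitude * dy / dist)
```

Tags farther apart than the rest length attract each other; tags closer than the rest length push apart, which keeps the network readable on screen.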


Day 4 (Tuesday): Building the physical support (table with a semi-transparent surface, camera, projector,…)


Day 5 (Wednesday): trying to make the tag tracking more reliable (e.g. with IR markers). No success (Bertrand spent quite some time testing Collin’s multitouch table to improve the way fiducials were tracked). The solution: bigger tags on vinyl stickers. We also decided on the final setup: a table built from acrylic sheets, the projector from the SLATE system, and a camera to be bought (the Logitech one is not very good).

The evolution of the tags (from left to right): paper, paper on acrylic, mirror acrylic, vinyl sticker on acrylic

Final iteration: use the laser cutter to engrave the acrylic on one side to make the surface look frosted and minimize reflection through the tags


Day 6 (Thursday)

Jenelle is working on the physical part (putting magnets in the brain, building the supports for the tags, making all of the fiducials, and so on); Bertrand is working on the software (visualizing potentials travelling across the brain). Brainstorming about the next steps of the project.

Jenelle putting magnets on the brain and building the supports

Bertrand programming links between the tags


Day 7 (Friday):

– (Bertrand) trying to calibrate the webcam and the projector; more difficult than planned. Fixing bugs in the software.

– (Jenelle) working on the supports for each brain part.
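The webcam–projector calibration Bertrand was wrestling with boils down to mapping points seen by the camera into projector coordinates. A much-simplified sketch, assuming the camera and projector axes are roughly aligned (part of why the real calibration was harder than planned), is a per-axis linear interpolation between two known corner correspondences:

```python
# Simplified camera-to-projector calibration sketch (illustrative names).
# Real setups usually need a full homography; this assumes aligned axes.

def make_calibration(cam_corner_a, cam_corner_b, proj_corner_a, proj_corner_b):
    """Return a function mapping webcam (x, y) to projector (x, y)."""
    def cam_to_proj(x, y):
        # Normalize within the camera rectangle, then scale into the
        # projector rectangle, axis by axis.
        u = (x - cam_corner_a[0]) / (cam_corner_b[0] - cam_corner_a[0])
        v = (y - cam_corner_a[1]) / (cam_corner_b[1] - cam_corner_a[1])
        px = proj_corner_a[0] + u * (proj_corner_b[0] - proj_corner_a[0])
        py = proj_corner_a[1] + v * (proj_corner_b[1] - proj_corner_a[1])
        return (px, py)
    return cam_to_proj
```

In practice the two corner correspondences come from placing a tag at two known projected positions and reading its camera coordinates.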

Also brainstorming on a conceptual level: a better representation of the axons should look like this:

Planning on adding myelin sheaths and Schwann cells to the axons

Next iteration on how to visualize an axon:

two brain parts, with the axon of a neuron stretched between them


Day 8 (Saturday):

Jenelle worked on the supports and is almost done with them.
Bertrand fixed a few bugs in the way axons are visualized (thanks Shima!) and built a table for the system.


Day 9 (Sunday)

After several days of gluing my fingers together with five different types of glue (superglue, epoxy, Gorilla Glue, acrylic glue, superglue), I (Jenelle) finished the supports!

[Note: Acetone is great for dissolving almost all of the glue types listed above.]

Several useful tips about gluing that might seem self-evident but require more planning than I thought to deal with the uneven surface of our brain model: 

– If only using one support, try to put it at the balance point.

– Maximize the surface area of contact between the supports and the pieces.

– Use epoxy when a gap-filling glue is necessary, i.e. when the pieces don’t fit the shape of the supports exactly.

– Top off with superglue around the edges of the epoxy – it seems to be stronger.


Day 10 (Monday)

The brain is now forming a network!


And we had our first user :)

Several users suggested that it was a bit distracting to move back and forth between the physical interface and our computer program while cutting connections. Based on this feedback, we decided it would be nice to have the user interact with the physical system directly, using an IR pen whose input could be picked up by a Wii remote. So Bertrand began working on getting the Wii remote to connect to the computer while Jenelle made an IR pen based on this design:
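The Wiimote's IR camera reports blob positions on a 1024×768 grid, so once the remote is connected the remaining work is mapping those raw coordinates onto the screen. A sketch of that step (function names are ours; the actual connection goes through a Wii-remote library):

```python
# Map a raw Wiimote IR blob position to screen pixels. Depending on how
# the remote is mounted, one axis may need flipping; here we flip Y as
# an example.

IR_CAM_W, IR_CAM_H = 1024, 768  # resolution of the Wiimote's IR camera

def ir_to_screen(ir_x, ir_y, screen_w, screen_h):
    """Scale an IR camera coordinate into screen coordinates."""
    x = ir_x / IR_CAM_W * screen_w
    y = (1 - ir_y / IR_CAM_H) * screen_h  # flip vertically
    return (x, y)
```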

Day 11 (Tuesday)

Bertrand got the final setup for the projector working: we decided that it would be best to have the projector underneath the table rather than mounted above as we had originally planned. However, we realized that this led to another problem with our system: we were planning to project images on the surface of the brain, and we obviously couldn’t do this from underneath. We had already figured out that we couldn’t project directly onto the brain surface, since it was too uneven to focus on, but we had the idea of giving the user a “magnifying glass” that he or she could hold above the surface of the brain, onto which images could be projected. Unfortunately, with our new projector setup, this idea was no longer feasible.

After some discussion, we came up with the idea of displaying the image underneath the selected brain piece. Jenelle noted that one thing she has had difficulty with in her study of neuroscience is relating the brain sections that researchers use to stain and study different regions to the 3D structure of the whole brain. So she came up with the idea of a mode where the user could scroll through brain sections from top to bottom and locate the region of interest.

We took images from an online brain atlas:

It took a bit of work to coordinate the images from the atlas with our physical model. Looking at both horizontal sections and coronal sections helped.

For the first prototype, I decided to just make horizontal sections to correspond with each piece. Of course, the final version would have horizontal, coronal, and sagittal sections. The work involved importing the images into Photoshop, removing the backgrounds (so the images would look nice on the screen), adjusting the colors, and pinpointing the regions involved in the visual pathway. Here’s one example image:

Day 11 (Tuesday), continued

Bertrand spent the day working with the Wiimote and improving the visual representation of the brain’s visual pathways. The user can now cut a connection with the IR pen:
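One reasonable way to detect the cut gesture (a sketch of the idea, not our actual Processing code) is to test the pen position against each connection and sever a connection when the pen comes within a small distance of the segment between its two tags:

```python
# Cut detection as a point-to-segment distance test. The threshold is an
# illustrative value in pixels.

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def pen_cuts(pen, tag_a, tag_b, threshold=10.0):
    """True when the pen passes close enough to the connection to cut it."""
    return point_segment_distance(pen, tag_a, tag_b) < threshold
```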


Day 12 (Wednesday)

Building the user interface: there are now three buttons at the top of the screen, where you can select different modes:

  • Visual pathway, which displays simplified connections to highlight how information travels from the eyes to the visual cortex
  • Network, which is the default mode
  • Structure, which displays horizontal slices of the brain
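The mode selection itself is just hit-testing the pen position against the button row. A minimal sketch, with assumed button dimensions (the real layout values differ):

```python
# Three mode buttons in a row along the top edge of the screen.
# BUTTON_W and BUTTON_H are assumed values, not our actual layout.

MODES = ["visual pathway", "network", "structure"]
BUTTON_W, BUTTON_H = 120, 40  # pixels

def mode_at(x, y):
    """Return the mode under (x, y), or None if no button was hit."""
    if y > BUTTON_H:
        return None
    index = int(x // BUTTON_W)
    return MODES[index] if 0 <= index < len(MODES) else None
```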

Day 13 (Thursday)

We worked on the final calibration of the system in the atrium where the expo would be. We had the amazing realization that sunlight contains IR light (duh)! After lots of different solutions (we tried putting posterboard and fabric around the edges of our table), we decided that a black fabric shield was the only fix.

Below is a screenshot of the “structure” mode, where the user can go through the different slices of a specific part of the brain by moving the IR pen on the image.
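The scrolling in "structure" mode can be sketched as mapping the pen's vertical position over the image to one of the N atlas slices (parameter names here are ours, for illustration):

```python
# Map a pen y-coordinate over the projected image to a slice index, so
# sliding the pen scrolls from the top of the brain to the bottom.

def slice_index(pen_y, image_top, image_height, n_slices):
    """Return a slice index in [0, n_slices - 1] for the pen position."""
    frac = (pen_y - image_top) / image_height
    frac = max(0.0, min(1.0, frac))  # clamp to the image bounds
    return min(int(frac * n_slices), n_slices - 1)
```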

We also loaded the brain slice images into the program, and struggled a bit with resizing them and getting the orientation correct (the projector flips the images horizontally, so the text was all backwards).


Day 14 (Friday) – The Presentation


Above, Bertrand demonstrates the system. Notice the webcam between the two eyes: what the brain perceives is displayed in the bottom left corner of the table, so the user can directly see how cutting different connections affects what the brain sees.