Posts by jenellewallace
Today, as I was building paddleboats with the fourth-graders in the after-school science mentoring program that I help run, some of the difficulties in addressing students’ misconceptions were made abundantly clear to me. This was the culminating lesson in a two-part series that challenged students to build self-propelled paddleboats, test them in troughs of water, and refine their designs based on those tests. The main challenge we focused on in building the boats was the means of propulsion—the students had previously done an introductory activity in which they propelled spools with rubber bands, so the intent was for students to use some sort of rubber band paddle device to make the boat move forward on its own.
As we were brainstorming a means of propulsion, a bright fourth-grader named Wendy became attached to the idea of using spinning CDs (which were among the materials offered) on the sides of the boat. She seemed to be laboring under the misconception that motion (specifically, rolling motion) moves objects forward rather than employing the commonly accepted idea that force causes motion. As we attempted to modify (rather than replace) her misconception, as diSessa and Roschelle would prefer, we initially decided that we should let her build her boat according to her own design and then test it to see what would happen, hoping that in the process she would notice that a paddle must push the water in order to move a boat forward. However, we were not able to control the variables in our testing scenario sufficiently so that she could see this. Due to a bit of wind and the slight (unintentional, presumably) angle of the CDs, her boat did move forward, albeit not as quickly or efficiently as some other students’ designs.
I believe this is where computational literacy comes into the picture in addressing students’ misconceptions in the sciences. Building off the definition proposed by diSessa in the first chapter of his book Changing Minds, to me computational literacy means being able to harness the full power of computers to solve problems, that is, to test theories and formulate new hypotheses. This probably means some programming ability, but I think the definition can also be broad enough to encompass other computational activities, such as manipulating graphical interfaces. If Wendy had been given the chance to model motion with a computer and test her hypotheses in this manner (similar to the activity in which high school students modeled the motion of a rocket and discovered Newton’s laws for themselves), perhaps she would have been better able to modify her misconception. In fact, rolling motion can move an object forward, but only if the parts of the object that roll come into contact with a surface that has friction, generating a forward force. In the actual classroom scenario, we were unsure how to lead Wendy to change her misconception when it seemed like her design worked well enough in a real-life testing scenario. This, then, is one of the crucial advantages of teaching computational literacy—it allows hypothesis testing under ideal conditions, in which all the variables except the one to be tested are carefully controlled, an extremely difficult scenario to set up in real life. Of course, modeling should not wholly replace real-world experiments, but computationally literate students could use it to augment their data collection, and fruitful discussions could arise from discrepancies between models and actual experiments.
Jenelle, Nicole, Andrea
For our SLATE project, we decided to create an optics toolkit. The pieces in our toolkit are targeted to teach four main optics concepts: reflection, refraction, divergence, and convergence. The kit contains mirrors of three different shapes: a plane, a triangular prism, and a semicircle. By positioning the mirrors at different angles, a user can learn about the relationship between the angle of incidence and the angle of reflection of a beam of light. The kit also has a glass prism. Since glass has a different index of refraction than air, the prism refracts the light at different angles as it passes through the glass. Finally, there are three types of lenses: double convex, plano-convex, and double concave. The convex lenses converge two incoming rays of light to a single point (the focal point of the lens), whereas the concave lens causes a single light beam to diverge into two. Nicole had some background in optics, and her knowledge was very helpful as we were trying to come up with interesting shapes for the pieces. The challenge for the user is to combine these elements in such a way as to direct a beam of light from the laser to the bullseye on the “goal” piece.
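To make the two rules the pieces embody concrete, here is a small Python sketch (not part of the kit’s software; the function names and the 30-degree example are ours, purely for illustration) of the law of reflection and Snell’s law:

```python
import math

def reflect(angle_of_incidence_deg):
    """Law of reflection: the angle of reflection equals the angle of
    incidence (both measured from the normal to the mirror)."""
    return angle_of_incidence_deg

def refract(angle_of_incidence_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle in degrees, or None when the ray is
    totally internally reflected."""
    s = n1 * math.sin(math.radians(angle_of_incidence_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# A ray entering glass (n ~ 1.5) from air (n ~ 1.0) at 30 degrees
# bends toward the normal, to about 19.5 degrees:
print(round(refract(30, 1.0, 1.5), 1))
```

This is why the glass prism bends the beam: the index of refraction changes at each surface, so the beam exits at a different angle than it entered.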
This idea makes use of an affordance of the vertical surface that may not be immediately obvious—because of the way we oriented the pieces, the user cannot direct the laser beam up or down, where it could cause eye damage. This was one of the most difficult parts of the project. We realized that when we were laser cutting the bases for the lenses and the laser, we had to ensure that the centers were all lined up and the laser beam was exactly parallel to the surface of the board. We discovered that even slight variations in the angle of the laser caused huge deviations in the way the light beam hit the various components, so Jenelle spent a long time holding the piece steady while the epoxy dried. Unfortunately, just as we finished making the pieces and testing them out, we learned about a disadvantage of the vertical surface. One of the lens pieces wasn’t properly connected to its magnet, and it fell and broke!
Meanwhile, Andrea took care of the software aspects of the project. We created images for each piece using CorelDraw and scaled them appropriately, which took longer than expected.
There are many extensions we’d like to make to this project. Ideally, the kit would include two lasers, so the user could practice using different combinations of the lenses to make the beams converge to a single point on the bullseye. The laser pointer was expensive, so we only bought one for the prototype. Also, the board really should have walls around the sides, both to address safety concerns for observers and to make it easier for the user to see where the light is going if it is not hitting an object on the board. Finally, it would be awesome to create a software component that actually traced the path of the laser beam as it travelled around the different components.
Watch our video demonstration: http://www.youtube.com/watch?v=36MnMCg5DmI
Jenelle Wallace, Megan Elmore, Nicole Zu
As amateur (but enthusiastic) bakers, we decided on the idea of modeling yeast growth and fermentation. After a bit of research, we decided that the idea was much more complicated than we originally predicted. We knew that yeast undergo anaerobic respiration and produce carbon dioxide as a byproduct, so we figured that we would measure this with a CO2 sensor. At first, however, we thought that too many variables were involved—the rate of metabolism, the rate of yeast reproduction, the temperature, and the concentration of glucose. We struggled with this for a while, thinking we might need to figure out a way to measure glucose (we considered buying a glucose monitor for diabetics) and wondering how to measure the rate of population growth for our yeast cells. Luckily, after discussing the problem at length, we realized that our thinking was too broad—since we were planning to use Active Dry yeast for the test, we were really only concerned with the period of time in which the yeast cells were reawakening from dormancy. With a little more research, we found that we could discount population growth, since yeast cells typically only double about every hour and a half (see http://bionumbers.hms.harvard.edu/bionumber.aspx?s=y&id=101310&ver=14&sbnid=104360&sver=11 for reference). We also realized that yeast undergo two phases when becoming activated—the first phase involves a fast increase in metabolic rate, while the second involves the synthesis of relevant enzymes and is much slower (Source: http://www.lallemand.com/BakerYeastNA/eng/PDFs/LBU%20PDF%20FILES/1_19WATR.PDF). We decided to focus on the first phase and model carbon dioxide production only as a function of metabolic rate, assuming that all respiration was anaerobic.
In the end, narrowing down exactly what we wanted to model was more difficult than expected, but it taught us a good lesson about the need to make simplifying assumptions, at least at first, when modeling a biological system.
Initially, writing the NetLogo code for our program was not too difficult. We decided that we wanted to write a program to predict the CO2 concentration in a bottle containing blooming yeast based on the metabolic rate of the yeast cells. In order to do this, the program would have to perform a linear regression once in every specified time period, and then adjust the model’s value of the metabolic rate to fit the real-world conditions. Megan completed the code for the regression while Jenelle wrote methods to set up our yeast-in-a-jar model.
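As an illustration of the regression idea (our actual code was written in NetLogo and lives in the project file), here is a hypothetical Python sketch: fit a least-squares line to the CO2 readings in each consecutive time window and take its slope as the estimated rate of CO2 production. The function names and window size are invented for this example.

```python
def fit_slope(times, values):
    """Ordinary least-squares slope of values vs. times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

def metabolic_rates(times, co2_readings, window=5):
    """Estimate the CO2 production rate in each consecutive window of
    readings by fitting a line and taking its slope."""
    rates = []
    for start in range(0, len(times) - window + 1, window):
        t = times[start:start + window]
        v = co2_readings[start:start + window]
        rates.append(fit_slope(t, v))
    return rates
```

Feeding each window’s slope back into the model is what lets the simulated metabolic rate track the real sensor data.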
Meanwhile, Nicole applied her superior electrical engineering skills to the problem of how to connect the PASCO CO2 sensor to the GoGo board and thus feed the data into NetLogo. Unfortunately, this turned out to be by far the most difficult part of the project. We figured out how the CO2 sensor worked by using it with the PASCO interface, but we had major difficulties connecting it to the GoGo board. First, we took apart one of the PASCO connectors and tried to solder the wires onto pins that could be plugged into the GoGo board. This failed miserably. With Marcelo’s help and a little research of our own, we realized that because the CO2 sensor is not a simple resistor like many of the other sensors, we needed to have the analog input feed directly into the GoGo board. This meant removing the connection to the 33K reference resistor that is embedded in the GoGo board ports (the figure at right is from the GoGo board manual and shows the circuit setup)—we had to take the cap off of one of the components next to the sensor port.
Unfortunately, this epiphany was still not enough to get the sensor working. After extensive testing and frustration with the multimeter, ELVIS adaptor, power source, and breadboard, we figured out that the problem was that no power was going through the sensor. With Jimmy and Paolo’s help (Paolo actually called a friend who worked for PASCO to get some advice), we figured out that the sensor needed +12 V, −12 V, and 5 V power sources simultaneously. Finally, when we connected the sensor to the GoGo board, we got meaningful output!!!
The model was not finished yet, however. We realized that the sensor was noisier than we expected, so we had to write some extra code to average the readings over short time periods so that they wouldn’t be too jumpy. The final model shows graphs of the metabolic rate (rate of CO2 production) and the total amount of CO2 present over time.
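The averaging we describe is just a running mean over the last few raw readings. A minimal Python sketch of the idea (our actual smoothing was done in NetLogo; the window size here is arbitrary):

```python
from collections import deque

def smooth(readings, window=10):
    """Running mean over the last `window` raw sensor readings,
    which damps out tick-to-tick sensor noise."""
    buf = deque(maxlen=window)
    out = []
    for r in readings:
        buf.append(r)
        out.append(sum(buf) / len(buf))
    return out
```

A larger window gives a smoother trace but responds more slowly to real changes in CO2 concentration.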
Our model could potentially serve a role in many simple biology experiments. Students could measure the effects of changing different variables (such as glucose concentration and water temperature) on the rate of yeast metabolism. Experiments with variables affecting plant growth and photosynthesis could also be performed, with our bifocal model changing values to accommodate various conditions.
Watch our video explanation of how the model works: http://www.youtube.com/watch?v=_uIPii2ojMQ
On Eisenberg’s “Mindstuff”:
“The modern desktop computer—like many other sophisticated devices in Western culture—is designed, effectively, to work like magic and to shut its users out of the culture of design and participation” (Eisenberg 19).
I’ve pondered this idea for a long time. I remember a conversation with a friend in which we discussed the fact that so few people in today’s world understand how even a fraction of the devices they use every day work. Imagine this farfetched scenario: what if someone created a virus that selectively killed off all the engineers who understand the inner workings of computers, televisions, and cell phones? Civilization as we know it today might very well collapse.
The situation raises the question: is the level of complexity found in today’s electronic devices really necessary and intrinsic to their function? Or have we created a culture of complexity around technology intended to shut out those who are not members of this educated elite of scientists and engineers? The “digital divide” is the media catchphrase for the growing schism between those who use technology and those who do not have access, but perhaps there is an even bigger digital divide between those who really understand and could replicate such technologies and the rest of us. Furthermore, the high degree of specialization in such fields means that very few engineers understand and can see the whole picture of any given technology.
What does this mean for education? I think that the way technology has developed since the 1970s has led to a patriarchal style of using technology in the classroom. If a computer is assumed to be much more complex (at least computationally) than a young child’s mind, then the child is encouraged to tease out the complexities of the computer’s software rather than utilizing the computer to explore and develop the complexities of the child’s own mind.
But how can we make the computer more interactive without going backwards? No one can deny that computers have undergone tremendous advances in the past several decades, and their complexity has naturally increased concomitantly. I think the solution is in Eisenberg’s idea that some unique power comes with manipulating things with one’s hands. Instead of sitting statically on the desk, a computer should be something that can be played with, poked, prodded, taken apart, and moved around.
On Abrahamson and Trninic’s “Toward an Embodied-Interaction Design Framework for Mathematical Concepts”
I was absolutely fascinated by the group’s description of their proportional learning activity, and I was so glad that I read the paper early this week before I went out to the fourth grade classroom I visit every week with Science in Service, a Stanford program run through the Haas Center. We try to engage the kids in really interactive, hands-on science learning activities, and this week we were teaching principles of the design process and structural engineering by having them test materials and build towers given certain constraints. We discussed the fact that triangles are one of the strongest shapes in a structure and had the kids test different materials with the “triangle test” for tension and compression. One of my students, a shy fourth-grader named Ladimir, absolutely could not see or understand the forces on the straws he had pinned together for the triangle test. Thinking that perhaps feeling the forces on his own body might help, I asked him to arrange his fingers in the shape of a triangle and taped them together. Then I applied force to the top of the triangle, just as we had been doing before with the straws. I could see the epiphany in his eyes almost immediately, and he correctly identified compression forces on the sides of the triangle, and tension along the bottom.
For me, this was an extremely vivid and timely demonstration of Abrahamson’s concept that learning activities should be “hands-in” rather than simply “hands-on”—sometimes a child’s body needs to become an integral part of the learning activity to facilitate true conceptual understanding.
This project spurred a lot of discussion about our own educational experiences as children. Personally, I think I missed out on my childhood. Why was I never encouraged to play like this as a kid? Like many children from middle-class backgrounds with educated parents, I suspect, I had a very structured childhood—school, swim practice, tae kwon do practice, chess club. There wasn’t much time left for open-ended exploration of things that interested me outside of these organized activities.
Learning through pure exploration with no guidance was very new to me. Throughout my formal education, my teachers have always erred on the side of highly structured activities. Perhaps the closest experiences I’ve had were with my father. When I was a child, he always encouraged me to develop a curiosity about the world and would lead me in explorations by encouraging me to ask my own questions, but even then I had someone more knowledgeable than myself to verify that my thinking was correct. This experience was completely different—we ended up working completely independently because the TAs were busy with other groups. It was very refreshing to move at our own pace and not be afraid of sounding ignorant when voicing our questions and speculating about the answers. However, it was also frustrating when I had a specific question—“what is this part and why is it here?”—that I was not equipped to answer and had no one to turn to for guidance. I’m not sure if this is an innate personality trait or something ingrained in me by years of worksheets with definite black-and-white answers, but for as long as I can remember I’ve been preoccupied with getting the “right” answer. Of course, I also care about the pathway to get there, but I always want some verification that I’m moving in the correct direction. With open-ended exploration, I worry that I may be meandering down a side path and completely miss the main freeway. But then again, there’s supposed to be some value in taking the road less travelled…
We approached the deconstruction of our Canon camcorder like a treasure hunt. I was almost dying of anticipation to see what was inside by the time we finally took out over fifty tiny screws that held the body pieces together. With each layer there were new surprises. Nicole took apart the lens container and we examined each piece as well as the tiny motor that slides the lenses past each other. I explored the many intricate gears inside the camera that allow the screen to rotate out on the side. I also spent over half an hour trying to pry the audio recording sensor (which we eventually used for our project) out of its tightly screwed metal container.
We also found several other potentially useful parts: a light sensor, speaker, and gears that could possibly be reused in another mechanical device.
For our repurposing project, we initially tried to incorporate the light sensor from the video recording device. However, the sensor’s output was too erratic—we couldn’t figure out how to capture it in a reliable, quantitative manner. This was probably a result of the sensor’s complexity; the GoGo board could not tease out the variations in its signal.
After trying that, we decided on the audio sensor for our final device. Though we were initially confused by the four wires (rather than only two) connected to the sensor, one of the TAs helped us solder them correctly and connect each pair to a different input channel. We rigged up a simple car, the Visible Hand, which moves by means of audio input. In keeping with the spirit of the project, the car is put together only with reusable materials such as putty and tape, and can easily be repurposed again to create another project.
We decided to work together on a new NetLogo project to try uniting our respective interests: neuroscience (Jenelle) and network science (Megan). Jenelle suggested that we try modeling long-term potentiation (LTP), a process by which neurons in the brain become more responsive to one another upon repeated, rhythmic stimulation. LTP is implicated in the formation of memory, and we thought that it would be awesome to apply the power of agent-based modeling to this interesting, complex process!
We started by brainstorming about how neurons should interact. In the body, a neuron is composed of the soma, or cell body, the dendrites, which receive information, and the axon, or shaft along which action potentials (electrochemical signals that convey information) travel. The axon has terminals that release neurotransmitters, and the soma has dendrites that receive neurotransmitters—this combination allows neurons to form synapses, in which an axon terminal of one neuron is associated with the dendrites of another in a complicated network of interactions. We decided almost subconsciously to rearrange the parts of neurons and synapses into two new categories: “neurons” which consist of a cell body, and directed “links,” whose source is the “neuron” sending the information (or more biologically, an axon terminal on one side of the synapse) and whose destination is the “neuron” receiving the information (or the dendrites on the other side of the synapse). We were surprised, on further recollection, that we so naturally modeled the actors of signal transfer rather than trying to model whole neurons themselves! Perhaps this is because Megan is a computational-network-thinker or because NetLogo makes it easiest to interact via turtles and links, not link-ish behavior rolled into a turtle.
We then puzzled for a while about how we would model the rules of neuron communication and LTP within our NetLogo program. Jenelle described how neurons actually work: they aggregate the chemical signals being received on their dendrites, and once enough signals have been received (once we surpass a certain threshold, which we called the action potential threshold), a single electric signal passes down the axon. In LTP, if a neuron is stimulated by one particular axon terminal enough times consecutively at a certain rate, that synapse becomes more sensitive, lowering the threshold of chemical activity that the neuron needs to generate action potentials in the future. We decided to model this through the links, by having them start as “weak links” (using the breed feature of links in NetLogo) that keep track of when they receive action potentials and count up how many occur spaced at the proper rate. Once a weak link has been activated in this way enough times, it becomes a “strong link” and sends a greater numerical amount of stimulus to the downstream neuron on each subsequent action potential. We aggregate stimulus at each neuron by adding the stimulus signal on each of its in-links, and neurons “fire”, upon receiving a level of stimulus above a set threshold, by telling their outgoing links to be active on the next clock tick. (We decided to separate “this” and “next” ticks of network activity so signals don’t propagate immediately through the network in one tick. This level of indirection also means it doesn’t matter in which order we process the nodes [great for unordered agent sets in NetLogo] because we only process all of one level of input before having it propagate to neurons downstream of active synapses.)
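A stripped-down Python sketch of the update rules described above (this is an illustration, not our NetLogo code; it omits the refractory period and assumes every activation is properly spaced):

```python
WEAK_STIMULUS, STRONG_STIMULUS = 1, 3
THRESHOLD = 3          # action-potential threshold
MEMORY_THRESHOLD = 4   # activations before a weak link becomes strong

class Link:
    """Directed synapse carrying stimulus from src to dst."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
        self.strong = False
        self.activated_now = 0    # stimulus delivered this tick
        self.activated_next = 0   # stimulus scheduled for next tick
        self.activation_count = 0

    def fire(self):
        self.activated_next = STRONG_STIMULUS if self.strong else WEAK_STIMULUS
        self.activation_count += 1  # simplified: assume proper spacing
        if not self.strong and self.activation_count >= MEMORY_THRESHOLD:
            self.strong = True      # long-term potentiation

class Neuron:
    def __init__(self):
        self.in_links, self.out_links = [], []

    def step(self):
        # aggregate stimulus on all in-links; fire out-links if over threshold
        stimulus = sum(l.activated_now for l in self.in_links)
        if stimulus >= THRESHOLD:
            for l in self.out_links:
                l.fire()

def tick(neurons, links):
    for n in neurons:
        n.step()
    # separate "now" and "next" so a signal takes one tick per hop,
    # regardless of the order in which neurons are processed
    for l in links:
        l.activated_now, l.activated_next = l.activated_next, 0
```

The two-phase swap at the end of `tick` is the same indirection we used in NetLogo: a neuron only ever reads the stimulus that arrived on the previous tick, so processing order never matters.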
We did a few other programmatic tricks to make the network look and act real. Jenelle coded in refractory periods for the neurons, meaning they can’t fire action potentials one after the other because the properties of ion channels in real neurons prevent neurons from being tonically active. Megan automated the network’s initial structure and display by modifying this process from what was in the Virus Network model (Figure 1 shows the initial setup of the network). Jenelle figured out how to run experiments in BehaviorSpace, and Megan added the plots of network properties over time. It added up to a fair amount of code, and now has translated into a long blog post!
After a long development cycle, we’re happy to see the conversion of links in our network from weak to strong and the emergent trend of large strong-link growth when a hub neuron (a high-degree node; one with lots of outgoing links) becomes activated by a few strong links ahead of it (notice the spike in total link activity which is correlated with the spike in strong link activity in the graph in Figure 2). If you slow down our model, you can see the propagation of signal from neuron to neuron, spaced out because of their refractory period, which looks pretty rad. We also noticed that if neurons are connected in a cycle, and the cycle is long enough that neurons are past their refractory period by the time they are excited again, we get strong, permanent feedback loops, which could correspond to interesting recurrent activity in the brain!
For our BehaviorSpace experiment, we tried varying the action potential threshold to determine what effect this had on the percentage of links in the network that became strong after a predetermined amount of time (1,000 ticks). Somewhat surprisingly, there was a huge standard deviation between runs, meaning that the random initial state of the network has a large effect on its behavior. Luckily, BehaviorSpace makes it possible to aggregate the data from a large number of trials. The results of the experiment (shown in Figure 3 which was created in Excel after we imported the BehaviorSpace text file) were somewhat surprising—intermediate action potential thresholds gave the highest percentage of strong links. Upon reflection, this does seem to make sense from a neuroscience perspective—the refractory period means that neurons cannot fire continuously even when the threshold is low, so low thresholds likely just increase the noise in the network. High thresholds, of course, lower the probability that the neurons will fire, so links take much longer to increase in strength.
We really enjoyed developing and refining our model design because it made us think very concretely about how cellular neurons interact and whether our simpler model reflects that. During our coding sessions, Jenelle commented often how articulating the LTP process in words and code was helping to solidify her own understanding of the process. Megan loved using her network-theoretical and programming skills to build a project in a problem domain she hadn’t known about before.
Note: The NetLogo file for our project as well as the text file for the behavior space experiment are located in Jenelle’s coursework dropbox.
;; Simple long-term-potentiation (LTP) model!
;; Authors: Megan Elmore and Jenelle Wallace
;; We're using as a template the code from the Virus on a Network (VN) model.
;; number-of-neurons, number-of-input-neurons, average-neuron-degree, and
;; memory-threshold are set by sliders on the interface.

breed [input-neurons input-neuron]
breed [interneurons interneuron]
directed-link-breed [strong-links strong-link]
directed-link-breed [weak-links weak-link]

globals [
  firing_rate        ;; amount of time between successive ticks for a neuron to add to its activation count
  strong_stimulus    ;; stimulation increases by this much when a signal is sent on a strong synapse
  weak_stimulus      ;; stimulation increases by this much when a signal is sent on a weak synapse
  refractory_period  ;; number of ticks for which a neuron won't send output even if it reaches the action potential threshold
  action-potential-threshold  ;; value of stimulation that a neuron needs in order to fire
]

turtles-own [
  stimulation       ;; counting up all of the excitatory inputs we've received this tick
  last-firing-time
]

links-own [
  activated-now     ;; stimulus this link is delivering on the current tick
  activated-next    ;; stimulus this link will deliver on the next tick
  last-activation-time
  activation-count
]

to setup
  clear-all
  set action-potential-threshold 3
  set strong_stimulus 3
  set weak_stimulus 1
  set firing_rate 2
  set refractory_period 2
  setup-interneurons
  setup-network
  setup-input-neurons
  ask turtles [ set last-firing-time -50 ]  ;; initialize last-firing-time so it won't interfere with the rest of the code
  ask links [ set last-activation-time -1 ]
  reset-ticks
end

to setup-interneurons
  ;; this is copied almost verbatim from the VN model - we liked how they
  ;; set up the nodes to be randomly positioned but not too close to the edges
  set-default-shape interneurons "circle"
  create-interneurons number-of-neurons [
    ;; for visual reasons, we don't put any nodes *too* close to the edges
    setxy (random-xcor * 0.95) (random-ycor * 0.95)
    set color pink
  ]
end

to setup-network
  ;; again, taken in large part from the VN model - it makes a lot of sense
  ;; to have a randomized but spatially structured initial network because
  ;; neurons in the brain link to neurons they're physically close to
  let num-links (average-neuron-degree * number-of-neurons) / 2
  while [count links < num-links] [
    ask one-of turtles [
      let choice (min-one-of (other turtles with [not link-neighbor? myself]) [distance myself])
      if choice != nobody [ create-weak-link-to choice ]
    ]
  ]
  ;; make the network look a little prettier
  layout-spring turtles links 0.3 (world-width / (sqrt number-of-neurons)) 1
  ;; set up everyone's initial in and out links too
  foreach (sort interneurons) [
    let curr-neuron ?1
    ask curr-neuron [
      ;; add an out link if we don't have one
      if (count out-link-neighbors = 0) [
        let choice (one-of (other turtles with [not link-neighbor? myself and (distance myself < 5)]))
        if choice != nobody [ create-weak-link-to choice ]
      ]
      ;; add an in link if we don't have one
      if (count in-link-neighbors = 0) [
        let choice (one-of (other turtles with [not link-neighbor? myself and (distance myself < 5)]))
        if choice != nobody [ create-weak-link-from choice ]
      ]
    ]
  ]
end

to setup-input-neurons
  set-default-shape input-neurons "circle"
  create-input-neurons number-of-input-neurons [
    set color red
    ;; place input neurons along the left edge, not too close to one another
    let good-pos false
    while [not good-pos] [
      set good-pos true
      setxy (min-pxcor + max-pxcor * 0.05) (random-pycor * 0.95)
      foreach (sort other input-neurons) [
        if distance ?1 < 3 [ set good-pos false ]
      ]
    ]
  ]
  ask input-neurons [
    let num-links (1 + random round (number-of-neurons * 0.10))
    while [count my-out-links < num-links] [
      let choice min-one-of (interneurons with [not link-neighbor? myself]) [distance-nowrap myself]
      if choice != nobody [ create-weak-link-to choice ]
    ]
  ]
end

to go
  ;; stimulate the network through two randomly chosen input neurons
  ask n-of 2 input-neurons [
    ask my-out-strong-links [ set activated-next strong_stimulus ]
    ask my-out-weak-links [ set activated-next weak_stimulus ]
  ]
  ask interneurons [
    set stimulation 0
    ;; aggregate activation of all input signals
    ask my-in-links [
      let link-stimulus activated-now
      ask end2 [ set stimulation (stimulation + link-stimulus) ]
    ]
    ;; now I've aggregated all of my input signal -
    ;; if it's strong enough, I want to activate all my outgoing links
    ;; (unless the neuron is still in its refractory period)
    if (ticks - last-firing-time = refractory_period and not (last-firing-time = -1)) [ set color blue ]
    if (stimulation >= action-potential-threshold and (last-firing-time = -1 or not (ticks - last-firing-time = refractory_period))) [
      set last-firing-time ticks
      let did-activate false
      ifelse stimulation > action-potential-threshold [
        set color white
        set did-activate true
      ] [
        set color (15 + stimulation * 10)
      ]
      ask my-out-strong-links [
        ifelse did-activate [ set activated-next strong_stimulus ] [ set activated-next 0 ]
      ]
      ask my-out-weak-links [
        ifelse did-activate [ set activated-next weak_stimulus ] [ set activated-next 0 ]
      ]
    ]
  ]
  ask weak-links [
    ifelse activated-now > 0 [
      set color green
      ifelse (not (last-activation-time = -1) and (ticks - last-activation-time = firing_rate)) [
        set last-activation-time ticks
        set activation-count (activation-count + 1)
      ] [
        set last-activation-time ticks
        set activation-count 1
      ]
      if activation-count = memory-threshold [
        set breed strong-links  ;; long-term potentiation: the synapse becomes stronger
        set color green
        set thickness 0.4
      ]
    ] [
      set color red  ;; show us which links are not activated this turn
    ]
  ]
  ask strong-links [
    ifelse activated-now > 0 [ set color green ] [ set color red ]
  ]
  ;; advance the network by one tick: signals scheduled for "next" become "now"
  ask links [
    set activated-now activated-next
    set activated-next 0
  ]
  set-current-plot "Strong Link Activity"
  ifelse (count strong-links > 0) [
    plot (count strong-links with [activated-now > 0]) / (count strong-links)
  ] [
    plot 0
  ]
  set-current-plot "Total Link Activity"
  plot (count links with [activated-now > 0]) / (count links)
  plot (count strong-links with [activated-now > 0]) / (count links)
  set-current-plot "Strong links"
  plot (count strong-links)
  tick
end
In their article, Smith and Conrey make the claim: “The generative explanations offered by ABM provide a deeper understanding of the phenomenon than do statistical explanations that simply observe that in general…a particular regularity (e.g., a correlation) is found” (91-92).
I found this quote particularly intriguing because it drove home for me the power of agent-based modeling and clarified the situations in which it might be useful. Our current science education system has ingrained in me the idea that mathematical descriptions of various phenomena are somehow inherently better than any other type of explanation. Certainly, equations have their place and offer a unique opportunity for scientists to codify what they know in a universal language. But even mathematical laws have their limitations: they cannot tell us what the individual components of a scientific or cognitive process are doing at any particular moment. Mathematical descriptions and agent-based models, then, are not competing but complementary approaches to modeling reality.
Abrahamson and Wilensky even suggest that agent-based modeling could be used as a lingua franca which “enables researchers who otherwise use different frameworks, terminology, and methodologies to understand and critique each others’ theory and even challenge or improve the theory by modifying and/or extending the computational procedures that underlie the model” (5). Perhaps sometime in the future agent-based modeling could even rise to a similar status as mathematics in terms of its use as a common language to promote interdisciplinary research, especially in the social sciences.
Of course, I worry about the ability of agent-based modeling to really be the final answer in describing any particular phenomenon. To me, it seems that its power is really in the exploration it allows and the questions it stimulates.
This model allows the user to explore evolution. Sunflowers are randomly set up on the board at the beginning and allowed to grow. The user can select either asexual or sexual reproduction and specify the mutation rate. For each generation, the user then clicks on one flower (for asexual reproduction) or two flowers (for sexual reproduction), and a new generation is created from the parents, incorporating random mutations. The screenshot below shows the sunflowers after twenty generations of evolution towards a tight spiral shape. One thing that surprised me is how quickly the makeup of the population can change, especially with asexual reproduction based on one individual—this is similar to the concept of genetic drift.
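As a rough sketch of how one generation might be produced, the asexual reproduction step could look something like the NetLogo below. This is only an illustration: the breed `flowers` and the names `petal-curl`, `population-size`, `mutation-rate`, and `mutation-size` are invented stand-ins for whatever the actual model uses.

  ;; hypothetical sketch of asexual reproduction with mutation;
  ;; all names here are illustrative, not the model's actual code
  to make-next-generation [parent]   ;; parent is the flower the user clicked
    let parent-trait [petal-curl] of parent
    ask flowers [ die ]              ;; the old generation is replaced
    create-flowers population-size [
      set petal-curl parent-trait    ;; inherit the parent's trait
      if random-float 1.0 < mutation-rate [
        ;; occasionally perturb the trait by a small random amount
        set petal-curl petal-curl + random-normal 0 mutation-size
      ]
    ]
  end

Because every offspring copies a single parent, one click can shift the whole population's makeup in a single generation, which is exactly the rapid drift described above.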
This model simulates the spread and perpetuation of different types of viruses in a population. The user can adjust sliders to modify the infectiousness of the virus, the chance that an infected person will recover, and the duration for which a person remains infectious. To explore this model, I changed the slider settings to model several well-known viruses. The screenshot below shows a model of AIDS: the virus has low infectiousness, an infected person has no chance to recover, and the duration is very long. I was surprised to see that these settings caused a much higher proportion of the population to become sick than the settings for a virus such as Ebola, which is highly infectious but only for a very short duration. It was also interesting to note that the Ebola model reached an equilibrium in the number of individuals in each class (sick, immune, healthy, total), while the classes in the AIDS model continued to oscillate indefinitely.
This model of climate change simulates incoming light from the sun, which may either be absorbed by the earth as heat or reflected back into space. The warmed earth, in turn, radiates infrared, and carbon dioxide molecules in the atmosphere reflect some of that infrared back toward the surface: the greenhouse effect. The amount of light the earth absorbs or reflects depends on the earth's albedo and the brightness of the sun, both of which can be set by the user. One thing I learned during my exploration is that changing the albedo of the earth's surface has a larger-than-expected effect: because it acts on all of the rays that hit the earth, it seems to have an even larger influence on global temperatures than carbon dioxide does.
To extend the model, I added two important influences on the carbon dioxide content of the earth's atmosphere, vegetation and humans, and I also modeled the interaction between the two. In my program, the user can add and remove as many trees and people as desired. The trees perform photosynthesis, removing from the air any carbon dioxide molecules that collide with them, and they reproduce at a specific rate. The people burn fossil fuels, adding carbon dioxide to the air in an amount that depends on the user-defined carbon-footprint parameter, and they also remove trees at a rate set by a user-defined logging constant.
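The two new behaviors described above could be sketched in NetLogo roughly as follows. The breed and slider names (`people`, `trees`, `co2s`, `carbon-footprint`, `logging-rate`, `tree-growth-rate`) follow the description but are illustrative, not the extension's actual code.

  ;; hypothetical sketch of the human and vegetation behaviors
  to step-extension
    ask people [
      if random-float 1.0 < carbon-footprint [    ;; burn fossil fuels
        hatch-co2s 1 [ set heading random 360 ]
      ]
      if any? trees and random-float 1.0 < logging-rate [
        ask one-of trees [ die ]                  ;; cut down a tree
      ]
    ]
    ask trees [
      ask co2s-here [ die ]                       ;; photosynthesis removes CO2
      if random-float 1.0 < tree-growth-rate [
        hatch-trees 1 [ rt random 360 fd 1 ]      ;; reproduce at a fixed rate
      ]
    ]
  end

Note how the feedback loop lives in just two lines: each person both emits CO2 and occasionally removes a tree, so adding people raises emissions while shrinking the very population that absorbs them.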
One important phenomenon that I discovered is the well-known idea of positive feedback. Adding more people to the model has an exponential effect on the temperature because people cause more carbon dioxide to be put into the atmosphere and they also cut down the trees, which then take up less carbon dioxide, dealing a double whammy to the earth’s climate.
There are many more extensions I'd like to make to this project. It would be interesting to model the albedo of different surfaces, such as glaciers and forests, and to see how the temperature changes as the proportion of the earth covered by each type of surface changes. Of course, to do this, sun rays would have to hit the entire surface of the earth rather than just a small part, as in the original program.
Overall, this was a great introduction to agent-based modeling. With just a few additional methods, I was able to greatly increase the complexity of a model of something as important as global climate change.
In their paper “Thinking in Levels,” Wilensky and Resnick write, “[agent based modeling] makes systems-related ideas much more accessible to younger students by providing them with a stronger personal connection to the underlying models.” Importantly, students can use the agents in a computer program as “objects to think with,” in Papert’s words.
Though I can definitely see the value in this strategy for engaging younger students in understanding complex systems, I worry about the possibility that students will incorrectly attribute intentions to the inanimate objects in their systems. I believe this is often a serious source of young scientists' misconceptions about many different processes. I'm involved with a program here at Stanford called Science in Service, which teaches science to middle school students through one-on-one mentoring and hands-on activities, and I've repeatedly seen this phenomenon in action.
During one lesson on evolution, I was working with Israel, a bright fourth-grader. We were modeling the idea that different organisms have a variety of distinct adaptations to their environments by using spoons, chopsticks, and tweezers (the “birds”) and marbles, Styrofoam balls, and paperclips (the different types of “seeds”). Each utensil is specifically suited to picking up a particular type of object. When I extended the simulation and asked Israel what might happen if the marbles and Styrofoam balls disappeared from the environment and only the paperclips were left (which were easiest to pick up with the tweezers), he gave me something like this explanation:
“The birds with the spoon and chopstick beaks want to be able to pick up the paperclips, so they will adapt to their environment.”
This is a classic example of attributing intentions to agents that in fact operate by random processes. Evolution happens because spontaneous random mutations arise in organisms, causing differences among individuals that change the composition of the species as a whole. In essence, this is also a problem with levels: confusing the intentions of individuals with those of entire species.
My concern is that if students begin to identify with the agents in their models, they will become attached to the agents' individual fates and begin to model with a goal in mind. To me, the wonderful thing about agent-based modeling is that it's nearly impossible to predict macro-level outcomes from the rules for individual agents. Perhaps with the right guidance from teachers, students can learn to differentiate between goal-driven and random processes and discover the true value of open-ended exploration.
To me, the most important thing about bringing technology, especially computer programming, into the classroom is the opportunity it gives educators to help children realize the dual power of thinking and ideas.
Learning the power of thinking means that children learn how to practice metacognition—thinking about thinking. As Feurzeig points out in his paper “Programming Languages as a Conceptual Framework for Teaching Mathematics,” programming provides concrete examples for such abstract thinking skills as “plan before acting,” “decompose the problem,” and “find a related problem.” It also assists with classification, generalization, and distinguishing between when formal rigor is necessary versus times when looser thinking is sufficient. Finally, programming teaches debugging, a necessary skill in almost any application where one is likely to make mistakes and experience the need to find and correct them.
Learning the power of ideas means using technology in such a way that it empowers ideas and allows children to discover them on their own. For example, Papert discusses a child who did poorly with grammar because she could not see the value in classifying parts of speech as nouns, verbs, and so on. The idea had been presented to her in a disempowered form: she could see no practical use for it and was therefore resistant, consciously or subconsciously, to learning it. However, when she was given the entirely different task of making a computer produce poetry, the need to classify words by part of speech became immediately obvious, since otherwise the computer would produce a jumble of nonsense. Another student had a similar experience with the idea of zero: she could not understand why her teacher talked about the “discovery of zero” until she discovered it herself through the necessity of using a zero speed in a computer program.
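Papert's poetry example can be made concrete with a tiny sketch (in NetLogo, to match the models above; the word lists are of course invented). The generator only produces grammatical lines because the vocabulary has first been classified by part of speech; drawing from one undifferentiated word list would yield exactly the jumble of nonsense described.

  ;; hypothetical sketch: a poem-line generator that works only because
  ;; the words have been sorted into parts of speech
  to-report poem-line
    let articles ["the" "a"]
    let nouns    ["moon" "river" "sparrow"]
    let verbs    ["sings" "sleeps" "shines"]
    report (word one-of articles " " one-of nouns " " one-of verbs)
  end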
Of course, it is possible to teach the power of thinking and ideas without technology, but computers offer a straightforward way to do it. And as class sizes increase, giving each child a computer will be cheaper and more feasible than giving each child individualized attention from a teacher. I see this as the third benefit of using computers in the classroom: programming offers each child a personalized learning experience, an environment in which they can experiment, test their ideas, and receive feedback.