Assignment 5 – Creating a Bifocal model


Electronix SLATE toolkit, Jenny and Shima


 


As electrical engineers, we were really excited to create a tangible toolkit for teaching circuits as this week’s SLATE assignment. Building a circuit and analyzing its voltage and current is not a fun task for someone who has trouble with the basic electrical concepts, and that hypothetical someone, unfortunately, represents a large population of students, even at the higher-education level.

Several factors have been cited as reasons for students’ difficulties with electrical concepts, and the one heard most often is, “It’s just too abstract!” Even in a BBL session, one of our TAs started explaining a simple circuit by saying, “Current is an abstract entity created to make circuit analysis possible!”

That is true for a considerable number of students: they have no real notion of current and voltage. For them, these are just abstract concepts, represented as numbers in their science problems or values to be read from a voltmeter. Even if a student manages to picture DC current and voltage and make them more concrete, there is a great chance of getting lost in the AC world.

The tragedy discussed above is why we appreciated building a tool that makes learning electrical concepts more tangible and visual: a tool that brings a circuit and its properties out of the abstract world and closer to the grounded, concrete world.

As the first step in designing our Electronix toolkit, we drew basic circuit components in CorelDraw. For some of these components, like resistors, it took a while to design a satisfactory, artistic shape. Jenny redrew several components after finding that she needed to make them more consistent, with the same size of wire extending from each component. Finally, we came up with a consistent, eye-pleasing shape and design for each one. The next step was choosing the color of the acrylic sheet. We decided that all components would be the same color except the lamp, in order to unify the look of our circuit.


 

 

Cutting out connectors for the components was our next step. This step was a little challenging, however: it was hard to set the power and speed of the laser cutter at the right levels to cut an optimal hole in the middle of each connector for its magnet. After playing with CorelDraw’s line settings and the laser cutter’s settings for a while, we produced connectors with close-to-perfect holes for their magnets. Now it was time to assemble the components on their connectors and put them on the board. You can see the result in the following picture.

Circuit Components

 

After mounting all our circuit components on their connectors, we designed several circuits at different difficulty levels, in order to scaffold our users through solving circuits. The simplest level consists of a circuit with just one component besides the voltage source, and each level gets more complicated, until the final level is a circuit with several components, three of them located in parallel branches.

At each level, the associated circuit is projected on the board, and the player fills in the blank parts of the circuit with any component. After the player completes the circuit and starts the simulation, the circuit’s electrical current is shown as a series of yellow balls of light traveling around the circuit with a specific speed and density. Different elements of the circuit, and the circuit’s structure itself, affect the balls’ speed and density. For example, most of the current balls pass rapidly through the branch with low resistance, while a few of them diminish because of the electrical loss in a resistor; this loss is shown by those balls popping. As the resistance of the circuit increases, the speed of the balls decreases, more balls pop, and hence the density of the whole string decreases as well.
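The rules above can be sketched in a few lines. This is an illustrative Python sketch, not the actual SLATE implementation; the speed and loss constants are our own assumptions.

```python
# Illustrative sketch of the "current balls" rules: ball speed scales
# with branch current (Ohm's law, I = V / R), and each resistor pops a
# fraction of the balls passing through it. Constants are invented.

def branch_current(voltage, resistance):
    """Current through a branch by Ohm's law, I = V / R."""
    return voltage / resistance

def ball_speed(voltage, resistance, speed_per_amp=10.0):
    """Ball speed is proportional to current: low resistance, fast balls."""
    return speed_per_amp * branch_current(voltage, resistance)

def balls_surviving(n_balls, resistance, loss_per_ohm=0.01):
    """Each ohm of resistance pops a fraction of the balls (rounded)."""
    survival = max(0.0, 1.0 - loss_per_ohm * resistance)
    return round(n_balls * survival)

# A 9 V source driving two parallel branches:
print(ball_speed(9, 10))         # low-resistance branch: fast balls
print(ball_speed(9, 90))         # high-resistance branch: slow balls
print(balls_surviving(100, 90))  # fewer balls survive the big resistor
```

With these constants, the low-resistance branch runs its balls nine times faster than the high-resistance one, which is the visual contrast the toolkit aims for.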

Players can also observe that without a voltage source there are no yellow lights, because there is no current. They can likewise see that if they place a diode backwards in the circuit, the diode acts as a wall and blocks the balls’ movement. Furthermore, Electronix visualizes the distribution of current among the parallel branches of a circuit. For example, if a user fills one of the parallel branches with wire and the other two with resistors, the ball distribution demonstrates how almost all of the current passes through the wire branch, which has very low resistance, while the other two branches carry almost no current.

We hope that with the Electronix toolkit, learning about electricity and related concepts becomes more concrete for learners. For this week’s SLATE assignment, however, we were only able to project two different circuits on the board, as a free play and a challenge, so we had to give up some of our circuit structures. Furthermore, since this assignment covers only the physical design of the toolkit, this prototype does not include the demonstration of the current as a series of lights on the board.

 

You can watch our demo here:


 

 

 

MadLibs: Enky, Demetric, and Kevin’s SLATE Toolkit


For this week’s assignment, we developed a MadLibs application using the SLATE toolkit. We figured this would be a fun way to give three- to five-year-olds (and older kids who love playing MadLibs) a physical way to play the time-honored game. MadLibs is the game where children insert specified parts of speech into blank spaces in a predefined story. For example, one player might say, “I need a verb, a noun, and two adjectives.” After another player gives the specified types of words, the reader inserts them into the story, which comes out as a (typically humorous) user-customized tale. We seek to extend this type of game through the SLATE interface by allowing players to rapidly prototype their stories.
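The core fill-in-the-blanks mechanic can be sketched in a few lines of Python. The template and word lists below are invented for illustration; the actual game runs through the SLATE wheels, not this code.

```python
# Minimal sketch of the MadLibs mechanic: fill each part-of-speech
# blank in a story template with a randomly chosen word.
import random

TEMPLATE = "The {adjective} {noun} decided to {verb} all day."

WORDS = {
    "adjective": ["kooky", "slimy", "enormous"],
    "noun": ["wizard", "pancake", "robot"],
    "verb": ["juggle", "snore", "dance"],
}

def madlib(template, words, rng=random):
    """Fill every blank in the template from the matching word list."""
    return template.format(**{part: rng.choice(opts)
                              for part, opts in words.items()})

print(madlib(TEMPLATE, WORDS))
```

Each turn of a physical wheel corresponds to picking a different entry from one of the word lists.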

 

MadLibs was designed with younger children in mind. Although many other products have taken a more scientific spin, we decided to work with parts of speech for this particular project. We did this because our group felt that English and Language Arts have a fundamental place within the classroom, and that technology created in the Fabrication Lab can do wonders for Language Arts education. We envision this game being very useful because it allows children to rapidly create their stories in a fun and friendly format. This, in turn, helps children learn more about the different parts of speech, which lets them develop their own potentially latent creative skills in sentence construction, and from there in creative writing and reading comprehension. The faster children learn how words work, the better they understand what those words mean. Furthermore, we feel that our product is fun, because it allows kids to make kooky sentences.

 

Our game operates by having the learner position the noun, adjective, and verb wheels in their corresponding places at the top of the white space. Demetric then helped generate a sample MadLib: a simple story requiring the user to input two nouns, two verbs, and two adjectives. For this particular project, we were unable to code the specified words into the turning of the gears, so we will have to Wizard-of-Oz the presentation. (However, we came up with a funny story that will have funny and profound implications.) The user can turn each wheel, which points to different positions on the wheel, giving a combination of at least four words per wheel (or more, since the SLATE toolkit allows for relative rather than absolute rotation). Spinning a wheel brings up different words, which changes the word that ‘appears’ in the MadLib story. We would have loved to include images of the nouns, verbs, and adjectives, since they would appeal to younger children and make the game stronger and more interactive, but that was ruled out by a limitation of the engine.

 

One idea we came up with, which we found potentially interesting, would be letting the player record their own words by voice (much like Colin, Jain, and Shima’s Process Pad project). It would be cool if one day we could make this a reality. We would also find it cool, if Tiffany weren’t so constrained with her coding, to have a random MadLib generator built into the program. We loved working with this particular toolkit, and if we had an extra week, I’m sure we would have created something else new with it!

We also must note that we don’t have our video up yet because Tiffany still hasn’t sent us the code she was going to help us with to finish the project. We will put our video up ASAP (most likely tomorrow).

 

 

https://rapidshare.com/files/3345421771/IMG_0300.MOV (the file is too big to fit on this site)

 


Animatix


Our project this week is Animatix, a SLATE toolkit for learning how to pose and animate a marionette. A user-created marionette can be mounted by its limbs, or other connection points on the body, onto SLATE connectors and then posed on the SLATE board. A user of Animatix is shown a starting position for the limb connectors; he or she places the marionette into this position and is then shown an “ending” position toward which to move the marionette. Ideally, as the user changes the marionette’s pose, he or she would save frames, or snapshots of all connector positions, and the series of frames recorded on the way to an ending position could be saved as the solution to a challenge and played back, or displayed incrementally as hints to other users.

We think this tool could be useful in art classes as a digitally-enhanced artist’s dummy; the user can not only reposition the dummy but also record and review sequences of movement. We use a bipedal, upright model for the marionette because its motion makes physical sense on the upright SLATE surface. Gravity also factors into our design because marionette pieces move into position by balancing the force of the SLATE connectors with the downward force of gravity. It may not make for a realistic simulation of human motion (at least, until we can build a better marionette that can look like it’s holding itself up and isn’t so strongly affected by gravity!), but it is useful for learning to control an actual marionette or puppet with strings and rods. At least, Animatix allows users to think about and act with movement in a semi-realistic physical system, and users reinforce their learning of the system’s physical affordances and constraints by recording and playing back animation experiments.

We started constructing this project by finding a suitable marionette model. We are currently using the model from http://www.scribd.com/doc/13606976/Marionette-Outline, having converted it into laserable form in CorelDraw using the centerline outline conversion for bitmap images. We prototyped the marionette design using cardboard and string, and realized quickly that 1) string doesn’t give enough support to the limbs, so we couldn’t “push” the limbs into non-hanging positions using string, and 2) tying the joints together using string considerably restricts joint movement. From these observations, we decided to build the final prototype out of acrylic, fasten the joints with brads (or rather, LEDs, since those were what we had on hand!), and attach the limbs to the connector pieces using acrylic rods that would support the “push” movements we would want from limbs.
We also fashioned the connector pieces out of acrylic, mounting a simple grip (an approximately thumb-sized rectangle welded to a half-circle on one side) with acrylic and vinyl labels onto the template connector provided for the magnetic base pieces. The pinch-able grips on the connectors afford rotation of the connector pieces, so users can position the ends of limbs into a small range of rotated positions as well as x-y translations. Our laser and vinyl cutter skills from the previous projects made this design work very straightforward, though we still had difficulty aligning the vinyl stickers and gluing acrylic pieces by hand. (We’re getting better with practice!) 

For the gameplay, we would like to have implemented code for saving and displaying piece positions, and the corresponding marionette position, one frame at a time. Coram gave us some great ideas for how to design this code and integrate it into the existing SLATE source, but we didn’t have enough time to figure out how to translate from the connector positions into positions for each marionette piece, much less how to get all the pieces drawn! Our current puppet also makes life a little difficult, because the same connector position can correspond to several different body positions based on where the limbs had been before the move and whether the user has upset the joints supporting limbs that may have been working against gravity.

Instead, we decided to use the challenges to hold all positions. The user can start in challenge mode, move their connectors to the correct positions, then rotate the challenge piece to show a new set of positions to which to move their connectors. [We accomplished showing the challenges by saving them in free play first, which drops the full set of positions into the solutions section of library.xml and just the lowest and highest pieces into the challenges sections. Then, we just pasted all of the solutions in as challenges! A quick hacky repurposing of the existing code...] The user cannot currently save frames of their marionette’s motion between the two challenge positions he or she picked, but let’s use our imaginations and interact with Animatix by moving the marionette into a sequence of poses that will reach the final position.

 

 

NetLogo Simulation

The Pitfalls of the Pendulum – Jeff, Dan, and Rosie’s Bifocal Model


Our model is designed to simulate a simple pendulum:

  • a point mass suspended by a rod or cord which is massless, inextensible, and always taut;
  • which moves in 2-D;
  • which does not lose energy to friction;
  • and which does not lose energy to air resistance.

The time it takes a simple pendulum to complete one back-and-forth oscillation is called its period. Our model can be used to explore the relationship between the length of a simple pendulum and its period by examining the movement of the simulated pendulum as well as that of a physical pendulum sensed via a GoGo board.
The model allows the user to set the length and maximum angle of the simulated pendulum. In order to more accurately model the real-world behavior of a pendulum, it was necessary to relax the last two assumptions listed above. Thus, the model also allows the user to set the environmental friction (which encompasses both the mechanical friction the pendulum experiences through its rotation and the friction due to air resistance). This parameter is designed to be fit to the observed behavior of the physical pendulum.

 
The model is set up to run a series of trials with simulated pendulums of increasing length, dropping the pendulum from the specified angle each time. After each run the simulation plots the length of the pendulum against its period, calculated as the time necessary for the pendulum to pass through the vertical for the first time, multiplied by four.
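The period measurement described above (time to the first vertical crossing, times four) can be sketched in Python. This is our own illustrative integration, not the team's NetLogo code; the timestep and release angle are assumptions.

```python
# Sketch of the period measurement: integrate the simple-pendulum
# equation theta'' = -(g / L) * sin(theta) in small timesteps, time the
# first pass through vertical, and multiply by four.
import math

def simulated_period(length, max_angle_deg, g=9.8, dt=0.0001):
    """Period of a frictionless pendulum released from rest."""
    theta = math.radians(max_angle_deg)  # start at the maximum angle
    omega = 0.0                          # angular velocity
    t = 0.0
    while theta > 0:                     # stop at first vertical crossing
        alpha = -(g / length) * math.sin(theta)
        omega += alpha * dt
        theta += omega * dt
        t += dt
    return 4 * t                         # quarter swing, times four

# Small-angle check against T = 2*pi*sqrt(L/g):
print(simulated_period(1.0, 5))            # simulated period
print(2 * math.pi * math.sqrt(1.0 / 9.8))  # formula value, about 2.007 s
```

At a small release angle the two numbers agree closely; at larger angles the simulated period grows slightly longer than the formula predicts, which is physically correct.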

 
During each run the model also monitors a light sensor attached to a GoGo board. This light sensor is positioned relative to the physical pendulum such that it is blocked when the physical pendulum passes through the vertical. Thus the model calculates and plots the period of the physical pendulum in the same fashion as it does the simulated pendulum. By adjusting the length of the physical pendulum’s string to move the pendulum between several pre-positioned light sensors, the user may adjust the physical pendulum’s length similarly to how the model adjusts the length of the simulated pendulum. Over a series of runs this plot describes the relationships between the lengths and periods of the simulated and physical pendulums, permitting the behavior of each pendulum to be examined singly as well as both to be compared.

 

The complete model: physical pendulum on the left, NetLogo simulation on the right.

 

Our first major challenge in this project was interpreting the mathematics of a simple pendulum within a dynamical context. We were able to find an equation for the angular acceleration of a simple pendulum easily enough, but were stymied by the constant factors present in the system: friction and gravity. What meaning did "9.8 m/s^2" have within NetLogo? We needed to figure out how we could interpret a pendulum of length "2" in terms of meters, as well as how gravitational force was distributed over the timesteps in our model.

 

While accounting for different time slices was a simple fix (just multiply gravity by the timestep), length was a little more complicated. We eventually decided that we could preserve the relationship between length and period as long as we were consistent in how we incremented the length between tests, i.e., by using the original length of the pendulum as our "standard unit". We were then able to fit the friction to the real-world pendulum within a given run. We did not expect our simulation to model the real-world pendulum perfectly; in fact, we would have been satisfied if the slopes of the curves had merely been similar, indicating that the same kind of period-length relationship held for the simulated and real-world pendulums. Instead, we were pleasantly surprised to find that our simulation achieved a very good fit.

 

The simulation after a run has concluded. The periods of the simulated and physical pendulums have been plotted at left.

 
Our second major challenge was to verify the period of the physical pendulum. We had originally planned to position a light sensor at what would be the vertical point beneath the pendulum as it swung overhead (first photo below). However, we could not detect the rapid transit of the pendulum given the size of the weight: it didn’t block enough of the ambient light. For this reason, we decided to mount a series of light sensors on the side of the pendulum’s support so that the pendulum would swing in front of them (second photo below). By building little “shields” around the sensors, we were able to detect the pendulum’s transit, especially when we augmented the light by holding a cell phone’s flash in front of the pendulum to increase contrast. Positioning the light sensors in this fashion also served as an easy way to mark the various lengths at which we were to test the pendulum.

 

Light sensor resting beneath the pendulum.

Light sensors mounted on the pendulum’s support

We created a BehaviorSpace experiment that tested the effect of varying string length on the period of the pendulum. The virtual string varied in length from 1 to 10 patches, in increments of 1, for a total of 10 runs. We found that the period was approximately 1 each time, which contradicts Huygens’s law for the period: period = 2*pi*sqrt(string-length/gravity). Under this law, increasing the string length should increase the period. Our model is therefore flawed, but at this point we aren’t sure what the problem is.

 
In terms of educational usefulness, we think that the pendulum model (and GoGo board / NetLogo bifocal modeling in general) is theoretically promising for teaching students about the process of science. The project nicely illustrated the challenge of building a model that explains real-world data, while carefully refining our physical mechanism and measurement methods to ensure that our data were reliable. Practically, however, it was sometimes very challenging to make NetLogo and the GoGo board do what we wanted. These challenges distracted us from the scientific content, though they did illustrate the frustrating and fussy nature of real science.

 
Basic guidance on how to model a pivoting object and resize an object from its end rather than its middle was provided by the animated-spring demo from http://turtlezero.com/download.php. Many thanks.


Sands of Time – Anne and Andrea’s Bifocal Model


video:

http://www.youtube.com/watch?v=HcHWotl34oY

Our model is a bifocal model that simulates sand passing through an hourglass (a half-minute glass, in our case). As you can see in photo 1, we built the hourglass from two 8-ounce water bottles, which we filled with sand.

Photo 1:

 

As you can see in photo 2, we lined the outside of the water bottle with light sensors. When sand builds up and blocks the light sensor, the output of the sensor changes. We hooked the 7 light sensors to the GoGo board and connected that to our NetLogo model.

Photo 2

Photo 3 displays our computer model. On the left-hand side, we have a graph that depicts the output of our 7 sensors. As a sensor is covered, its output peaks, which you can see on the graph. We also have controls that adjust the density of the simulated sand and the probability with which sand falls from the “spout”, which in turn affects the rate of falling sand. On the right-hand side, we have the simulated bottle. We simulated falling sand with brown patches that fall from the centerpiece and build up into a pyramid-esque sand pile. Importantly, there is more than one “spout” for our falling sand. Additionally, as the sensors are covered by sand, they trigger red dots that appear next to the bottle. These red dots show us that our models are synchronized.

Photo 3

 

The following pictures display the computer model at different stages of the falling sand simulation.

 

Photo 4

Photo 5


In this project, our challenges fell into two categories: physical model challenges and NetLogo problems. On the physical side, our initial problem was how to measure changes in the sand. At first we wanted to measure the sand by placing a weight sensor at the bottom of the bottle. This weight sensor did not work because it was square and the bottle was circular. We attempted to alter the bottle so that it would funnel sand only onto the square, but this did not work. Additionally, the weight sensor did not respond linearly to additional sand, so it was not sensitive enough for our purposes. Then we tried placing several smaller weight sensors on the sides of the bottle in hopes that the sand would build up and trigger the pressure sensors; again, the sensors were not sensitive enough. Finally we tried light sensors on the sides. These worked, because their output changed as the sand built up. We did face some challenges getting the light sensors to lie flat against the bottle: while glue did not work, heavy-duty tape did. We also had to make sure that we put the bottle in direct lamplight, because otherwise the light hit the sensors unevenly. In the future, we would probably use bottles without ridges on the sides, as the ridges created additional shadows and a non-uniform surface.

While our NetLogo code did work, it was very buggy, and the initial position of the turtles would sometimes be outside the hourglass. At office hours, we learned of two other models related to what we wanted to do. The first was the GoGo model, which let us collect data directly from the GoGo board so that we did not have to export it to NetLogo later. The other was the sand model, which used patches instead of turtles and just changed the color of a patch to represent sand. By merging the code for the outline of the hourglass, the GoGo code, and the sand code, we were able to build the model we wanted. The last challenge was getting the bar on the side to move as the sand did. The general idea was that if sensor X read above a threshold, then the patches corresponding to that spot should be changed to red. After some more tweaking and changing of dimensions, we got it to work.
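The sensor-to-display rule can be sketched in a few lines. This is an illustrative Python sketch rather than the NetLogo code; the readings and threshold are invented.

```python
# Sketch of the rule "if sensor X > threshold, turn that spot red":
# map raw light-sensor readings to the heights whose marker should
# light up beside the simulated bottle.

THRESHOLD = 600  # assumed raw GoGo reading that means "covered by sand"

def covered_markers(sensor_readings, threshold=THRESHOLD):
    """Return the sensor indices (heights) whose marker should be red."""
    return [i for i, value in enumerate(sensor_readings)
            if value > threshold]

# Seven sensors, bottom to top; sand has buried the lowest three:
readings = [750, 730, 690, 410, 380, 365, 350]
print(covered_markers(readings))  # -> [0, 1, 2]
```

Running this on each tick keeps the red markers synchronized with the physical sand level, which is exactly the synchronization check described above.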

In terms of the learning activity, we believe this model could be useful for children learning about differential rates. Our model is flexible: we can easily twist apart the pieces of the bottle and substitute the inside materials. If we chose a finer sand, the rate of falling would increase; similarly, if we chose a coarser sand or a more viscous material, the rate of falling would likely decrease. Additionally, we could play with mixing materials and phases (liquids and solids, for example).

 

We uploaded our NetLogo file to Coursework :)


Yeast Fermentation Bifocal Model


Jenelle Wallace, Megan Elmore, Nicole Zu

 

As amateur (but enthusiastic) bakers, we decided on the idea of modeling yeast growth and fermentation. After a bit of research, we realized that the idea was much more complicated than we had originally predicted. We knew that yeast undergo anaerobic respiration and produce carbon dioxide as a byproduct, so we figured that we would measure this with a CO2 sensor. At first, however, we thought that too many variables were involved: the rate of metabolism, the rate of yeast reproduction, the temperature, and the concentration of glucose. We struggled with this for a while, thinking we might need to figure out a way to measure glucose (we considered buying a glucose monitor for diabetics) and wondering how to measure the rate of population growth for our yeast cells. Luckily, after discussing the problem at length, we realized that our thinking was too broad: since we were planning to use active dry yeast for the test, we were really only concerned with the period of time in which the yeast cells were reawakening from dormancy. With a little more research, we found that we could discount population growth, since yeast cells typically only double about every hour and a half (see http://bionumbers.hms.harvard.edu/bionumber.aspx?s=y&id=101310&ver=14&sbnid=104360&sver=11 for reference). We also learned that yeast undergo two phases when becoming activated: the first phase involves a fast increase in metabolic rate, and the second involves the synthesis of relevant enzymes and is much slower (source: http://www.lallemand.com/BakerYeastNA/eng/PDFs/LBU%20PDF%20FILES/1_19WATR.PDF). We decided to focus on the first phase and model carbon dioxide production only as a function of metabolic rate, assuming that all respiration was anaerobic.

In the end, the process of narrowing down exactly what we wanted to model was more difficult than expected, but it taught us a good lesson about the need to make simplifying assumptions, at least at first, when modeling a biological system.

Initially, writing the NetLogo code for our program was not too difficult. We decided that we wanted to write a program to predict the CO2 concentration in a bottle containing blooming yeast based on the metabolic rate of the yeast cells. In order to do this, the program would have to perform a linear regression once in every specified time period, and then adjust the model’s value of the metabolic rate to fit the real-world conditions. Megan completed the code for the regression while Jenelle wrote methods to set up our yeast-in-a-jar model.
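The regress-and-adjust loop might look like the following sketch. The data, window, and adjustment gain are our own assumptions for illustration, not the team's code.

```python
# Sketch of the fitting step: once per time window, fit a line to the
# recent CO2 readings and treat the slope as the observed metabolic
# rate, then nudge the model's rate toward it.

def linear_regression(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def adjust_rate(model_rate, observed_slope, gain=0.5):
    """Move the model's metabolic rate part-way toward the observed one."""
    return model_rate + gain * (observed_slope - model_rate)

times = [0, 1, 2, 3, 4]          # seconds in the current window
co2 = [400, 412, 421, 433, 441]  # ppm readings (made up)
slope, _ = linear_regression(times, co2)
print(round(slope, 2))           # observed CO2 production rate in ppm/s
print(adjust_rate(8.0, slope))   # model rate nudged toward the data
```

The gain controls how aggressively the model tracks the sensor; a value below 1 keeps one noisy window from swinging the model too far.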

Meanwhile, Nicole applied her superior electrical engineering skills to the problem of how to connect the PASCO CO2 sensor to the GoGo board and thus feed the data into NetLogo. Unfortunately, this turned out to be by far the most difficult part of the project. We figured out how the CO2 sensor worked by using it with the PASCO interface, but we had major difficulties connecting it to the GoGo board. First, we took apart one of the PASCO connectors and tried to solder the wires onto pins that could be plugged into the GoGo board. This failed miserably. With Marcelo’s help and a little research of our own, we realized that because the CO2 sensor is not a simple resistor like many of the other sensors, we needed to feed the analog input directly into the GoGo board. This meant removing the connection to the 33K reference resistor that is embedded in the GoGo board ports (the figure at right is from the GoGo board manual and shows the circuit setup): we had to take the cap off of one of the components next to the sensor port.

Unfortunately, this epiphany was still not enough to get the sensor working. After extensive testing and frustration with the multimeter, ELVIS adaptor, power source, and breadboard, we figured out that the problem was that no power was going through the sensor. With Jimmy and Paolo’s help (Paolo actually called a friend who works for PASCO to get some advice), we figured out that the sensor needed +12 V, -12 V, and 5 V power sources simultaneously. Finally, when we connected the sensor to the GoGo board, we got meaningful output!!!

The model was not finished yet, however. We realized that the sensor was noisier than we expected, so we had to write some extra code to average out the readings over short time periods so that the readings wouldn’t be too jumpy. The final model shows graphs of the metabolic rate (rate of CO2 production) and the total amount of CO2 present over time.
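The averaging step might be sketched like this; the window size and readings are invented for illustration.

```python
# Sketch of the smoothing step: average each sensor reading with its
# recent neighbors (a running mean) so the plotted CO2 curve isn't jumpy.
from collections import deque

def smoother(window=5):
    """Return a function that yields the mean of the last few readings."""
    recent = deque(maxlen=window)
    def smooth(reading):
        recent.append(reading)
        return sum(recent) / len(recent)
    return smooth

smooth = smoother(window=3)
noisy = [400, 440, 390, 430, 410]
print([round(smooth(r)) for r in noisy])  # -> [400, 420, 410, 420, 410]
```

A wider window gives a calmer plot but makes the graph respond more slowly to real changes in the metabolic rate.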

Our model could potentially serve a role in many simple biology experiments. Students could measure the effects of changing different variables (such as glucose concentration and water temperature) on the rate of yeast metabolism. Experiments with variables affecting plant growth and photosynthesis could also be performed, with our bifocal model changing values to accommodate various conditions.

 

 

 

 

Watch our video explanation of how the model works: http://www.youtube.com/watch?v=_uIPii2ojMQ


Bifocal Modeling – Rain

Jenny Moryan, Colin Meltzer, Nicole Roach 

The model that we created this week is a simulation of rain drops on a window. We wanted to develop an agent-based model for which we would be able to create a similar physical model. We began by programming the NetLogo model to randomly display raindrops across the screen as shown below.

When the model runs, raindrops within a radius of 1 combine to form larger raindrops and move down the screen. As they move down the screen, each drop’s pen draws its path, just as real raindrops leave trails. An example of a final image is below.
As we began to consider how to make this a bifocal model, we decided to use the video extension in NetLogo. We had many problems getting this to work. Jenny tried to add the extension on her PC while Colin tried on his Mac; neither of us could make the extension work with NetLogo 5.0 beta. Colin finally determined that it would work with NetLogo 4.1.3, so we worked on his computer. Once we got the video extension working, we had issues getting the webcam to work correctly. We later realized that Colin had left his Photo Booth application open, and this affected which webcam NetLogo was looking for.

Once we had the camera functioning, we worked to find a good raindrop color that the camera could see on the white acrylic. We experimented with blue and pink, and finally determined that green “rain” would show up best. We also adjusted the contrast and resolution of the model to see how much of the rain water would appear on the screen. Below is an image of the green rain during our testing.

After we tinkered with this, we tried to get the color to appear as patches in the model. Determining the correct way to use pcolor, with RGB and HSB, to best differentiate between the white and the green took some testing. We first tested the range of numbers by clicking on the screen and displaying the RGB and HSB values. Based on these results, we created a function that finds the green patches and places a “raindrop” there. So, instead of placing dots randomly on the screen, our model creates the raindrops in the same spots as in the physical model.
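The green-patch test might look like the following sketch, written in Python rather than NetLogo. The RGB thresholds are invented, since the real values were tuned by clicking the screen and reading them back.

```python
# Sketch of the color test: decide whether a pixel is "green rain" on
# the white acrylic, and collect the grid cells where drops belong.

def is_green_rain(r, g, b):
    """Green dominates both other channels, and the pixel isn't white."""
    return g > 100 and g > r + 40 and g > b + 40

def raindrop_cells(pixels):
    """Return (x, y) cells where a simulated raindrop should be placed."""
    return [(x, y)
            for y, row in enumerate(pixels)
            for x, (r, g, b) in enumerate(row)
            if is_green_rain(r, g, b)]

frame = [
    [(250, 250, 250), (60, 200, 70)],    # white acrylic, one green drop
    [(245, 248, 244), (240, 246, 241)],  # all white
]
print(raindrop_cells(frame))  # -> [(1, 0)]
```

The near-white pixels fail the dominance test, so only the genuine green drop produces a simulated raindrop.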

Below are screen captures of our NetLogo model. As you can see, the first image shows the green raindrops on the white acrylic. (Note that the original setup button and number-of-raindrops slider are still there but were not used with the physical model.)

Once setup-rain is pressed, blue dots appear in the same locations as the green dots, which let us test the accuracy of our model, as seen below.
Since we wanted our model to be consistent, we attached the webcam to the acrylic so that the camera would turn with it, letting us show the actual model behind the NetLogo simulation in real time. Below is a picture taken during our construction of the webcam post.
Once this was constructed, we placed tape around the edges of the viewing area so we could know exactly where to place our raindrops for the simulation.
During our testing with the physical model, we noticed a few things that changed our NetLogo model. Sometimes the drops would slow down and not reach the bottom of the board because they were losing liquid to their trails. Sometimes the paths of two drops would merge. We added functions to our simulation to account for the changes we saw in the real world.
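The liquid-loss adjustment can be expressed as a drop shrinking each step and stalling when it runs dry. This Python sketch is our reading of that change, with an assumed loss rate, not the literal NetLogo code:

```python
def fall_step(drop, loss_per_step=0.05):
    """Advance one falling raindrop, shrinking it as it leaves a trail.

    The drop's speed scales with its remaining size, so a drop that has
    shed too much liquid stalls before reaching the bottom -- matching
    what we saw on the physical window. Returns True while still moving.
    The loss_per_step value is an assumption for illustration.
    """
    drop["size"] -= loss_per_step   # liquid left behind in the trail
    if drop["size"] <= 0:
        drop["size"] = 0
        return False                # drop has dried up mid-pane
    drop["y"] -= drop["size"]       # bigger drops fall faster
    return True
```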

Link to Video:
http://youtu.be/QjRYfo38O3Q

**The Behavior Space and Netlogo code are on the coursework drop box**

Kevin, Demetric, and Enky’s BiFocal Model: Sound and Distance

Brainstorming: The three of us were slightly confused about which project to do. Science, after all, is a really broad subject, and our model had to accomplish something relatively small. We were stuck between making an animal behavior simulator, a chemistry simulator, or a physics simulator. Then Demetric pitched an idea related to sound and the measurement of sound behavior, and we agreed on measuring distance.

 

We programmed a quick model on Sound and Distance.

 

Our model tests the sound-intensity inverse-square law, I ∝ 1/r², where r is the distance between the sensor and the source of the sound. As the distance increases, the intensity of the sound falls off with the square of the distance. This makes intuitive sense: a person screaming right next to your ear produces a much louder sound than the same scream heard from some distance away. Our model attempts to replicate this phenomenon. We had some initial problems distinguishing linear from exponential behavior, but we resolved most of these issues in time.
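The relationship is easy to check numerically. A small Python sketch (our actual model is in NetLogo; the source strength and reference intensity here are arbitrary) of the inverse-square law, plus the standard conversion of intensity to a decibel level:

```python
import math

def intensity(r, source=1.0):
    """Relative sound intensity at distance r from a point source: I = source / r**2."""
    return source / r ** 2

def level_db(i, i0=1e-12):
    """Sound level in decibels relative to a reference intensity i0: 10 * log10(i / i0)."""
    return 10 * math.log10(i / i0)

# Doubling the distance quarters the intensity:
# intensity(1) == 1.0, intensity(2) == 0.25
```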

 

Our construction: we made the sound source slidable away from the sensor, so it can be moved to any distance. The lines and numbers on the acrylic ruler are not arbitrary; they correspond to inches on a real ruler. The data from the sensor ports over to NetLogo, and the program has two settings: one simulates how the data is SUPPOSED to behave, and the other takes live data routed from the sound sensor through the GoGo board into the computer and pipes it into NetLogo.
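The two settings amount to choosing a data source each tick. This is a hypothetical sketch of that switch in Python (the real plumbing goes through NetLogo's GoGo extension; the function names here are illustrative):

```python
def next_reading(use_sensor, distance, read_sensor=None):
    """Return the next intensity value for the model.

    If use_sensor is True, pull a raw value from the GoGo board via the
    supplied read_sensor callable; otherwise compute the ideal
    inverse-square value 1 / distance**2 for the simulation setting.
    """
    if use_sensor:
        return read_sensor()
    return 1.0 / distance ** 2
```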

 

The formula for the sound sensor is s = 530 − 8d³, the best-fit formula for the data we collected. The value drops faster the further we move from the sensor: it drops a little at first, then a lot as the distance increases. A d² term was not steep enough to capture that drop, so we used d³ to accurately predict the sound sensor measurements.
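The fitted formula in code form (matching the text above; d is the distance in inches and s the raw sensor value):

```python
def sensor_value(d):
    """Best-fit sensor reading at distance d: s = 530 - 8 * d**3.

    The cubic term captures how slowly the value drops near the sensor
    and how sharply it falls at larger distances.
    """
    return 530 - 8 * d ** 3
```

For example, the reading falls by only 8 between d = 0 and d = 1, but by 296 between d = 3 and d = 4, which is the steepening drop the cubic was chosen to capture.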

 

Movie Link to Rapidshare:

https://rapidshare.com/files/2860368207/IMG_0301.MOV

 

Some pictures of the process

formula

Bifocal Model

We came up with the idea of making a bifocal model that would read ultraviolet light and represent the data in a NetLogo model showing how UV affects the human body. We decided to make an accessory one could wear outside that gives immediate feedback on the amount of sunlight you are being exposed to. This real-world data on sun exposure automatically changes parameters in the NetLogo model, where turtles arranged in a human body shape change color as the skin cells are affected by UV light.

Inspired by the recent British royal wedding and its wacky hat designs, we thought it would be fun to make the sensing device look like a hat.

Before programming the NetLogo model, we did some research on ultraviolet sensing and sunscreen protection. Based on this research, the variables we considered for the model include:
1. Time of exposure
2. Intensity of light
3. Amount of sunscreen actually applied (depends on phototype and area of the body)
4. Area of the body
5. Type of skin

We divided the body into the following areas by sensitivity:
- face and neck
- arms and torso
- legs

We were also inspired by how SPF is calculated from measured data: SPF is the ratio of the minimal erythema dose (MED) on sunscreen-protected skin to the MED on unprotected skin.

To simplify the above model and translate it into an agent-based formula, we assume three basic skin types:
- 1. too sensitive (represented by “ts” in the model), 2. sensitive (represented by “s”), and 3. less sensitive (represented by “ls”)
We also assume three major levels of skin protection, represented by numbers to distinguish them:
- 3 = best protection, 2 = normal protection, 1 = least protection

Based on these assumptions, for the “ts” skin type the best protection is “3”: if people with “ts” skin use the strongest protection, their skin can endure twice as much ultraviolet light as it could without any protection. If they do not use the most appropriate protection, the protective effect decreases considerably, so that a person with too-sensitive skin who uses the weakest protection, “1”, is effectively not protected at all.
For the other skin types, protection stronger than needed has the same effect as the appropriate one; for example, for people with sensitive skin, although “2” is the best protection, “3” has the same effect.
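One way to encode these rules is as an “effective exposure” multiplier looked up per skin type and protection level. This Python sketch is our reading of the rules above, not the literal NetLogo code; the 0.75 partial-effect value is an assumption:

```python
# Weakest protection level that is fully effective for each skin type.
REQUIRED = {"ts": 3, "s": 2, "ls": 1}

def exposure_factor(skin_type, protection):
    """Multiplier applied to incoming UV for a skin type and protection level.

    - Appropriate (or stronger) protection lets the skin endure twice
      the UV, i.e. halves the effective exposure.
    - Protection two levels too weak acts like no protection at all
      (e.g. "ts" skin with protection 1).
    - One level too weak gives a partial effect (0.75 is an assumed value).
    """
    required = REQUIRED[skin_type]
    if protection >= required:
        return 0.5
    if required - protection >= 2:
        return 1.0
    return 0.75
```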

Making the physical object was a lot of fun. We looked into several hat designs, and our favorite inspiration was the following:

Unfortunately we were not able to get an ultraviolet light sensor, so we used a regular light sensor as a mock-up. To make it work for our project, we set thresholds for direct sunlight versus other forms of light, as well as different thresholds for when sunlight affects different parts of the body.
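In code, the thresholding idea looks like the sketch below. The cutoff numbers are made up for illustration; the real thresholds were tuned against our particular light sensor:

```python
def classify_light(reading, direct_sun=600, indirect=300):
    """Bucket a raw light-sensor reading into exposure categories.

    The cutoffs direct_sun and indirect are hypothetical values -- in
    practice they depended on the sensor and on which body part the
    reading was standing in for.
    """
    if reading >= direct_sun:
        return "direct sunlight"
    if reading >= indirect:
        return "indirect light"
    return "shade"
```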

To build the physical structure, we hooked a headband onto a round piece of foam board that we had pre-cut with the laser cutter. We placed a GoGo board on the foam board, covered it, and connected a light sensor on top. After that, we decorated the whole piece to make it look like a hat.

We had some challenges connecting the board to our NetLogo model using the GoGo board extension, but in the end we got it to work. A pre-made tutorial for connecting the GoGo board and NetLogo would have been useful. After connecting the board, we also discovered that as the sun's intensity increases, the output of the light sensor decreases, exactly the opposite of the logic behind our model: we had assumed that an increase in light intensity would increase the output NetLogo received from the GoGo board. This discrepancy called for some changes, not very revolutionary ones, in our model.
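The fix for the inverted readings is essentially a one-line transformation before the value reaches the model. A Python sketch, where the full-scale reading of 1023 is an assumed calibration constant for the analog port:

```python
def to_intensity(raw, max_reading=1023):
    """Convert the sensor's inverted output to an intensity-like value.

    Our light sensor reported lower numbers in brighter sun, so we flip
    the scale: brighter sun -> larger model input. max_reading is a
    hypothetical full-scale value, not a measured one.
    """
    return max_reading - raw
```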

Another thing we dealt with during coding was finding the best order for several “if” statements. Because of the nature of the model, the output received from the sensor has to be checked by numerous “if” statements, and without a well-defined order for these statements we could not get satisfying output from the NetLogo model. To solve this problem, we went back to drawing a flow chart, which was really helpful and let us figure out an order that made the model work properly. In the end, we learned that although some methods help you reach the right result in fewer iterations, it is impossible to avoid iteration and revision entirely in a task like programming.
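The ordering issue is the classic one with overlapping range checks: they must run from most to least restrictive, or every branch matches and the last one wins. A minimal Python illustration of the pattern our flow chart produced (status names and cutoffs are hypothetical):

```python
def sun_status(exposure):
    """Map cumulative sun exposure to a skin status.

    The checks run from the most severe (highest) range downward. With
    plain, unordered ifs checking lowest-first, several branches would
    match and the final assignment would win -- the kind of bug our
    flow chart helped us untangle.
    """
    if exposure >= 450:
        return "burned"
    elif exposure >= 250:
        return "at risk"
    elif exposure >= 100:
        return "tanning"
    else:
        return "safe"
```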

Here are some screenshots of our NetLogo model and programming.

"BehaviorSpace results (NetLogo 4.1.1)"
"sun danger6.nlogo"
"experiment"
"05/06/2011 05:40:18:911 +0430"
"min-pxcor","max-pxcor","min-pycor","max-pycor"
"-16","16","-16","16"
"[run number]","1","2","3","4","5","6"
"sunexpouser-rate","0","100","250","350","450","700"
"[reporter]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]"
"[final]","650","0","0","0","0","0"
"[min]","0","0","0","0","0","0"
"[max]","650","0","0","0","0","0"
"[mean]","433.3333333333333","0","0","0","0","0"
"[steps]","2","2","2","2","2","2"

"[all run data]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]","count turtles with [statuse = 1 ]"
,"0","0","0","0","0","0"
,"650","0","0","0","0","0"
,"650","0","0","0","0","0"

 

Our model could be part of a learning activity: people could learn what UV is, how it works, and what skin type they have, in order to understand the risks they face while being exposed to the sun.

 

Here is the link to the video:

http://screencast.com/t/4MwBev73ccg

 

Shima, Daniela, Jain
