Sam King and Rachel Lopatin
We started off our final project brainstorm by determining that we did NOT want to make a product that taught kids about science. Rachel in particular noticed that most of the educational tools and scaffolding discussed over the course of the quarter were designed to teach kids math and science concepts. What about the “fuzzier” stuff? Rachel decided music would strike a good balance because it can be very mathematical (rhythm, etc.) but also has an emotional, artsy component that math and science don’t. Sam agreed to work with her on something related to music.
We brainstormed the first week. We were inspired by the Mario Teaches Typing game and wondered if we could make a similar game, but use notes to control Mario instead of keyboard keys. We thought that trying to reimplement the Mario game with music, however, would be pretty hard. We also thought about an interactive computer interface where you could take care of a virtual plant over time and play different notes to water it, give it fertilizer, etc. That also led into our final idea: instead of trying to create a particular game or interface where you use music to make stuff happen on the screen, create a general game controller that used pitch instead of the normal controls that can be used for any game.
We decided that we’d need to work on a few components: the pitch recognition library so that we could determine what note the user actually inputs, a program that would take the note and simulate a key press, a window so that the user could set the mappings between notes and controls, and a feedback window to let the user know how close he/she got to a particular note (this would be based on the input frequency).
Sam already knew about AutoHotkey, which simulates keypresses, and wrote a nice script for it over the weekend. Check. Because we didn’t have the background knowledge to create our own pitch detection library (and because time was short enough that writing one from scratch would take 2 weeks by itself), we searched for open source libraries. Surprisingly, despite pitch recognition being used even in simple mobile apps, we couldn’t find code online…or we found code but no documentation, so we couldn’t figure out how to use it…or the library hadn’t been updated in 3 years and we weren’t even sure it was correct. Finally we decided on FMOD, a framework that contains an API for pitch detection and manipulation. Awesome! The library was written in C, so our GUI (i.e., the windows to let the user set the mappings and to give the user feedback on notes) would also have to be written in C or C++ so that we could combine the library and the GUI into one application. Not so hard, right? We did some Google searching, and pretty much everyone said that Qt, a framework designed to streamline the prototyping and testing of GUIs, was the way to go. We were on our way; this would be smooth sailing…
…until we realized that just because Qt was the best option out there, it did not mean it was actually good. The documentation existed, but none of it was designed for complete beginners like us. We spent 2-3 days installing the software (Rachel’s computer also conveniently died at this point, so we had to get a loaner) and trying to integrate it with Visual Studio 2010. The Qt website explicitly said that the newest version of Qt was compatible with VS 2010, but we had linking problems, unrecognized libraries, required .dlls that just didn’t get imported, etc. etc. etc. Meanwhile, Rachel was able to use Qt on its own–kind of. The plus side was that it was really easy to use Qt’s “Design” feature to create a GUI window and drag and drop the buttons and text labels you want onto it. The minus side was that it didn’t compile like it was supposed to, so no GUI window ever launched. Rachel spent 2 days reading forum posts. First, she realized that MinGW, a Windows port of the GCC compiler toolchain, was required. It was listed as an option in the Qt installation, but it was under the “miscellaneous” tab, and nothing ever said it was necessary to make Qt work. Grrrrrrr. Okay, so once MinGW was installed, it still didn’t work. Getting an error message like “Error 2: File not found: ” was not helpful, especially since there weren’t any files missing and it was a compiler issue. Eventually Rachel, frustrated, decided to just start another project from scratch to see if that would compile. This one worked! What happened? We still have no clue, but our best guess is that the original GUI was created under the “Quick project” option, which screwed something up. By contrast, the second project was created under the “New widget” option. Beware!!! Very subtle bugs!!!
Sam was making no progress on the Visual Studio end, so decided to uninstall VS2010 and install VS2008 instead. Lo and behold, 2008 works very smoothly with the Qt plugin. We’re not sure why 2010 was so broken; it might actually be Microsoft’s fault rather than Qt’s. Even so, a step-by-step configuration guide to install the Qt plugin and get it to work with VS2010 would have been helpful so we didn’t waste 10 hours troubleshooting it.
From there, it was a simple matter of coding the interfaces. We already had the “look” of the GUIs through the drag-and-drop feature, but the buttons weren’t interactive yet. By Googling a LOT of tutorials and searching through layers and layers of (mostly unhelpful) documentation on the Qt website, we were able to write functions that retrieved text from the input box, and we learned about signals and slots, Qt’s version of callback functions. While we understand why signals/slots are a good design strategy, it was confusing at first to figure out how they worked, and in particular the syntax to invoke them.
The final challenge was figuring out how to draw shapes on the screen programmatically. We needed some sort of dynamically changing indicator in our feedback window so we could show people what note they actually hit (like, 2/3 of the way from an A to a B). The drag-and-drop interface was wonderfully easy to figure out for the buttons in the Qt library, but you couldn’t drag a circle onto the screen, for example, and once placed on the screen it was static (it couldn’t be animated or moved around). Because putting a circle on the screen was a huge headache to program (although we thought it was the superior option from a design standpoint), we settled for a horizontal line with labeled increments for the different notes, plus a bold vertical line and a label that would move to show you what note you just hit.
After that, Sam did some awesome ninja coding to get the library to work in conjunction with the GUI, and we did some testing and searched for optimal programs to showcase our mappings. We wanted programs where you didn’t have to move too fast, because it’s pretty hard to get the pitch correct at first. We chose a Neopets game and a paint program. Both had one-key shortcuts that we could control with pitch. We would have liked to be able to support two-key controls, but AutoHotkey doesn’t support them without a lot of tweaking. We decided it was beyond the scope of this project but would be a cool extension.
Overall, we thought it was a pretty cool project. Although we didn’t get a chance to test it with any kids, we did notice that Sam, who had no musical background, got WAY better at hitting pitches after practicing with our program for a couple days.
tl;dr Qt sucks but we managed to make a program that lets you control computer programs using pitch instead of the keyboard.
Finished program pics:
Sketching and design along the way:
My opinion is that there are too many relevant factors determining student performance to capture in an accurate model. Replicating a certain general pattern (like a link between SES and grades) would fail to validate a model unless the model also had high predictive ability.
This is yet another post the blog seems to have eaten. (The big deal here is that I typed all of them directly into the blog textbox, so I do not have copies on my hard drive to reference and upload.)
I don’t remember exactly what I said, but I think the thrust of it was that I was intrigued that there wasn’t any talk of programming itself as a good tool by which to learn about levels. Decomposing and keeping your datatypes clean is basically necessary to get a program to work, and there are lots of bugs that pop up across levels. Debugging them can be painful but will soon teach you to think flexibly and to be aware of how aggregate activity on a low level can affect a higher one, &c. Though the difficulty the CS students faced in explaining traffic is a little disheartening, it only goes to show how difficult a concept it is to generalize with fluency.
The point of the paper, that we should move toward new content as well as new media in classrooms, was a very good one. And instruction about levels seems like it would actually help many people gain facility with new media itself, which is a nice little feedback loop.
Also, my initial take on the “I hear that’s why there’s so much traffic on the highways in Los Angeles…” comment was that it was a joke, a parody of the misconception. Still possible?
I found it somewhat disappointing that, for all his tribute to McLuhan, Kay didn’t attempt this paper in a more interesting, less linear style. It’s always funny to read an analytical written argument about the limitations of analytical written arguments. He makes his point well, but I think it would’ve been even better demonstrated. I’m not saying that he ought to have gone ahead and implemented the interface he dreams up towards the end of the paper, the cure to dead and static writing, “which can only argue”:
“Facts” retrieved in one window on a screen will automatically cause supporting and opposing arguments to be retrieved in a halo of surrounding windows. An idea can be shown in prose, as an image, viewed from the back and the front, inside or out. Important concepts from many different sources can be collected in one place […] Computers can go beyond static representations that can at best argue
But I do kind of wish he had tried to do a little service to his opposition, and even more so, I wish that the graphical elements in the paper were better integrated into the argument. He includes a graph of unrelated data for the sake of illustrating the form. Why not make its content synergize with the argument? Or at least try to incorporate it.
The interview transcripts were extremely interesting. I wonder how concepts break down differently in different cultures; it might be an interesting lens through which to compare them.
It’s also impressive that the children change their answers and formalize their concepts in response to inquiry: the interviewer can very visibly provoke learning from within, without having to provide any content of their own, only by asking, “Why?”
It’s also very interesting that, as the concept gets more abstract, fluency in its use and application is actually, at least for a time, inversely correlated with the ability to give its definition. This could be a general reflection of language: the better you understand something, the more appreciation you have for its complexity. Any complete definition is false.
Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.
But if any one of these is missing in an individual member of a species, we’re not going to say that it isn’t alive. A man who’s lost his reproductive capacity, for instance. Or a plant with a metabolic condition inhibiting its growth. &c. There is also the complicated question of whether or not viruses are alive but it’s likely too tangential.
Apparently the blog ate half of my posts, which is upsetting. At least, I can’t find most of them when I click the “mine” tab.
A quick summary of my reaction to the early Papert:
I thought he did a rather nice job of explaining the way in which a child’s mind is different from an adult’s. Particularly on the issue of conservation, with the eggs and the egg cups. He pointed out that saying that the child is “merely” confusing “space” with “number” actually reflects serious cognitive differences. Those concepts aren’t fully-formed yet, they aren’t fluent, they can’t be generally applied.
I happen to think I remember when my preschool teacher conducted one of these kinds of interviews. My one explanation for this is that I may have been enough of a teacher’s pet to remember every time I was corrected by a teacher before the age of 10. Once, when I was 9, Ms. Mullin told me not to intentionally exclude a girl from my group of friends. Once, at the age of 8, I said that Nick Niu was older than me because his birthday came after mine. And once, at the age of 4 or maybe 3, in Montessori, I was being Piaget-ed. By then I had certainly learned to count, and I think if I had understood the egg problem as a counting problem there would’ve been no problem (as a proportions problem, maybe differently). But she was doing this with liquid: first in a shallow pan, then in a tall and narrow cylinder. Liquid isn’t physically discretizable into countable units. It’s a continuous quantity. Volume. Anyways. I was very surprised to have gotten the question wrong, because I was sure I was right.
The possibility of fab labs at schools across America makes the most sense when we consider that in very recent history most American high schools had wood shop and home economics classes. Wood is now a nostalgic medium, which is why it’s being phased out of the picture, at least on the scale of high school wood shop. But with laser cutters and 3-D printers, you can start to produce modern, exciting, and innovative things with limited start-up cost. There seems to be a hole in our educational system for them to fill. Hopefully the parallel hole in our budget doesn’t preclude the filling.
On Abrahamson, I’m more or less convinced about the power of embodied interaction in education. His proportions interface could very well, after it is introduced, be used as part of a fractions curriculum to help solve equations. A new abacus. Furthermore, it seems like a relatively challenging task to think of the different ways we could concretize the myriad of concepts we’re used to thinking of as “abstract.”
Rachel is a precocious 8-year-old who loves to draw and read. When we met with Rachel to talk about designing her artifact, we asked her to draw whatever came to her mind. She first started with things that she might like to have, such as a robotic sort of hand that could turn the pages of her books, or a hat covered in candy:
But when we talked to her a little more, she revealed that she most loved to tell stories. She drew for us an elaborate world comprised of egg-shaped characters called Puggles and an exquisitely detailed world in which a Puggle family would live.
What sort of artifact might we design for Rachel to help her tell stories, we wondered? She had mentioned that her favorite toy was a card game called Whoonu, wherein players try to guess their fellow players’ favorite items by presenting them with candidates to be ranked. We thus decided to make a card game, called Storyboards, wherein the goal is to present not items but stories.
One central player, we decided, would have a “story card,” with some evocative image to serve as premise for a story. The other players would “complete” that story – how, we were at first not sure. We thought of maybe doing an ongoing sort of narrative thread like the telephone game, but dismissed that in favor of the simplicity and flexibility of starting each round afresh. We were uncertain with which content the players would complete stories – with an item? a phrase? a full story? In the end, we decided to leave this unspecified, to make the game more open-ended. Players would write or draw their proposed resolutions/next developments/punchlines on some scratch paper and hand them to the first player to be ranked. He whose ending was the favorite would get to draw next. We also toyed with the idea of computationally enabling the cards, e.g. by fiducials on their backs, such that the story once specified by the players could be read into the computer and animated, but decided it would be better to just make a card game so that it would be self-contained.
We based the cards upon a lovely, thick paper stock. With some experimentation at the controls of the laser cutter, we figured out how to etch images upon cards while simultaneously cutting them from the stock. For the images themselves, we recruited our friends within our dorms to draw all manner of strange and amusing images.
We designed a box in Illustrator and then cut its walls from wood again using the laser cutter. The walls were then assembled using wood glue. The box was designed with two components for story cards and blank cards (so that Rachel could define her own premises), as well as a space to hold pencils. (We could not find golf pencils on short notice, so we took regular full-length pencils, snapped them in half, and then sanded down the broken end.)
Rachel was overjoyed to be presented with her game. Her dad Ira has since told us that he thought that our product was the “most market ready and fully designed product of the class” and that she loves playing with it. While her family has an entire cabinet devoted to board games, Storyboards occupies a space of honor upon their table. Our design decisions to make Storyboards self-contained and constructed with quality materials paid off, in that it costs little to play or to bring with you, to keep in your thoughts or on your kitchen table. It is an objet d’art.
Instead of building a model, we built a game. Both users place their turtles in any of many pre-made worlds of “object” turtles and “recruits.” Each user gets a set number of “dummy” turtles and “smarty” turtles. “Dummy” turtles bounce off obstacles and walls, converting every recruit they touch into a dummy of their color. “Smarty” turtles recruit in the same manner, but they make a direct path for the closest recruit.
We were inspired first by “Liquid Wars,” a game in which two groups of turtle-like objects flock together and battle. If two turtles meet head-on, there is no effect. If one meets the back of the other, the turtle facing away is converted to the color, and placed under the control, of the one pursuing it. The game is over when the turtles are all of one color, all on the same team. We ended up implementing a version of this. The biggest issue with this game is that, while the flocking behavior seems to be determined somewhat emergently, most of the gameplay is very top-down. The game we’ve created is better in this respect. Because there is no live control over any of the turtles, the outcome of the game is largely emergent: the synergistic result of the limited, local rules of the smarties & the dummies.
We named the game “George” in homage to its muse, George Hokkanen.
Jenny Moryan, Darri Stephens, Jamie Diy, Nicole Roach
After only two very full weeks of hard work, TuneTrain has become a fabulous tangible toy for helping youngsters learn about rhythm and create their own compositions. During our final project presentations, we received great feedback; users were enthusiastic about creating their own compositions and loved recording their final creations. The excitement surrounding TuneTrain was evident, but a few questions were also raised and suggestions given.
A few users misunderstood how neighboring notes are played. Since the train reads notes using a touch sensor, the user cannot hear the difference between two neighboring short notes and one long one. While we were hoping these combinations would help users learn about mathematical equivalencies (i.e., that two quarter notes on the same pitch sound the same as one half note), the visitors expected the track to read like sheet music and were surprised to learn that consecutive notes would automatically combine. If the note pieces had each been slightly slimmer, and we had made room between each one, we could have alleviated such confusion.
The other issue we came across was the sensitivity and feedback of the light sensor in changing the pitch of notes. Users felt there was a disconnect between their hand positions (relative to the light sensor) and the pitch being played. Part of this disconnect comes from the fact that once the note is playing (the touch sensor is depressed), the pitch associated cannot be changed. We believe the other part comes from a slight delay between the hand motion and the change in pitch of the note.
The spacing between notes and relationship between the light sensor and pitch are two important directions for future work on TuneTrain. Other additions could include our initial idea of incorporating the sounds of multiple instruments via musical instrument “cards.” We’d love to explore the idea of new track mixing features. Due to the timeframe of this project, we were not able to test our prototype. We would love to test TuneTrain with children to see how our intended users actually interact with the toy. How do the children want to use it? What misconceptions do they have about rhythmic patterns? Toward which activities from the curriculum do they gravitate? How do they use TuneTrain to express themselves creatively? How can TuneTrain be improved to scaffold their learning? These are all questions that would guide future revisions.
For more details on the arduous process of making this toy, please see the post-in-progress.