Tag Archives: learn

Processing Data Visualizations

closeup of CPU chip

I’ve seen a lot of interesting data visualizations lately but have struggled to figure out how to visualize my own data.  It seems like there is a vast chasm between creating pie charts in Excel and Hans Rosling’s TED Talks.  Then I stumbled upon Processing.

Processing was used to create the genetics simulation I described in an earlier post.  After looking into it some more, I learned that Processing was developed out of a project at MIT’s Media Lab.  It is an object-oriented programming language conceived as a way to sketch out images, animations and interactions with the user.

Examples of Processing projects include everything from a New York Times data visualization of how articles move through the internet and visually representing data in an annual report to more esoteric and artistic works.

To get started, download the application at http://processing.org and go through some of the tutorials on the site.  There are lots of examples included with the download so you can also open them up and start tweaking and hacking them, if that’s your preferred method of learning.  Once your code is complete, or after you’ve made a minor tweak, click on the play button to open a new window and see how it looks.  Once you’ve completed your project, you can export it as an applet, which can be uploaded to a web server, or as an executable file for a Mac, Windows, or Linux computer.

I’ve been through the first half-dozen tutorials and am to the point of making lines and circles dance around.  I can even make the colors and sizes vary based on mouse position.  I have also opened up some of the more advanced examples and started picking away at them to see what I can understand and what I still need to learn more about.  Once I can import data from an external source, it will be really exciting to see the different ways to represent it.
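To give a sense of what this level looks like, here is a minimal sketch of the sort the early tutorials cover (an illustrative example, not code from the tutorials themselves): a circle follows the mouse, with its size and color varying with the mouse position.

```processing
void setup() {
  size(400, 400);   // open a 400 x 400 pixel window
  noStroke();
}

void draw() {
  background(0);    // clear to black each frame
  // map() rescales the mouse position into a new range
  float diameter = map(mouseX, 0, width, 10, 80);
  float redness = map(mouseY, 0, height, 0, 255);
  fill(redness, 100, 255 - redness);   // color shifts as the mouse moves
  ellipse(mouseX, mouseY, diameter, diameter);
}
```

Paste it into the Processing editor and hit play; the draw() function runs over and over, once per frame, which is what makes the “dancing” possible.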

I haven’t had a foreign language learning experience in a while.  I am learning (and re-learning) many valuable lessons as I try to express myself in this new language.  Not surprisingly, I’m finding that I need a balance between instruction (going through the tutorials) and practice / play (experimenting with the code I’m writing or hacking together).  I’m also a bit frustrated by my progress because I can see what can be done by fluent speakers (see examples, above) but am stuck making short, choppy utterances (see my circles and lines, which really aren’t worth sharing.)  I plan to both work my way through the basics (L+1) as well as dabble with some more advanced projects (L+10) to see if I can pull them off.  If not, I’ll know what to learn next.

Fortunately, I have one or two friends who are also learning Processing at the same time.  They are more advanced than me (in programming languages, but I hold the advantage in human languages), but it has been helpful and fun to bounce examples and ideas off of one another.  We plan to begin a wiki to document our progress and questions as they arise — a little like a student’s notebook where vocabulary and idioms are jotted down so they can be reviewed later.

Watch for more updates as projects get pulled together as well as notes on other ways to visualize data in the near future.

Leave a comment

Filed under Projects

Genetics for Kids

test tubes

Over the past ten or twenty years, the news media has become saturated with stories about genetics.  But do you really understand how genes interact?  A new genetics simulation being developed at Ohio State can help.

The simulation begins with a series of cartoon faces from which the user can choose to populate the gene pool for the next generation.  (The term “parents” is used, but more than two can be selected.)  This process can be repeated several times to create successive generations of cartoon faces.

Over 50 “genes” are incorporated into the faces (affecting everything from the dimensions of the head and other features to how asymmetrical the face is and whether the eyes follow your mouse or not) and the genes of the “parents” interact to produce the subsequent generation.  You can also adjust the amount of mutation, which leads to a wider (or narrower) variety of offspring.
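I don’t know how the simulation is actually implemented, but the basic mechanism is easy to sketch.  In this Python illustration (my guess at the approach, not the simulation’s code), a child inherits each gene from a randomly chosen parent, and a mutation setting adds a little noise:

```python
import random

def make_child(parents, mutation=0.1, rng=random):
    """Blend a child's genes from any number of parents.

    Each gene value is copied from one randomly chosen parent,
    then nudged by a random amount scaled by the mutation setting.
    """
    n_genes = len(parents[0])
    child = []
    for i in range(n_genes):
        donor = rng.choice(parents)                  # pick a parent per gene
        value = donor[i] + rng.uniform(-mutation, mutation)
        child.append(min(1.0, max(0.0, value)))      # keep genes in [0, 1]
    return child

# Three "parents", each with five genes (say, head width, eye size, etc.)
parents = [[0.2, 0.8, 0.5, 0.1, 0.9],
           [0.6, 0.4, 0.5, 0.7, 0.3],
           [0.9, 0.1, 0.5, 0.5, 0.5]]
offspring = [make_child(parents, mutation=0.05) for _ in range(4)]
```

Turning the mutation knob up widens the spread of the offspring; turning it to zero means every gene value comes straight from one of the parents.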

Another interesting feature is the ability to view genotypes.  This allows you to view a graph under each offspring representing which genes come from which parent.  You can also choose two faces and drag them to the Gene Exam Room to view to what degree each gene is represented in each face.  This also allows you to see the effect of each individual gene.  You can even increase or decrease the representation of each gene to see how it changes each face.

What can you (or your students) do with this simulation?  Imagine the faces are puppies and you want to develop a new breed that is cute (or whatever other trait you’re interested in.)  This simulation clearly demonstrates how breeders (of animals, plants, etc.) select for certain traits and refine them over generations.

Or imagine the choices you make in the simulation are not choices, but represent the effects of the environment.  For example, say the Sun grows dim, giving people with big eyes that can see in low light an advantage over people with small eyes.  This advantage results in a higher percentage of offspring surviving and a wider representation in the gene pool.  What effect would this have after several generations?
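That thought experiment is itself easy to turn into a toy simulation (a Python illustration of the idea, not the simulation’s own code): make survival more likely for bigger eyes and watch the population average drift upward over the generations.

```python
import random

def next_generation(eye_sizes, rng=random):
    """One generation of selection: bigger eyes survive dim light better."""
    # Survival chance grows with eye size (a gene value in [0, 1]).
    survivors = [e for e in eye_sizes if rng.random() < 0.2 + 0.8 * e]
    if not survivors:
        survivors = eye_sizes  # degenerate case: nobody survived
    # Survivors reproduce back up to the original population size,
    # with a little mutation.
    return [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.02)))
            for _ in eye_sizes]

rng = random.Random(1)
population = [rng.random() for _ in range(200)]
for generation in range(10):
    population = next_generation(population, rng)
```

After a handful of generations the average eye size climbs steadily — the same dynamic students can watch play out in the cartoon faces.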

Think of how much richer students’ discussions of designer pets and natural disasters will be after they have “experienced” the process instead of just reading about it.  In addition to genetics, this simulation can also stimulate interest in probability (how likely offspring are to have certain characteristics), design (ideas behind evolutionary design were the impetus for the interface), as well as all of the social issues behind decisions we are now able to make regarding genetics.

In terms of ESL teaching, I think giving students something interesting to do and then having them talk or write about it is a great way to get them to practice English.  This genetics simulation is simple but engaging enough that it could generate lots of interesting ideas for students to talk about.

1 Comment

Filed under Resources

Edupunk Eye-Tracking = DIY Research

One of my favorite presentations at the 2011 Ohio University CALL Conference came from Jeff Kuhn, who presented a small research study he’d done using an eye-tracking device that he put together himself.

If you’re not familiar with eye-tracking, it’s a technology that records what a person is looking at and for how long.  In the example video below, which uses the technology to examine the use of a website, the path that the eyes take is represented by a line.  A circle represents each time the eye pauses, with larger circles indicating longer pauses.  This information can be viewed as a session map of all of the circles (0:45) and as a heat map of the areas of concentration (1:15).

This second video shows how this technology can be used in an academic context to study reading.  Notice how the reader’s eyes do not move smoothly and that the pauses occur for different lengths of time.

Jeff’s study examined the noticing of errors.  He tracked the eyes of four ESL students as they read passages with errors and found that they spent an extra 500 milliseconds on errors that they noticed.  (Some learners are not ready to notice some errors.  The participants in the study did not pause on those errors.)
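The arithmetic behind a measure like that is simple once the tracker hands you fixation data.  A sketch in Python (the data format here is invented for illustration, not Ogama’s actual output):

```python
def dwell_time(fixations, region):
    """Total fixation time (ms) landing inside a screen region.

    fixations: list of (x, y, duration_ms) tuples from a gaze tracker.
    region: (x_min, y_min, x_max, y_max) bounding box around a word.
    """
    x0, y0, x1, y1 = region
    return sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)

# Hypothetical gaze samples while reading a sentence containing an error
fixations = [(40, 100, 220), (95, 102, 180), (150, 98, 700), (210, 101, 200)]
error_word = (130, 80, 180, 120)    # bounding box of the misspelled word
clean_word = (80, 80, 120, 120)     # a comparable correct word

extra = dwell_time(fixations, error_word) - dwell_time(fixations, clean_word)
# A reader who noticed the error lingers longer on it (here, 520 ms extra)
```

Comparing dwell times on error words against comparable clean words is essentially how an extra few hundred milliseconds of “noticing” shows up in the data.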

The study was interesting, but the hardware Jeff built to do the study was completely captivating to me.  He started by removing the infrared filter from a webcam and mounting it to a bike helmet using a piece of scrap metal, some rubber bands and zip ties.  Then he made a couple of infrared LED arrays to shine infrared light towards the eyes being tracked.  As that light is reflected by the eyes, it is picked up by the webcam and translated into data by the free, open-source Ogama Gaze Tracker.

So, instead of acquiring access to a specialized eye-tracking station costing thousands of dollars, Jeff has built a similar device for a little over a hundred bucks, most of which went to the infrared LED arrays.  With a handful of these devices deployed, almost anyone could gather a large volume of eye-tracking data quickly and cheaply.

Incidentally, if you are thinking that there are a few similarities between this project and the wii-based interactive whiteboard, a personal favorite, there are several: Both cut the price of hardware by a factor of at least ten and probably closer to one hundred, both use free open-source software, both use infrared LEDs (though this point is mostly a coincidence), both have ties to gaming (the interactive whiteboard is based on a Nintendo controller; eye-tracking software is being used and refined by gamers to select targets in first-person shooters), and both are excellent examples of the ethos of edupunk, which embraces a DIY approach to education.

Do you know of other interesting edupunk projects?  Leave a comment.

5 Comments

Filed under Inspiration

Teaching with Google Images

canoes on google image search

In a recent meeting with the executive council of our student association, one of our class representatives suggested organizing a canoe trip.  Judging by the puzzled looks around the boardroom table, many students did not recognize this word.  So, I pulled up Google Images and did a search for canoe.  The results were similar to what you see above.  Instantly, students could understand the word and the discussion could continue.

I really enjoy the challenge of working with a group of students with a wide range of ability.  Using Google Image search is a good way to help level the playing field so that students can communicate with each other more efficiently.  If you have a projector and internet access in your classroom, images can be pulled up very quickly as a teaching aid.

A word of caution, though.  Be sure to set the Safe Search setting to “Use strict filtering” if you are doing a search in front of a whole class in order to reduce the chance of objectionable images appearing.  And be aware that even strict filtering is not 100% perfect.  So, if you are working with a group that is young or particularly sensitive to certain images, be ready to hit the back button immediately or, better yet, mute the image on the projector until the search comes up, preview the images, and then make the projection available to the class.

Once you begin using it, Google Image search is the kind of simple tool that you will wonder how you lived without.  While there are certainly benefits to having students define unknown terminology for each other, there are also times when you just want to define a term quickly and move on.  In these cases, an image search is worth a thousand words.

6 Comments

Filed under Resources

Gestural Interfaces and 2-Year-Olds

In the video above, a dad asks his son to draw something on a new iPad, the ubiquitous Apple tablet.  The 2-year-old clearly has some facility with the device as he casually switches between apps and between tools within the drawing app.  Interestingly (though not surprisingly for anyone with a 2-year-old), the boy also wants to use his favorite apps, including playing some pre-reading games and watching videos.  He very naturally fast-forwards through the video to his favorite part.  He also knows to change the orientation of the device to properly orient the app to a wider landscape format.

Although I like gadgets, I’m not a true early adopter.  I do carry a PDA — an iPod touch — which my 2- and 4-year-olds enjoy playing with.  It’s amazing how quickly they understand gestural interfaces, pinching, pulling and tapping their way from app to app.

While I don’t think that I need to rush right out and get my kids iPads so they don’t get left behind, (the whole point is that they’re easy to use anyway,) I do wonder about some of the interesting opportunities for learning on these devices: drawing, reading, and linking information.  Of course, they also do a lot of these things on paper which places far fewer limits on their creativity — instead of choosing from 16 colors in a paint program, they can choose from 128 crayon colors or create their own by mixing their paints.

In the end, this new technology is flashy and fun, but I’m not convinced that iPads and other tablets are essential tools that will give our kids and our students a clear learning advantage.  I sure would like one, though.

2 Comments

Filed under Inspiration

Building Blocks 2.0

pile of cell phones

If I told you we were going to play a game by stacking a bunch of smart phones and moving them around, you might get a picture in your head like the one above.  But there is actually a simpler, more fun way to go about this.

Last weekend, I discovered Scrabble Flash in the toy aisle of my local grocery store:

Each of the five game pieces is a small, location-aware block with a screen that displays a letter.  By rearranging the blocks, words are formed.  The blocks are all aware of each other, so they can tell you when you have them arranged to spell a word.  Several different games can be played with this remarkable little interface.  Apparently, Scrabble Flash was released in time for Christmas last year, but I didn’t notice it until now.  For about $30, I may have to pick this up for myself.
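The core trick is easy to sketch in code.  A simplified version of the check (in Python, with a toy word list standing in for a real dictionary) decides whether some arrangement of the blocks spells a word:

```python
from itertools import permutations

# Toy dictionary; the real game presumably ships with a full word list.
WORDS = {"cat", "act", "tac", "cart", "trace", "crate", "react"}

def spells_a_word(blocks):
    """Return a dictionary word some arrangement of the blocks spells, or None."""
    for arrangement in permutations(blocks):
        candidate = "".join(arrangement)
        if candidate in WORDS:
            return candidate
    return None

print(spells_a_word(["t", "c", "a"]))   # some ordering spells a word
print(spells_a_word(["z", "q", "j"]))   # no ordering does
```

The hardware’s contribution is knowing the physical order of the blocks; once that is known, checking arrangements against a word list is the easy part.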

When I first saw Scrabble Flash, I thought it might be a commercial manifestation of Siftables, a similar interface designed by an MIT student that I wrote about a couple of years ago after seeing this TED talk.  It turns out that Siftables are now Sifteo:

Both Scrabble Flash and Sifteo are block-like computers that are aware of the others in their set.  Scrabble Flash is not as robust, with only three games available on its monochrome displays.  But it is available now and the price is reasonable.  Sifteo blocks have full-color screens, are motion sensitive, and connect to a computer wirelessly, which means more games can be downloaded as they are developed.  But they won’t be available until later this year and I suspect the price will be higher than Scrabble Flash’s.

Is this the future of language games?  That would be a pretty bold prediction.  But clearly as we all become more accustomed to using apps on our smartphones, these kinds of “toys” will begin to feel like a very familiar technology.  Scrabble Flash is an affordable entry point, but I’m excited that Sifteo is actively seeking developers to create more games.  They already have several learning games but there is potential for many more.

Leave a comment

Filed under Resources

Visual Thesaurus

visual thesaurus word cloud

As a visual language learner myself, I really like the way Visual Thesaurus.com works.  Enter a word and synonyms, antonyms, and other related words appear on spokes around a hub.  Lines show relationships between the words (red dotted lines indicate antonyms, gray dotted lines indicate when a word is an attribute of another, is similar to another, is a type of another word, etc.) and definitions, color coded according to part-of-speech, fill a column to the right.
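Under the hood, a display like this is just a small labeled graph.  A quick sketch in Python (the entries are invented examples, not Visual Thesaurus data):

```python
# Each edge carries a relationship type, like the color-coded lines
# in the Visual Thesaurus display.
RELATIONS = [
    ("happy", "glad", "synonym"),
    ("happy", "cheerful", "synonym"),
    ("happy", "sad", "antonym"),
    ("cheerful", "cheer", "attribute"),
]

def neighbors(word, relation=None):
    """Words linked to `word`, optionally filtered by relationship type."""
    found = []
    for a, b, rel in RELATIONS:
        if relation is not None and rel != relation:
            continue
        if a == word:
            found.append(b)
        elif b == word:
            found.append(a)
    return found

print(neighbors("happy"))               # all the spokes around the hub
print(neighbors("happy", "antonym"))    # just the red dotted lines
```

Drawing the hub word in the center and its neighbors on spokes, with line styles keyed to the relationship type, gives you the essence of the interface.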

Thesauruses are very useful tools, but displaying results visually makes them even more so.  Other online thesauruses like Thesaurus.com organize search results in a more conventional way that is reminiscent of paper-bound versions: columns of words are grouped by part-of-speech and meaning.  Why not display these relationships in a way that makes them intuitive and more immediately obvious?  Thesaurus.com is also cluttered with lots of banner advertising and, interestingly, a link to Visual Thesaurus.com at the bottom.

In fact, I thought I had seen visual thesaurus-style search results somewhere else on Google, but all I’ve been able to find is a now-defunct Google module that seems to have been the basis for Visual Thesaurus.com.  Surely other applications could also benefit from a similarly visual approach, but I don’t know of many.

Visual Thesaurus.com is not free, but keep reading.  A subscription to the online edition is available for $2.95 per month or $19.95 per year while a desktop version is available for $39.95.  I’m not sure I use a thesaurus often enough to justify the expense, though it would be a nice resource to make available to students (group and institutional subscriptions are also available).

In my experience, after the three free searches non-subscribers are allowed, I can close the window and get three more free searches immediately.  Aren’t you glad you kept reading?  Although opening and reopening the search window is inconvenient, it seems to have slaked my appetite for synonyms so far.  You’ll have to decide whether you want to pay for greater convenience, but Visual Thesaurus.com is a useful tool either way.

Leave a comment

Filed under Resources