Tag Archives: media

Processing Data Visualizations


I’ve seen a lot of interesting data visualizations lately but have struggled to figure out how to visualize my own data.  It seems like there is a vast chasm between creating pie charts in Excel and Hans Rosling’s TED Talks.  Then I stumbled upon Processing.

Processing was used to create the genetics simulation I described in an earlier post.  After looking into it some more, I learned that Processing was developed out of a project at MIT’s Media Lab.  It is an object-oriented programming language conceived as a way to sketch out images, animations and interactions with the user.

Examples of Processing projects include everything from a New York Times data visualization of how articles move through the internet and visually representing data in an annual report to more esoteric and artistic works.

To get started, download the application at http://processing.org and go through some of the tutorials on the site.  There are lots of examples included with the download, so you can also open them up and start tweaking and hacking them, if that’s your preferred method of learning.  Once your code is complete, or after you’ve made a minor tweak, click on the play button to open a new window and see how it looks.  Once you’ve completed your project, you can export it as an applet, which can be uploaded to a web server, or as an executable file for a Mac, Windows, or Linux computer.
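To give a sense of scale, a complete Processing sketch can be just a few lines.  Here’s a minimal example — the setup() / draw() structure is standard Processing, but the particular sizes and colors are just my own illustration:

```processing
// A minimal Processing sketch: setup() runs once, draw() loops continuously.
void setup() {
  size(400, 400);    // open a 400 x 400 pixel window
  background(255);   // white background
}

void draw() {
  fill(0, 102, 153);                      // blue fill color
  ellipse(width/2, height/2, 100, 100);   // circle in the center of the window
}
```

Type this in, hit play, and a window opens with a circle in it.  Change a number, hit play again, and you see the result immediately — which is exactly what makes it feel like sketching.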

I’ve been through the first half-dozen tutorials and am now at the point of making lines and circles dance around.  I can even make the colors and sizes vary based on mouse position.  I have also opened up some of the more advanced examples and started picking away at them to see what I can understand and what I still need to learn more about.  Once I can import data from an external source, it will be really exciting to see the different ways to represent it.
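The kind of mouse-driven sketch I’m describing looks roughly like this — a rough sketch of my own experiments, not a polished example, and the particular mapping of position to color and size is arbitrary:

```processing
// Circles that follow the mouse, with color and size tied to mouse position.
void setup() {
  size(400, 400);
  noStroke();
}

void draw() {
  // Repaint with a translucent white so older circles fade into trails.
  fill(255, 20);
  rect(0, 0, width, height);

  // mouseX and mouseY are built-in variables, updated every frame.
  float diameter = map(mouseY, 0, height, 10, 80);   // size from vertical position
  fill(map(mouseX, 0, width, 0, 255), 100, 200);     // color from horizontal position
  ellipse(mouseX, mouseY, diameter, diameter);
}
```

The built-in map() function, which rescales a value from one range to another, turns out to be the whole trick here — and, I suspect, a big part of mapping data values onto visual properties later on.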

I haven’t had a foreign language learning experience in a while.  I am learning (and re-learning) many valuable lessons as I try to express myself in this new language.  Not surprisingly, I’m finding that I need a balance between instruction (going through the tutorials) and practice / play (experimenting with the code I’m writing or hacking together).  I’m also a bit frustrated by my progress because I can see what can be done by fluent speakers (see examples, above) but am stuck making short, choppy utterances (see my circles and lines, which really aren’t worth sharing).  I plan to both work my way through the basics (L+1) as well as dabble with some more advanced projects (L+10) to see if I can pull them off.  If not, I’ll know what to learn next.

Fortunately, I have one or two friends who are also learning Processing at the same time.  They are more advanced than me (in programming languages, but I hold the advantage in human languages), but it has been helpful and fun to bounce examples and ideas off of one another.  We plan to begin a wiki to document our progress and questions as they arise — a little like a student’s notebook where vocabulary and idioms are jotted down so they can be reviewed later.

Watch for more updates as projects come together, as well as notes on other ways to visualize data, in the near future.


Open and Kinect


A few days ago, I wrote about how the new Microsoft Kinect has been hacked so that you don’t need an Xbox to use it.  There are now lots of tinkerers and hackers working with this hardware to see what else might be possible.  Although it’s not as easy to see the immediate applications for Kinect in the language classroom as it was for the Wii-based interactive whiteboard, there are obvious parallels.  And this new gaming hardware is more advanced than the Wiimote, which may offer more possibilities.  I’ve posted some examples of interesting Kinect-based projects below.

How does it work?

Infrared beams, and lots of them: the Kinect projects a dense pattern of infrared dots across the room and reads their reflections with a depth camera.  Here’s how it looks through an infrared / night-vision camera.

Multitouch IWB

Because Kinect can “see” surfaces in 3D, it can be used to create a multitouch interactive whiteboard on multiple surfaces.

Control your browser

Forget your mouse.  Kinect can see the gestures you make in three-dimensional space.  Use gestures to control your browser and more.

Teach it

Teach it to recognize objects.  Obviously, there is a lot more software in use here, but Kinect provides the interface.

Digital puppets

Who wouldn’t want one of these?

Visual camouflage

In 1987, the movie Predator cost $18M.  A significant portion of what was left over after paying Arnold Schwarzenegger was likely spent on the cool alien light-bending camouflage effects.  Just over 20 years later, you can make the same effects on your computer using the $250 Kinect hardware.

3D video

At first glance, this looks like really poor quality video, but stick with it.  Notice the Kinect camera does not move, but with the flick of a mouse, the point of view can be changed as Kinect extrapolates where everything is in the space based on what it can see from where it is.  The black shadows are where Kinect can’t see.

With two Kinects, most of the shadows get filled in.  The effect is like a translation of the real world into a low resolution Second Life-like environment.


Locative Media And Other Mashups

These are the slides from a presentation I made this morning at the Digital Media in a Social World Conference.  More examples, including some that were generated during the presentation, can be found in the links I tagged using Diigo and Delicious.

I’ve tried to gather as many examples as I could of digital mashups (see Wikipedia definition #2), many of which use maps or other visual means to represent different sets of data.  Do you have a favorite example that I didn’t include?  Leave it in a comment.  I’d love to see more!

To learn more about the conference, check out the #DMSW hashtag on Twitter.
