How Earth Made Us is a documentary series produced by the BBC. Like many BBC programs, the cinematography is spectacular. But perhaps more interesting is the approach the program takes to history. Instead of only examining human interactions, the program focuses on how natural forces such as geology, geography, and climate have shaped history. And the whole series is available on YouTube.
In the first episode, Water, host Iain Stewart explores the effects that extreme conditions have had on human development. He visits the Sahara Desert, which receives less than a centimeter of rainfall each year, and Tonlé Sap, which swells to become the largest freshwater lake in Southeast Asia during monsoon season. The contrast is striking. One interesting fact is that the world’s reservoirs now hold about 10,000 cubic kilometers of water (2,400 cubic miles). Because most of these reservoirs are in the northern hemisphere, they have actually affected the earth’s rotation very slightly.
The second episode, Deep Earth, begins in a stunning crystal cave in Mexico, in which crystals have grown to several meters long. The cave, which lies deep below the earth’s surface, was discovered by accident when miners broke into it. I can’t imagine what they thought when they first set foot inside.
The third episode, Wind, explores the trade winds, which spread trade and colonization and led to the beginning of globalization. This brought fortune to some who exploited resources and tragedy to others who were enslaved. The view from the doorway through which thousands of Africans passed on their way to the Americas is a chilling reminder of this period of history.
Fire, the fourth episode, moves from cultures that held the flame as sacred, to the role of carbon in everything from plants to diamonds to flames. And carbon is also the basis of petroleum, which has powered the growth of humankind. Several methods of extracting crude oil around the world are explored.
The final episode, Human Planet, turns the equation around, tying the first four episodes together by looking at how humans have had an impact on the earth. One of the most compelling examples is the Great Pacific Garbage Patch, which is the result of ocean currents bringing plastic and other debris from countries around the Pacific rim. This garbage collects, is broken down by the sun, and eventually settles to the bottom to become part of the earth’s crust. This is juxtaposed with rock strata in the Grand Canyon, pointing out that eventually, one layer of rock under the garbage patch in the Pacific will be made up of this debris.
In all, there are almost five hours of documentary video here. It is a compelling production with spectacular imagery. There are any number of ways to use these videos with an ESL class. And because they are available on YouTube, there are even more options available to an ESL instructor. Instead of everyone watching together in the classroom, the videos can be posted in an online content management system and students can watch them anywhere, anytime on their laptops and smartphones, if they have access to that kind of technology. And if the videos are being watched outside of the classroom, there are more options for assigning different groups of students to watch different videos and then have conversations with students who watched different episodes. The ubiquity of online video can bring learning to students outside of the classroom.
Processing was used to create the genetics simulation I described in an earlier post. After looking into it some more, I learned that Processing was developed out of a project at MIT’s Media Lab. It is an object-oriented programming language conceived as a way to sketch out images, animations and interactions with the user.
To get started, download the application at http://processing.org and go through some of the tutorials on the site. There are lots of examples included with the download, so you can also open them up and start tweaking and hacking them, if that’s your preferred method of learning. Once your code is complete, or after you’ve made a minor tweak, click the play button to open a new window and see how it looks. Once you’ve completed your project, you can export it as an applet, which can be uploaded to a web server, or as an executable file for a Mac, Windows, or Linux computer.
I’ve been through the first half-dozen tutorials and am at the point of making lines and circles dance around. I can even make the colors and sizes vary based on mouse position. I have also opened up some of the more advanced examples and started picking away at them to see what I can understand and what I still need to learn more about. Once I can import data from an external source, it will be really exciting to see the different ways to represent it.
I haven’t had a foreign language learning experience in a while. I am learning (and re-learning) many valuable lessons as I try to express myself in this new language. Not surprisingly, I’m finding that I need a balance between instruction (going through the tutorials) and practice / play (experimenting with the code I’m writing or hacking together). I’m also a bit frustrated by my progress because I can see what can be done by fluent speakers (see examples, above) but am stuck making short, choppy utterances (see my circles and lines, which really aren’t worth sharing). I plan to both work my way through the basics (L+1) as well as dabble with some more advanced projects (L+10) to see if I can pull them off. If not, I’ll know what to learn next.
Fortunately, I have one or two friends who are also learning Processing at the same time. They are more advanced than me (in programming languages, but I hold the advantage in human languages), but it has been helpful and fun to bounce examples and ideas off of one another. We plan to begin a wiki to document our progress and questions as they arise — a little like a student’s notebook where vocabulary and idioms are jotted down so they can be reviewed later.
Watch for more updates as projects get pulled together, as well as notes on other ways to visualize data, in the near future.
Over the past ten or twenty years, the news media has become saturated with stories about genetics. But do you really understand how genes interact? A new genetics simulation being developed at Ohio State can help.
The simulation begins with a series of cartoon faces from which the user can choose to populate the gene pool for the next generation. (The term “parents” is used, but more than two can be selected.) This process can be repeated several times to create successive generations of cartoon faces.
Over 50 “genes” are incorporated into the faces (affecting everything from the dimensions of the head and other features to how asymmetrical the face is and whether the eyes follow your mouse or not) and the genes of the “parents” interact to produce the subsequent generation. You can also adjust the amount of mutation, which leads to a wider (or narrower) variety of offspring.
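The simulation’s own source isn’t shown here, but the core idea it demonstrates — each offspring gene drawn from one parent or the other, occasionally nudged by mutation — can be sketched in a few lines of Python. (This is a toy illustration; the gene names, values, and mutation ranges below are all made up, not taken from the actual simulation.)

```python
import random

def make_child(parent_a, parent_b, mutation_rate=0.05):
    """Toy inheritance: each gene is a number; a child inherits each
    gene from a randomly chosen parent, with occasional mutation."""
    child = {}
    for gene in parent_a:
        value = random.choice([parent_a[gene], parent_b[gene]])
        if random.random() < mutation_rate:
            value += random.uniform(-0.2, 0.2)  # small random change
        child[gene] = value
    return child

# Two "parents" with a handful of hypothetical face genes
a = {"head_width": 1.0, "eye_size": 0.4, "asymmetry": 0.1}
b = {"head_width": 1.3, "eye_size": 0.7, "asymmetry": 0.0}
offspring = [make_child(a, b) for _ in range(5)]
```

Raising `mutation_rate` (or widening the random nudge) produces a more varied batch of offspring, which matches the simulation’s adjustable mutation control.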
Another interesting feature is the ability to view genotypes. This allows you to view a graph under each offspring representing which genes come from which parent. You can also choose two faces and drag them to the Gene Exam Room to view to what degree each gene is represented in each face. This also allows you to see the effect of each individual gene. You can even increase or decrease the representation of each gene to see how it changes each face.
What can you (or your students) do with this simulation? Imagine the faces are puppies and you want to develop a new breed that is cute (or whatever other trait you’re interested in.) This simulation clearly demonstrates how breeders (of animals, plants, etc.) select for certain traits and refine them over generations.
Or imagine the choices you make in the simulation are not choices, but represent the effects of the environment. For example, say the Sun grows dim giving people with big eyes that can see in low light an advantage over people with small eyes. This advantage results in a higher percentage of offspring surviving and a wider representation in the gene pool. What effect would this have after several generations?
Think of how much richer students’ discussions of designer pets and natural disasters will be after they have “experienced” the process instead of just reading about it. In addition to genetics, this simulation can also stimulate interest in probability (how likely are offspring to have certain characteristics), design (ideas behind evolutionary design were the impetus for the interface), as well as all of the social issues behind decisions we are now able to make regarding genetics.
In terms of ESL teaching, I think giving students something interesting to do and then having them talk or write about it is a great way to get them to practice English. This genetics simulation is simple but engaging enough that it could generate lots of interesting ideas for students to talk about.
I heard a story on NPR the other morning that got me thinking about hackers. Not the type that break into computer systems to steal credit card numbers, but the kind that like to take existing technologies and repurpose them. If you’re a regular reader of this blog, you won’t be surprised to learn I consider myself to be a bit of a hacker by this latter definition.
Hackerspaces have opened up in cities across the U.S. and around the world. Think of these as clubs where like-minded people can share tools and expertise in order to collaborate as well as further their own projects. Here in Columbus, Ohio, we have the Idea Foundry. I haven’t been there yet, but the range of projects and classes on the website are intriguing.
So, what is the ESL equivalent? And, a related question is, could Language Labs serve the same purpose? I’ve taught in programs that do and don’t have language labs. And the current trend I’m seeing in our program is that almost every student brings a laptop from home or buys one when she gets here. Although I know this is a reflection of the demographics of our specific population and is certainly not the case for all ESL students, technology is becoming more and more prevalent. Could a distributed model of a language lab (i.e. each student has one computer, so the lab is wherever the students are) be a good model?
I’ve always been a big proponent of exploiting Course Management Systems (CMSs) that make it easy for teachers to post supplemental materials online for students to access. Taken a step further, materials could be made available in a way that students could access them and use them individually in a language-lab-like way. The difference would be that instead of a whole class marching to a lab to sit together for an hour, students could access “the lab” from the library, a coffee shop, or their own home. And the motivated ones could do so for more than the prescribed time.
Would this be better for students? I think it depends on what resources are made available to students and how they are instructed to use them. Finding some level-appropriate reading would be helpful. Working through an online workbook might also be useful. But do those options really allow a student to explore, be creative and become hackers with the language? Perhaps a bigger question is, have ESL resources really moved forward along with other advances in technology (internet compatibility, web 2.0, connecting users to other users)? Some of the resources I’ve posted on this blog have potential, but overall, I’m not sure that educational technologies have taken full advantage of these advances.
How would you design your own virtual language lab if each of your students had a computer? How would you create an environment in which students learn by exploring the language? Share your ideas in the comments below.
A long, long time ago (maybe 6 or 7 years now) I taught an elective ESL class centered around a student newspaper. We tried various formats including weekly, monthly, and quarterly editions, which ranged from 2 to 32 pages. We also experimented with various online editions, but at the time that mostly consisted of cutting and pasting the documents into HTML pages.
Fast-forward to 2011 and look how online publishing has changed. Blogs are ubiquitous, if not approaching passé. Everyone but my Mom has a Facebook page. (Don’t worry, my aunts fill her in). And many people get news, sports scores, Twitter posts, friends’ Facebook updates, and other information of interest pushed directly to their smartphones.
It’s no surprise, then, that a website like paper.li has found its niche. The slogan for paper.li is Create your newspaper. Today. Essentially, paper.li is an RSS aggregator in the form of a newspaper. RSS aggregators are nothing new (see iGoogle, My Yahoo!, etc.). As the name implies, the user selects a variety of different feeds from favorite blogs, people on Twitter, Facebook friends, etc. and aggregates the updates onto one page.
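Under the hood, that “aggregates the updates onto one page” step is mostly a merge-and-sort. A real aggregator would fetch and parse RSS over HTTP; this Python sketch skips the fetching and uses hand-made entries (all the feed names and headlines are invented) just to show the core idea:

```python
from datetime import datetime

# Pretend these entries were already fetched from three different feeds
feeds = {
    "Class Blog": [(datetime(2011, 3, 2, 9, 0), "Field trip photos")],
    "Local News": [(datetime(2011, 3, 1, 18, 30), "Road closures downtown"),
                   (datetime(2011, 3, 2, 7, 15), "Weekend weather")],
    "Twitter":    [(datetime(2011, 3, 2, 8, 45), "New paper.li edition is out")],
}

def front_page(feeds, limit=5):
    """Flatten every feed into one list, newest first --
    the core move of any RSS aggregator."""
    items = [(when, source, title)
             for source, entries in feeds.items()
             for when, title in entries]
    return sorted(items, reverse=True)[:limit]

for when, source, title in front_page(feeds):
    print(f"{when:%b %d %H:%M}  [{source}] {title}")
```

What paper.li adds on top of this merge is the presentation layer: laying the sorted items out as newspaper columns instead of a plain reverse-chronological list.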
The twist with paper.li is that the aggregated page looks very much like a newspaper — at least a newspaper’s website. For people not on Twitter, Facebook, and Tumblr, paper.li might feel much more comfortable. Also, publicizing one’s pages seems to be built right into paper.li. I say that because I first learned of paper.li when I read a tweet that said a new edition of that person’s paper was out featuring me. How flattering! Of course, I had to take a look.
Would paper.li be a good platform to relaunch a student newspaper? It might. If students have multiple blogs, paper.li could certainly aggregate the most recent posts into one convenient location. Other feeds could also be easily incorporated as well. (Think of this as akin to your local community newspaper printing stories from the Associated Press.) The most recent news stories about your city or region, updates from your institution’s website, and photos posted to Flickr tagged with your city or school name could each be a column in your paper.li paper right beside the articles crafted by the students themselves. You could even include updates from other paper.li papers.
To see examples of paper.li papers, visit the paper.li website. (And note that .li is the website suffix — no need to type .com no matter how automatically your fingers try to do so.) You can search paper.li for existing papers to see what is possible. A search for ESL, for example, brought up 5 pages of examples, some with hundreds of followers. Take a look. You might just get an idea for your own paper.li.
One of my favorite presentations at the 2011 Ohio University CALL Conference was by Jeff Kuhn, who presented a small research study he’d done using the above eye-tracking device, which he put together himself.
If you’re not familiar with eye-tracking, it’s a technology that records what a person is looking at and for how long. In the example video below, which uses the technology to examine the use of a website, the path that the eyes take is represented by a line. A circle represents each time the eye pauses, with larger circles indicating longer pauses. This information can be viewed as a session map of all of the circles (0:45) and as a heat map of the areas of concentration (1:15).
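Those pause circles are usually recovered from the raw gaze data by grouping consecutive samples that stay close together (a dispersion-based fixation filter). Here’s a minimal Python sketch of that idea — the pixel radius, minimum run length, and sample coordinates are arbitrary values for illustration, not what any particular tracker uses:

```python
def fixations(samples, radius=15, min_count=3):
    """Group consecutive gaze samples (x, y) into fixations: runs of
    points within `radius` pixels of the run's first point.  Returns
    (x, y, duration_in_samples); longer duration = bigger circle."""
    result, run = [], []
    for point in samples:
        if run and (abs(point[0] - run[0][0]) > radius or
                    abs(point[1] - run[0][1]) > radius):
            if len(run) >= min_count:  # long enough to count as a pause
                xs, ys = zip(*run)
                result.append((sum(xs) / len(xs), sum(ys) / len(ys), len(run)))
            run = []  # the eye jumped (a saccade); start a new run
        run.append(point)
    if len(run) >= min_count:  # flush the final run
        xs, ys = zip(*run)
        result.append((sum(xs) / len(xs), sum(ys) / len(ys), len(run)))
    return result

# A long pause on one spot, a quick two-sample saccade, then a shorter pause
gaze = [(100, 200)] * 6 + [(180, 205)] * 2 + [(260, 198)] * 4
print(fixations(gaze))
```

Running this yields two fixations (durations 6 and 4 samples); the two-sample jump in the middle is too brief to count, just as a saccade between words wouldn’t register as a pause.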
This second video shows how this technology can be used in an academic context to study reading. Notice how the reader’s eyes do not move smoothly and that the pauses occur for different lengths of time.
Jeff’s study examined the noticing of errors. He tracked the eyes of four ESL students as they read passages with errors and found that they spent an extra 500 milliseconds on errors that they noticed. (Some learners are not ready to notice some errors. The participants in the study did not pause on those errors.)
The study was interesting, but the hardware Jeff built to do the study was completely captivating to me. He started by removing the infrared filter from a web cam and mounting it to a bike helmet using a piece of scrap metal, some rubber bands and zip ties. Then he made a couple of infrared LED arrays to shine infrared light towards the eyes being tracked. As that light is reflected by the eyes, it is picked up by the webcam, and translated into data by the free, open-source Ogama Gaze Tracker.
So, instead of acquiring access to a specialized eye-tracking station costing thousands of dollars, Jeff has built a similar device for a little over a hundred bucks, most of which went to the infrared LED arrays. With a handful of these devices deployed, almost anyone could gather a large volume of eye-tracking data quickly and cheaply.
Incidentally, if you are thinking that there are a few similarities between this project and the wii-based interactive whiteboard, a personal favorite, there are several: both cut the price of hardware by a factor of at least ten, and probably closer to one hundred; both use free, open-source software; both use infrared LEDs (though this point is mostly a coincidence); both have ties to gaming (the interactive whiteboard is based on a Nintendo controller; eye-tracking software is being used and refined by gamers to select targets in first-person shooters); and both are excellent examples of the ethos of edupunk, which embraces a DIY approach to education.
Do you know of other interesting edupunk projects? Leave a comment.
YouTube annotations provide a discussion space layered onto each video.
In my previous post, Interactive Videos, I shared some examples of YouTube videos that incorporate some of the site’s new interactive features: overlaid buttons and links that can take you to a different segment of the video, or to a different video or website entirely.
These kinds of pop-up messages have been crowding onto YouTube videos since this feature became available. If used gratuitously, they are annoying, but when used to add supplemental information, they can be quite useful. As one example, take a look at the video tutorial for making the above image. It’s a straightforward and informative two-minute video. At about the 1:30 mark, some red text appears that seems to be essential information that was omitted in the original shooting of the video. Adding a quick note is a simple solution that does not require reshooting the video.
But there must be more we can do with these tools. I’d been thinking about some different ways to incorporate these techniques when I came across a presentation made by Craig Howard at the Indiana University Foreign / Second Language Share Fair. The page includes a recording of the presentation, a handout that summarizes how to annotate YouTube videos, and a link to an example video, which I’ve included below.
The nice thing about this approach is that a video, in this case a video for teachers-in-training to discuss, can include the online conversation layered right over top of the video. Comments by different speakers can be made in different colors, and the length of time they are displayed can easily be adjusted as appropriate. Of course, everyone involved needs a free Google or Gmail account to sign in, and the video must be configured to allow annotations by people other than the person who uploaded it.
The ability to integrate video materials and online discussion so seamlessly opens up some intriguing potential for interacting with videos in new ways. I’ve recently looked at some options for online bulletin boards / sticky notes, including Google Docs, but incorporating this style of discussion directly onto the video is fantastic.
I’m still kicking around different options for making YouTube videos more interactive. If you have other examples or ideas, please share them in the comments below.
When I hear the phrase interactive videos, I think of people covered in fluorescent mocap pingpong balls or choppy, Choose Your Own Adventure-style stories like Dragon’s Lair. And there are those. But it seems that some creative tinkerers have pushed the envelope with some of YouTube’s interactive features and come up with some interesting results.
How can they be used with ESL and EFL students? Well, in addition to viewing and interacting with the videos and then discussing or reporting on the experience, students could be challenged to determine how the videos were made. For the more ambitious, students could make their own videos using the same techniques. Some of them, like the Oscars find-the-difference photo challenge, would be relatively easy to remake.
Most schools and classrooms have bulletin boards, but what is the online digital equivalent? If you are using a course management system, there are lots of built-in tools that approximate this experience. But if not, there are various sites that offer lots of options for interaction between users.
They can be used asynchronously so that people can leave messages anytime and the conversation happens over a long period of time. They could also be used in real time so that users can interact in a very visual environment. Messages can be various sizes, color-coded, and dragged around so they can be grouped together in various ways.
One online bulletin board is Wallwisher.com, which allows a user to create a wall to which other users can add “sticky notes.” It’s quick and easy to use, but unfortunately it appears to be a victim of its own success — in my recent experience the site is not loading quickly, possibly due to being overwhelmed by a large volume of users. If these issues can be worked out, Wallwisher will be a very useful tool.
A very similar tool is Stixy, which allows sticky notes and other items (photos, documents, and dated to-do list items) to be posted on the wall. Clicking on an item opens a menu with lots of options for color, font, as well as placement (in the front or in the back, relative to the other notes). You can also lock certain notes so that instructions or introductions, for example, can’t be moved around like the rest of the notes. And the site doesn’t seem to have any problems loading due to demand. Yet.
This site also allows the creation of sticky notes, including very small word-sized stickies, which could work very well on an interactive whiteboard as a way to make draggable, fridge-magnet-poetry-style words.
In addition to the sticky-specific applications above, it’s worth noting that documents created in Google Docs can be configured to be edited by a group of people. Create a new document and use different colored boxes in place of stickies and the same effect can be achieved.
In a recent meeting with the executive council of our student association, one of our class representatives suggested organizing a canoe trip. Judging by the puzzled looks around the boardroom table, many students did not recognize this word. So, I pulled up Google Images and did a search for canoe. The results were similar to what you see above. Instantly, students could understand the word and the discussion could continue.
I really enjoy the challenge of working with a group of students with a wide range of ability. Using Google Image search is a good way to help level the playing field so that students can communicate with each other more efficiently. If you have a projector and internet access in your classroom, images can be pulled up very quickly as a teaching aid.
A word of caution, though. Be sure to set the Safe Search setting to “Use strict filtering” if you are doing a search in front of a whole class in order to reduce the chance of objectionable images appearing. And be aware that even strict filtering is not 100% perfect. So, if you are working with a group that is young or particularly sensitive to certain images, be ready to hit the back button immediately or, better yet, mute the image on the projector until the search comes up, preview the images, and then make the projection available to the class.
Once you begin using it, Google Image search is the kind of simple tool that you will wonder how you lived without. While there are certainly benefits to having students define unknown terminology for each other, there are also times when you just want to provide a few words to define a term and move on. In these cases, an image search is worth a thousand words.