How Earth Made Us is a documentary series produced by the BBC. Like many BBC programs, the cinematography is spectacular. But perhaps more interesting is the approach the program takes to history: instead of only examining human interactions, it focuses on how natural forces such as geology, geography, and climate have shaped history. And the whole series is available on YouTube.
In the first episode, Water, host Iain Stewart explores the effects that extreme conditions have had on human development. He visits the Sahara Desert, which receives less than a centimeter of rainfall each year, and Tonlé Sap, which swells to become the largest freshwater lake in southeast Asia during monsoon season. The contrast is striking. One interesting fact is that the world’s reservoirs now hold 10,000 cubic kilometers (2,400 cubic miles) of water. Because most of these reservoirs are in the northern hemisphere, they have actually affected the earth’s rotation very slightly.
The second episode, Deep Earth, begins in a stunning crystal cave in Mexico, in which crystals have grown to several meters long. The cave, which lies roughly 300 meters below the earth’s surface, was discovered by accident when miners broke into it. I can’t imagine what they thought when they first set foot inside.
The third episode, Wind, explores the trade winds, which spread trade and colonization and led to the beginning of globalization. This brought fortune to some who exploited resources and tragedy to others who were enslaved. The view from the doorway through which thousands of Africans passed on their way to the Americas is a chilling reminder of this period of history.
Fire, the fourth episode, moves from cultures that held the flame as sacred, to the role of carbon in everything from plants to diamonds to flames. And carbon is also the basis of petroleum, which has powered the growth of humankind. Several methods of extracting crude oil around the world are explored.
The final episode, Human Planet, turns the equation around, tying the first four episodes together by looking at how humans have had an impact on the earth. One of the most compelling examples is the Great Pacific Garbage Patch, which is the result of ocean currents carrying plastic and other debris from countries around the Pacific rim. This garbage collects, is broken down by the sun, and eventually settles to the bottom to become part of the earth’s crust. This is juxtaposed with rock strata in the Grand Canyon, pointing out that eventually one layer of rock under the garbage patch in the Pacific will be made up of this debris.
In all, there are almost five hours of documentary video here. It is a compelling production with spectacular imagery. There are any number of ways to use these videos with an ESL class, and because they are available on YouTube, there are even more options available to an ESL instructor. Instead of everyone watching together in the classroom, the videos can be posted in an online content management system so students can watch them anywhere, anytime, on their laptops and smartphones, if they have access to that kind of technology. And if the videos are being watched outside of the classroom, different groups of students can be assigned different episodes and then have conversations with students who watched other episodes. The ubiquity of online video can bring learning to students outside of the classroom.
YouTube annotations provide a discussion space layered onto each video.
In my previous post, Interactive Videos, I shared some examples of YouTube videos that use the site’s new interactive features to overlay buttons and links, which can take you to a different segment of the video, or to a different video or website entirely.
These kinds of pop-up messages have been crowding onto YouTube videos since this feature became available. If used gratuitously, they are annoying, but when used to add supplemental information, they can be quite useful. As one example, take a look at the video tutorial for making the above image. It’s a straightforward and informative two-minute video. At about the 1:30 mark, some red text appears that seems to be essential information that was omitted in the original shooting of the video. Adding a quick note is a simple solution that does not require reshooting the video.
But there must be more we can do with these tools. I’d been thinking about some different ways to incorporate these techniques when I came across a presentation made by Craig Howard at the Indiana University Foreign / Second Language Share Fair. The page includes a recording of the presentation, a handout that summarizes how to annotate YouTube videos, and a link to an example video, which I’ve included below.
The nice thing about this approach is that a video, in this case a video for teachers-in-training to discuss, can include the online conversation layered right over top of the video. Comments by different speakers can be made in different colors, and the length of time they are displayed can easily be adjusted as appropriate. Of course, everyone involved needs a free Google or Gmail account to sign in, and the video must be configured to allow annotations by people other than the person who uploaded it.
The ability to integrate video materials and online discussion so seamlessly opens up some intriguing potential for interacting with videos in new ways. I’ve recently looked at some options for online bulletin boards / sticky notes, including Google Docs, but incorporating this style of discussion directly onto the video is fantastic.
I’m still kicking around different options for making YouTube videos more interactive. If you have other examples or ideas, please share them in the comments below.
Ever since a $3000 bounty was placed on cracking open Microsoft’s fab new gaming hardware, the motion-sensing Kinect for Xbox, hackers and tinkerers have been putting the open-source drivers to lots of interesting uses on platforms that Microsoft never envisioned. I’ve written about interesting Kinect hacks before (and before that), and I’ve written about my experience with the Wii-based $50 Interactive Whiteboard (IWB), but I haven’t seen a fully developed Kinect-based Interactive Whiteboard.
Perhaps an Interactive Whiteboard is too narrow a description. Many of the pieces are in place (see below) to interface with a computer using Kinect. So, as with the Wii-based IWB, any application you can use on your computer can be controlled by this hardware. If you connect your computer to a projector, you essentially have an Interactive Whiteboard.
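To make that concrete, here is a minimal Python sketch of the calibration step any such IWB needs: mapping a point tracked by the camera to screen coordinates. The function name and calibration values are invented for illustration, and a real setup like the Wii-based IWB solves a full four-point homography; this simplified version assumes the projected image appears axis-aligned to the camera.

```python
def make_mapper(cam_rect, screen_size):
    """Build a function that maps a camera-space point to screen pixels.

    cam_rect: (left, top, right, bottom) of the projected image as seen
    by the tracking camera. screen_size: (width, height) in pixels.
    """
    left, top, right, bottom = cam_rect
    width, height = screen_size

    def to_screen(cx, cy):
        # Normalize the tracked point within the calibrated rectangle.
        nx = (cx - left) / (right - left)
        ny = (cy - top) / (bottom - top)
        # Clamp so jitter just outside the calibration stays on screen.
        nx = min(max(nx, 0.0), 1.0)
        ny = min(max(ny, 0.0), 1.0)
        return round(nx * (width - 1)), round(ny * (height - 1))

    return to_screen
```

With the pointer position coming from whatever driver is doing the tracking, the resulting screen coordinates can then be fed to the operating system as mouse events.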
Is the Kinect-based experience different from a Wii-based IWB or a Smartboard? Almost certainly. There would be no need to touch the screen at all; instead, you would gesture in front of the Kinect to interact with the projection on the screen. Would this be an improvement? I’m not sure. A touch-based IWB is more analogous to a traditional whiteboard with markers and an eraser, so the touchless experience would be quite different. I need to try it myself to really wrap my head around the opportunities that this motion-sensing interface offers.
I’m not sure if anyone here at Ohio State is working with Kinect as an interface for non-Xbox applications. But I do know that the Digital Union has a Kinect which could probably be used to see if and how things work. If anyone else is interested in trying to pull this together, drop me a line or leave a comment.
The first idea is that group work gives students an opportunity to make a difficult decision based on a set of data. An analogy is drawn to a jury, which must decide the outcome of a court case based on the evidence in a trial. The article argues for posing the same significant problem to each group and having the groups report their specific choices simultaneously.
Reporting simultaneously, whether by holding up cards with letters on them, pointing or moving to a wall or area of the classroom, or using clickers, prevents later groups from changing their minds based on previous groups’ answers.
The article goes into much more detail, and is worth a read. How could these ideas be used in an ESL or EFL classroom?
Groups of students could be asked to evaluate a piece of writing and report back on their evaluations. If you received this job application, would you hire the person? Based on the mistakes in this paragraph, which country is the author from? What letter grade should this essay receive? Projects like these could be very engaging ways for students to interact with the target language and each other.
A few days ago, I wrote about how the new Microsoft Kinect has been hacked so that you don’t need an Xbox to use it. There are now lots of tinkerers and hackers working with this hardware to see what else might be possible. Although it’s not as easy to see the immediate applications for Kinect in the language classroom as it was for the Wii-based interactive whiteboard, there are obvious parallels. And this new gaming hardware is more advanced than the Wiimote, which may offer more possibilities. I’ve posted examples of some interesting Kinect-based projects below.
How does it work?
Infrared beams, and lots of them. Here’s how it looks with an infrared / nightvision camera.
Because Kinect can “see” surfaces in 3D, it can be used to create a multitouch interactive whiteboard on multiple surfaces.
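One way that depth-based multitouch can work, sketched in Python with invented names and thresholds: save a depth image of the empty surface as a background, then treat anything hovering just a few millimeters closer to the camera than that surface as a touch.

```python
def detect_touches(background, frame, near=4, far=20):
    """Return (x, y) pixels where something hovers just above the surface.

    background / frame: 2D lists of depth readings in millimeters.
    A pixel counts as a touch when it is between `near` and `far` mm
    closer to the camera than the empty surface was.
    """
    touches = []
    for y, (bg_row, row) in enumerate(zip(background, frame)):
        for x, (bg, d) in enumerate(zip(bg_row, row)):
            diff = bg - d  # positive means closer to the camera
            if near <= diff <= far:
                touches.append((x, y))
    return touches
```

An arm passing well above the surface produces a large difference and is ignored, which is what lets this approach work on any surface the camera can see.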
Control your browser
Forget your mouse. Kinect can see the gestures you make in three-dimensional space. Use gestures to control your browser and more.
Teach it to recognize objects. Obviously, there is a lot more software in use here, but Kinect provides the interface.
Who wouldn’t want one of these?
In 1987, the movie Predator cost $18M. A significant portion of what was left over after paying Arnold Schwarzenegger was likely spent on the cool alien light-bending camouflage effects. Just over 20 years later, you can make the same effects on your computer using the $250 Kinect hardware.
At first glance, this looks like really poor quality video, but stick with it. Notice the Kinect camera does not move, but with the flick of a mouse, the point of view can be changed as Kinect extrapolates where everything is in the space based on what it can see from where it is. The black shadows are where Kinect can’t see.
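The reconstruction behind this trick is standard pinhole-camera back-projection: each depth pixel becomes a point in 3D space. Here is a rough Python sketch; the focal lengths and principal point below are approximate figures reported for the Kinect depth camera, not calibrated constants.

```python
def depth_to_point(u, v, depth_mm, fx=594.0, fy=591.0, cx=320.0, cy=240.0):
    """Back-project one depth pixel (u, v) into a 3D point in meters."""
    z = depth_mm / 1000.0   # the sensor reports depth in millimeters
    x = (u - cx) * z / fx   # shift to the optical center, scale by depth
    y = (v - cy) * z / fy
    return (x, y, z)
```

Run over a whole 640 x 480 depth frame, this yields the point cloud being rendered in the video; the changing point of view is just the same points drawn from a different virtual camera position.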
Using two Kinects, most of the shadows are filled in. The effect is like a translation of the real world into a low-resolution, Second Life-like environment.
The first course I taught online was in a TEFL Certificate program in 2003 or 2004. The learning curve for me was steep. But, the more I taught online, the more I learned: discussions have to be required or they just won’t happen, scheduling needs to be clear because interaction might occur asynchronously and literally 24 hours per day, students might (incorrectly) expect their instructor to be available around the clock, and technical problems have the potential to be extremely disruptive.
Now, years later, as online and distance education classes have become so much more common, and as course management systems (CMSs) and personal learning environments (PLEs) have become integrated into most college classes that meet face-to-face, I have been searching for a collection of best practices for online and hybrid classes.
I started by asking folks at the Digital Union at Ohio State for some guidance. Rob and Joni suggested I look into Quality Matters (QM), an organization dedicated to promoting and improving the quality of online education. (In fact, Joni discusses QM in much more detail in a post on the Digital Union blog.)
One of the most beneficial things that Quality Matters has done is to develop a rubric for evaluating online courses. Our ESL program does not have any classes that are completely online; however, as we offer more and more content online, the rubric can serve as a good guide for implementing our CMS components effectively.
I should also add that, in addition to publishing the rubric and references to the research it is based on, Quality Matters uses the rubric as the basis for a peer-review process for online courses, as well as for professional development and training in effective online course design. To pass a QM review, an online course must meet all of the essential 3-point standards and achieve an overall score of 72 points or more. In fact, the rubric contains several standards that I would argue are important in traditional classroom-based courses as well (e.g., standard 1.5: students are asked to introduce themselves to the class).
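Those two pass conditions are easy to express in code. The following Python sketch is purely illustrative; the standard ids and point values are placeholders, since the rubric itself defines them.

```python
def passes_qm_review(scores, essential_ids, passing_total=72):
    """Check the two conditions: every essential standard met, total >= 72.

    scores: dict mapping a standard id to the points awarded for it.
    essential_ids: the 3-point standards that must all be met in full.
    """
    essentials_met = all(scores.get(sid, 0) >= 3 for sid in essential_ids)
    return essentials_met and sum(scores.values()) >= passing_total
```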
I’m not sure what other guidelines are out there (if you do, please leave a comment) but Quality Matters seems to be a good foundation for evaluating online courses and course components.
In very robust commercial games, it can be fun to lose. In fact, sometimes half the fun is trying to break the game just to see what happens. In racing games, it can be fun to try to crash spectacularly. In a first person shooter, it might be fun to shoot your teammates or other good guys, just to see what happens. Sim City has natural disasters that players can trigger because sometimes destroying a city simulation is just as important as building one.
Because making mistakes is a part of learning, it stands to reason that educational and language-learning games should be fun to lose at. But are they? I can’t think of many examples in which losing is fun. Simple drill-and-kill games or hangman variants certainly are not (unless you really like to see the hangman hang, and eventually we all do).
It’s important to account for this kind of curiosity when designing a game, because players will (presumably) lose, or at least make mistakes, quite often. If this experience is an enjoyable part of the process, they will want to try again, which is the whole point, isn’t it? Play the game more to keep learning.
This is taken into account in Blank Or Blank, a concordancer game I’m working on that I’ve written about before. By giving students control of the search terms and the corpus in which the terms are searched, students can try searching for two grammatically related terms (like go and goes) but also try something fun (such as kill and love) to try to break the game or at least use it in a new way.
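At its core, a concordancer like this is a keyword-in-context (KWIC) search. Here is a minimal Python sketch of the idea, not the actual Blank Or Blank implementation:

```python
import re

def concordance(corpus, term, width=30):
    """Return each occurrence of `term` with `width` characters of context."""
    lines = []
    for match in re.finditer(r'\b%s\b' % re.escape(term), corpus, re.IGNORECASE):
        left = corpus[max(0, match.start() - width):match.start()]
        right = corpus[match.end():match.end() + width]
        lines.append('%s[%s]%s' % (left, match.group(0), right))
    return lines
```

Students comparing the results for go with the results for goes see the grammatical contexts side by side, and searching for kill and love works just as well.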
In fact, if we view language learning itself as a game, or at least a puzzle, we can clearly see that students will lose (or make mistakes) far more often than they win (communicate comprehensibly), especially in the very beginning. Later in their learning, as winning is redefined (as making no mistakes or as speaking with a minimal accent, for example), they will still lose occasionally.
One of the promises of computer-mediated communication for language learning is the low-stakes, face-saving nature of communicating via a computer instead of face-to-face with a real, live person. If the process of losing / making mistakes can be made fun and interesting, the game becomes both more enjoyable and more useful, because students are more likely to keep playing and keep learning.
Are there many learning games where it’s fun to ser un perdedor (to be a loser)?
I’ve been thinking about digital games for language learning quite a bit lately and a number of questions have come up, the biggest of which is: Why are so many educational games so lame? I love the idea of learning through play, but many educational games fail to move past drill-and-kill exercises. When you compare this to commercially available immersive games like World of Warcraft or Grand Theft Auto, there is a remarkable gap.
For a while, I thought Second Life held some potential because that virtual environment could be designed and built specifically for a given topic. But building in Second Life (at least to me) proved to be extremely time-intensive and I didn’t feel like the results were worth the energy I had to invest.
The notion of augmented reality has also been floating around in my subconscious for a while, but it never really stuck; it’s really cool, but how could I work with it? All of these things coalesced for me today after sitting through a couple of presentations at CALICO.
Julie Sykes, who developed an immersive gaming environment focused on Spanish pragmatics called Croquelandia, has been working on a mobile place-based murder-mystery game for learning Spanish in a historic neighborhood near the University of New Mexico campus. The iPod / iPhone-based game, called Mentira, is built on the ARIS platform, which makes it very easy to cut and paste text and other media files into a branching story line to create the game. To progress through the story, students have to input clues from the real environment (the street address of the old church, for example) to unlock parts of the story. (An alternative would be to use GPS to unlock the story when students actually visit the location, but this would require iPhones and exclude iPod Touches.)
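The clue-unlock mechanic itself is simple to prototype: each scene stores the real-world clue that opens the next one. A toy Python sketch follows; the scenes and clues are invented for illustration, not taken from Mentira.

```python
def make_story():
    """A linear stand-in for ARIS's branching structure: each scene
    holds its text and the clue that unlocks the next scene."""
    return [
        {"text": "Find the old church and note its street address.",
         "clue": "1423"},
        {"text": "Ask about the letter at the corner store.",
         "clue": "mentira"},
        {"text": "You solved the mystery!", "clue": None},
    ]

def try_unlock(story, scene, answer):
    """Advance to the next scene only if the typed clue matches."""
    expected = story[scene]["clue"]
    if expected is not None and answer.strip().lower() == expected:
        return scene + 1
    return scene
```

A GPS-based variant would simply swap the typed answer for a coordinate check against the scene’s real-world location.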
I was most amazed by the forehead-slappingly simple concept that we don’t need to create a virtual world for students to interact with because there is a pretty robust world right outside the classroom for them to interact with. And finding a target language-rich environment is even easier if the target language is English (at least for me).
It’s soon to be a cliche (if it isn’t already), but being able to take a computer into the real world so easily is going to be a game changer. Think of botany students looking up plants on their smartphones. It’s been said that there are no more arguments about baseball statistics in sports bars because it’s too easy to look up the answers. Information is literally at our fingertips. But I digress.
The user experience within a place-based game like Mentira, if well designed, can compete with big commercial games because it can be tailored right down to the details of a given neighborhood. Instead of taking time to create dazzling multimedia experiences, educators can really focus on the content. And the text-based format lowers the barrier even further. Julie reported that her students were eager to contribute to the story, and some had plans to use ARIS to create their own games. Enabling students to become game producers in their target language, not just players, is astounding to me.
I’m not sure that a game that sends students into the real world will be able to lower their affective filters or allow them to have multiple repeat experiences if they want to practice in the same way as a relatively low-risk virtual environment might. But a game could be designed to be played several times with different outcomes. There is also a potential risk in sending students out into the world, depending on where they are sent (clearly this is not the time to recreate Grand Theft Auto) but the risk could certainly be minimized. It’s also important to respect the real residents of the real world into which students are sent. Having them congregate on someone’s front lawn to solve a mystery likely would not be appreciated. Julie reported that some residents were eager to talk about their neighborhood with her students and even seemed flattered that their neighborhood was chosen. This is the ideal to strive for.
Unfortunately, ARIS just updated its app, and as of today there are only four ARIS games available. Several others, including Mentira, were built on a previous version, which means it will take some work to get the game moved onto the new platform. I will update this post if / when it becomes available. In the meantime, we have to make do with this trailer, which can be downloaded from the ARIS Games website. The trailer serves as the introduction to the game and does a nice job setting the tone. Of course, it just makes me want to play the game even more.
Is this assessment? Click the panel to see the rest of the strip.
I came across this Baby Blues comic one Sunday morning last fall and it made me think about how we assess our students. The first panel, above, shows a teacher asking her class a pretty typical question: Does everyone understand this chapter? And the class gives an emphatic YES! in response. This is great, right? Click the link or the panel to read the whole thing. Go ahead. I can wait.
Did you read it? It didn’t turn out the way we expect when we ask this question as teachers. I think this comic struck a nerve with me because I have looked out at classrooms full of students and seen blank facial expressions that can be difficult to interpret. Is a student totally lost, unsure of how to connect new information to old? Or are we moving too slowly, causing the student to become bored and tune out? The same blank stare can hide either reaction to my teaching. And, as this comic points out, our first reaction, asking if everyone understands, may not clarify the situation.
There are technical innovations such as clickers that might allow students to provide honest, anonymous feedback. (I envision students turning dials as if watching candidates making election speeches and causing a pointer to draw a line somewhere between “I get it – teach faster” and “I don’t get it – slow down”.) While this might be valuable, and maybe even accurate, feedback, I’m not sure it’s a practical solution. But it would be nice to know if the instincts we use to pace our teaching are accurate, at least in the students’ opinion.
Sing, floss, stretch. But trust me on the sunscreen.
I wrote recently about the elective class that I am developing and teaching on popular music. I’m covering a decade per week and a song per day. Within each song, I highlight an interesting grammar, vocabulary, or pronunciation point.
Developing this class has meant combing through many online resources, including lists of Billboard number one hit songs on Wikipedia and best-of-the-decade lists such as AOL’s radio blog, which is a good place to start because you can listen to most of the songs on the list. I’ve also found that the website sing365.com tends to have the fewest errors of all of the lyrics websites that turn up in Google searches.
I intend to post the list of songs I’ve used at the end of the quarter (I might even link to the Google Docs spreadsheet that I used to record all of the songs I considered for each decade) but for now I thought I would post the following music video, which I plan to use tomorrow, the last day before Thanksgiving break.
The song is actually a spoken-word piece with an interesting story behind it. While not a traditional pop music video, I think the message is inspirational without being cheesy. Plus, there are lots and lots of examples of advice using the imperative. It might not get you through the last two weeks of the quarter, but it doesn’t hurt.