#edtech #esl YouTube annotations provide a discussion space layered onto each video.
In my previous post, Interactive Videos, I shared some examples of YouTube videos that use the site's new interactive features: overlaid buttons and links that can jump you to a different segment of the video, or to a different video or website entirely.
These kinds of pop-up messages have been crowding onto YouTube videos since the feature became available. Used gratuitously, they are annoying, but used to add supplemental information, they can be quite useful. As one example, take a look at the video tutorial for making the above image. It’s a straightforward and informative two-minute video. At about the 1:30 mark, red text appears with what seems to be essential information omitted in the original shooting of the video. Adding a quick note is a simple fix that does not require reshooting the video.
But there must be more we can do with these tools. I’d been thinking about some different ways to incorporate these techniques when I came across a presentation made by Craig Howard at the Indiana University Foreign / Second Language Share Fair. The page includes a recording of the presentation, a handout that summarizes how to annotate YouTube videos, and a link to an example video, which I’ve included below.
The nice thing about this approach is that a video, in this case a video for teachers-in-training to discuss, can include the online conversation layered right on top of the video. Comments by different participants can be displayed in different colors, and the length of time each is shown is easy to adjust. Of course, everyone involved needs a free Google account to sign in, and the video must be configured to allow annotations by people other than the person who uploaded it.
The ability to integrate video materials and online discussion so seamlessly opens up some intriguing possibilities for interacting with videos in new ways. I’ve recently looked at some options for online bulletin boards and sticky notes, including Google Docs, but incorporating this style of discussion directly onto the video is fantastic.
I’m still kicking around different options for making YouTube videos more interactive. If you have other examples or ideas, please share them in the comments below.
I painted my living room and dining room over the winter break. Both rooms required two coats. Typically, I listen to audiobooks or This American Life podcasts while I do this kind of work. I noticed that as I applied the second coat, I could remember vividly what I had been listening to the last time I had painted each part of the wall. The Pulse Smartpen from Livescribe creates a similar effect, with far better results and no fumes!
The Pulse Smartpen from Livescribe is amazing.
When using the Smartpen, you can record audio, the movements of the pen, or both. The audio can be played back on the pen or uploaded to your computer (where you’ll also find PDF versions of the notes you wrote). When you use the Smartpen on Livescribe’s special dot paper, it indexes the audio you record to what you were writing at the moment you recorded it.
This is a boon when you go back to review your notes. Just as I recalled each of the 20 acts in 60 minutes while rolling paint around each faceplate, a student can hear what her professor was saying while she was taking each line of notes. She can literally click on a line of notes, and the pen will play back that part of the audio. Seen in a demonstration, it almost looks like magic.
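Livescribe hasn’t published its internals, but the core idea — logging each pen stroke against the current audio position so a stroke can later be turned back into a seek point — can be sketched in a few lines. All names here are my own invention for illustration, not Livescribe’s actual format:

```python
# Hypothetical sketch of stroke-to-audio indexing (not Livescribe's real API).
# Each stroke is stored with the audio offset at which it was written, so
# "clicking" that stroke later tells the player where to seek.

class NoteSession:
    def __init__(self):
        self.strokes = []  # list of (audio_offset_seconds, stroke_id)

    def record_stroke(self, audio_offset, stroke_id):
        """Log a pen stroke against the current audio position."""
        self.strokes.append((audio_offset, stroke_id))

    def seek_for_stroke(self, stroke_id):
        """Return the audio offset to replay for a given stroke."""
        for offset, sid in self.strokes:
            if sid == stroke_id:
                return offset
        raise KeyError(stroke_id)

session = NoteSession()
session.record_stroke(12.5, "line-1")  # wrote line 1 at 12.5 s into the lecture
session.record_stroke(47.0, "line-2")  # wrote line 2 at 47 s
print(session.seek_for_stroke("line-2"))  # → 47.0
```

The point is how little machinery the trick requires: once every mark on the page carries a timestamp, replaying "what the professor was saying right then" is just a lookup.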
Some of the practical uses of this technology are obvious. John, the undergraduate student who demonstrated his Smartpen to me in the Digital Union, said he found it most helpful for taking notes in chemistry, where he was often too busy drawing chemical structures to also write down everything the professor said. (Incidentally, John bought his own $200 Smartpen the day after he had to return his demo model.) Would it work for ESL students? Perhaps. The combination of input types can support different learning styles, and being able to review notes in more than one medium would be an advantage.
I’ve also been thinking about how this technology relates to my earlier post on captioning digital audio. With both technologies, the key is indexing the graphic to the auditory, because the former is easily searchable. Once Livescribe adds some kind of optical character recognition, which would make the notes more easily editable and searchable, and once searchable captions become a standard component of digital video, we will finally have integrated all of this information in a truly useful way. Can the Singularity be far behind?
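The "text makes audio searchable" point is concrete enough to show. Captions are essentially (start time, text) pairs, so a plain substring search over the text yields seek points into the audio. The caption data and function name below are invented for illustration:

```python
# Sketch of searching time-indexed captions (sample data is invented).
# Each caption pairs a start time in seconds with the spoken text, so a
# text search returns audio positions to jump to.

captions = [
    (0.0, "Welcome to today's chemistry lecture."),
    (14.2, "First, let's draw the benzene ring."),
    (63.8, "Note the alternating double bonds in benzene."),
]

def find_in_audio(query, captions):
    """Return start times of caption segments containing the query."""
    q = query.lower()
    return [start for start, text in captions if q in text.lower()]

print(find_in_audio("benzene", captions))  # → [14.2, 63.8]
```

This is exactly the payoff of indexing graphic to auditory: the audio itself is opaque to search, but its text index is not, and the timestamps carry you from one to the other.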