I caught the tail end of the monthly Exploring Learning Technologies community meeting recently and became intrigued by the topic: accessibility. (Full disclosure: I’m part of the committee that plans these meetings.)
This is an important topic because, as more and more educational video is put online for class use (lectures, for example), accessibility becomes a greater issue. The more I heard, the more I thought about how practices like captioning video can help second language learners as well as people with hearing impairments. So, in general, the conclusion was: it’s good to make captioning part of your practice if you post video online. The most difficult part of this process, obviously, is transcribing the audio.
There are several ways to do this, but, in general, you will need to pay someone to listen and type. Whether you hire someone yourself or use an online service, the cost increases with the accuracy and speed of the transcription.
An interesting alternative incorporates Dragon speech recognition software. Unfortunately, you can’t just have the software listen to the video and produce a transcript; background noise and other issues make this impossible. But you can have someone watch the video and repeat the dialogue for the software. In effect, this person becomes a biological interface between two digital entities! For a moment, I was distracted by images from the Matrix movies, in which machines use humans disposably, but then I realized the most useful feature of captioned video: searchability.
If you want to find a phrase in your favorite movie, you likely have to guess where it is and then skip forward and/or backward until you find it. This is difficult. Now imagine looking for the same phrase in a movie you have never seen, or searching a dozen movies. With a text transcript linked to the timeline of the movie, finding the phrase would be trivial. Looking for “classroom technology”? The phrase is used at 03:58 and again at 17:22.
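To make the idea concrete, here is a minimal sketch of how a transcript search might work, assuming the captions are stored in the common SRT subtitle format (numbered blocks with start/end timestamps). The file format, sample text, and function names are my own illustration, not anything from the meeting:

```python
import re

def parse_transcript(text):
    """Parse a minimal SRT-style transcript into (start_time, caption) pairs."""
    entries = []
    # Each SRT block looks like:
    #   1
    #   00:03:58,000 --> 00:04:02,000
    #   caption text (possibly on several lines)
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start = lines[1].split(" --> ")[0].strip()
        caption = " ".join(lines[2:])
        entries.append((start, caption))
    return entries

def find_phrase(entries, phrase):
    """Return the start times of captions containing the phrase (case-insensitive)."""
    phrase = phrase.lower()
    return [start for start, caption in entries if phrase in caption.lower()]

# Hypothetical captions for a lecture video:
sample = """1
00:03:58,000 --> 00:04:02,000
Today we will talk about classroom technology.

2
00:10:00,000 --> 00:10:04,000
Captioning helps second language learners.

3
00:17:22,000 --> 00:17:26,000
Classroom technology keeps evolving.
"""

entries = parse_transcript(sample)
print(find_phrase(entries, "classroom technology"))
# → ['00:03:58,000', '00:17:22,000']
```

Once the captions exist, the search itself is the easy part; a real system would index many transcripts at once and jump the player to each matching timestamp.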
This process is costly and labor-intensive now, but eventually, whether speech recognition software becomes able to scan video and transcribe speech accurately on its own, or an offshore matrix of borg-like transcribers scours YouTube, all video will be transcribed in a searchable way. This will make video as useful and accessible as the Internet has made text. And we’ll look back and ask: why did we wait so long to do this?
4 responses to “Searchable Video – Enter the Dragon”
> I caught the tail end of the monthly Exploring
> Learning Technologies community meeting
> recently and became intrigued by the topic:
> accessibility. (Full disclosure: I’m part of the
> committee that plans these meetings.)
Could you expand more about this community? This topic is very interesting (see http://blog.overstream.net/tag/accessibility/).
The Exploring Learning Technologies community is a loosely organized group of people who teach and work at Ohio State. We’ve got a wiki, but it hasn’t really been well used or kept up to date (or even made public, I think). Ken Petri from OSU’s Web Accessibility Center gave the presentation and would make a great resource, if you’d like to find out more.
Pingback: Captioning Digital Video « ESL Technology
Pingback: Autocaptioned YouTube Videos « ESL Technology