I’ve been tinkering with AntConc, Laurence Anthony’s free concordancer, which has led me down a bit of a rabbit hole of word lists generated by corpus linguists over the past 60 years. I’ve listed a few that I’ve used, sometimes within AntConc, to analyze students’ writing. If you’ve taught students to investigate their linguistic hunches via the Corpus of Contemporary American English (COCA), you might also consider teaching them to put their own writing into a tool like AntConc for the same kind of analysis. By loading the lists below as a blacklist (do not show) or a whitelist (show only these), students can home in on a more specific part of their vocabulary. Most of these lists are available for download, which means you can be up and running with your own analysis very quickly.
The lists (in chronological order):
General Service List (GSL) – developed by Michael West in 1953; based on a 2.5-million-word corpus. (Can you imagine doing corpus linguistics in 1953? Much of it must have been done by hand, which is mind-boggling.) Despite criticism that it is out of date (words such as plastic and television are not included, for example), this pioneering list still provides about 80% coverage of English.
Academic Word List (AWL) – developed by Averil Coxhead in 2000; 570 word families selected from a purpose-built academic corpus with the 2000 most frequent GSL words removed; organized into 9 lists of 60 and one of 30, sorted by frequency. Scores of textbooks have been written based on these lists, and for good reason. In fact, we have found that students are so familiar with these materials that they score disproportionately well on these words compared with other advanced vocabulary.
Academic Vocabulary List (AVL) – the 3000 most frequent words in the 120-million-word academic portion of the 440-million-word Corpus of Contemporary American English (COCA). The AVL includes groupings by word family, definitions, and an online interface for browsing or for uploading texts to be analyzed against the list.
New General Service List (NGSL) – developed by Charles Browne, Brent Culligan, and Joseph Phillips in 2013; based on the two-billion-word Cambridge English Corpus (CEC); 2368 words that cover 90.34% of the CEC.
New Academic Word List (NAWL) – based on three components: the CEC Academic Corpus; two oral corpora, the Michigan Corpus of Academic Spoken English (MICASE) and the British Academic Spoken English (BASE) corpus; and on a corpus of published textbooks for a total of 288 million words. The NAWL is to the NGSL what the AWL is to the GSL in that it contains the 964 most frequent words in the academic corpus after the NGSL words have been removed.
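To make the blacklist/whitelist idea concrete, here is a minimal Python sketch of what a concordancer does when you filter a text against one of these lists. The tiny demo_list below is an invented stand-in; in a real analysis you would load the full GSL, AWL, NGSL, or NAWL from the downloaded files, one headword per line.

```python
# Sketch of the whitelist/blacklist filtering described above, with a toy
# stand-in word list (a real GSL file has ~2000 headwords).
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def coverage(text, word_list):
    """Fraction of tokens in `text` that appear on `word_list`."""
    tokens = tokenize(text)
    listed = [t for t in tokens if t in word_list]
    return len(listed) / len(tokens) if tokens else 0.0

def filter_tokens(text, word_list, mode="whitelist"):
    """whitelist: keep only listed tokens; blacklist: keep only unlisted ones."""
    tokens = tokenize(text)
    if mode == "whitelist":
        kept = [t for t in tokens if t in word_list]
    else:
        kept = [t for t in tokens if t not in word_list]
    return Counter(kept)

# Invented mini-list for demonstration only
demo_list = {"the", "of", "and", "a", "to", "in", "is", "it", "was", "for"}
sample = "The analysis of the corpus was useful for the students."
print(round(coverage(sample, demo_list), 2))              # → 0.6
print(filter_tokens(sample, demo_list, mode="blacklist"))  # the "advanced" residue
```

Running a student essay through a blacklist of the NGSL, for instance, leaves behind exactly the words that are not everyday vocabulary, which is a quick way to see what a student is reaching for beyond the basics.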
At the inaugural Playful Learning Summit at Ohio University, I shared a couple of games that I developed for use with ESL students at Ohio State. These are both paper-based games, which stood out in a room full of computer games and an Oculus Rift connected to a Kinect. This last project — an immersive, gesture-controlled, virtual reality interface — was really cool, but isn’t something I know how to develop (yet). But, fortunately, everyone gets paper. I hope these two games serve as an inspiration for anyone who doesn’t think she can design a game for her students.
Football Simulation – I’ve posted about this one before, but it still stands as an easy-to-prepare, easy-to-play simulation that can help international students to understand the game of American football. The focus, when I use the game in the classroom, is to understand what down and distance are as well as the importance of basic offensive and defensive strategies. All that is required is one six-sided die and a printout of the document with the offense and defense cards cut out.
Orientation to Campus Game – This is a board game I developed based on the Madeline board game. Players travel around the campus map / board uncovering tokens when they land next to them. If the player uncovers one of the 5 buckeye symbols, she keeps it. If the player uncovers the name of a building, she must move to that space immediately. The best things about this game are that it is very easy to play and that students really focus and pay attention to the most important buildings on the map. There are no dice and you can use almost anything for player tokens. I also really like the mechanic of moving to the place listed on the token because this changes every time the game is played. On the downside, it is a kids’ game, so it doesn’t hold adults’ attention for very long. And if the students have been on campus for even a couple of weeks, they are already familiar with most of the buildings in the game. Still, this game could be useful for students to play while waiting for our orientation program to start because it might help them to discover buildings that they do not yet know.
So, don’t be afraid of developing games on paper if, like me, you don’t have a wide array of programming skills. Any game that is prototyped and play-tested on paper could later be converted to a computer version. But, by working out the kinks on paper, you can develop your game to its final version without even picking up your keyboard.
“Ever notice how the Fishheads song sounds kind of like Phantom of the Opera?” Um, no, I hadn’t, but when Greg, whose desk is not far from mine, asked me this question, I became curious. I hummed each tune and had to admit there were some similarities.
I opened a YouTube clip of each song (Fishheads and Music of the Night from Phantom of the Opera) and after a quick listen, I also thought there were some parallels that warranted further investigation. I also found that I could play the Fishheads song, then switch over to another tab in my browser and watch the Phantom clip on mute, thus integrating the audio from one with the video from another. Interesting. And, when the Phantom’s mouth movements happened to sync to the Fishheads lyrics, pretty funny. But, could I capture this hilarity for others to enjoy? Enter Screenr.
Screenr is an online screen capture service that I’d seen but never used before. Turns out it couldn’t be easier. Go to Screenr.com, click on the “Launch Screen Recorder Now!” button, and drag the red rectangle over the part of your screen that you want to record. From there, just click record for up to 5 minutes of free video. It was a bit tricky for me to sync the start of the video and audio to the start of the recording, and I had to adjust the volume level so that the recording was not too loud, but after a couple of tries I managed to work it out pretty well. See for yourself.
Unfortunately, WordPress, which this blog is built in, does not currently provide a way to embed Screenr content. I should also be able to upload the video to YouTube from Screenr, but that feature isn’t working for me. So, instead of embedding the video in my blog, you’ll just have to follow the link.
So, that’s how I turned an office distraction (no offense, Greg!) into an opportunity to try out a technology I’ve been meaning to check out. And, I’m pleased to report, Screenr was extremely easy to use on the fly without practice or instructions.
Can this become a project for your classroom? Perhaps. It might be very interesting to ask students to create a Screenr video that combines the audio from one video and the video from another. Because Screenr is web / browser based, there are very few editing options other than “record” and “stop,” but this simplicity can really flatten out the learning curve. It would be interesting to have students present and discuss their mashups. But, please, no Wizard of Oz vs. Pink Floyd.
Above is a plot of students’ attendance versus their grade point averages (GPAs). See any trends? Obviously, students with higher attendance tend to have higher GPAs. While this is not particularly surprising, it’s nice to be able to support this notion with actual data.
(I should say that this “actual data” is not actual data, but it is based on actual data. I’ve taken the actual “actual data” and randomly added or subtracted up to 5% so that the general trends remain, but none of the actual data points are the same, except by chance.)
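For anyone curious, the anonymization step described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual script: the post doesn’t say whether the ±5% was a relative or an absolute shift, so this sketch assumes a relative one, and the sample values are made up.

```python
# Nudge each data point by a random amount of up to +/-5% so that overall
# trends survive but no individual value matches the original.
import random

def jitter(values, max_pct=0.05, seed=None):
    """Return a copy of `values`, each point shifted by up to +/-max_pct."""
    rng = random.Random(seed)  # seed only for reproducible demos
    return [v * (1 + rng.uniform(-max_pct, max_pct)) for v in values]

attendance = [100, 96, 90, 85, 70]  # invented example points
print(jitter(attendance, seed=1))
```

Passing a seed makes the demo reproducible; for actual anonymization you would omit it so the perturbation cannot be regenerated.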
In addition to the general trend that GPAs correlate positively with attendance, I can say that no student who had 100% attendance got less than a C+ (2.85 GPA) and that no student who got a 4.0 GPA (straight As) attended less than 96% (at least in the “actual” data).
Can I claim causality? Not exactly. I don’t know that higher attendance causes higher grades, or vice versa, but I think it could be claimed that low attendance causes low grades — if you aren’t in class, you can’t get an A.
Admittedly, this isn’t the most cutting edge visualization — it’s just a graph I made using Microsoft Excel — but I think it represents a relatively simple set of data effectively.
I plan to show this graph to all of our students at our program-wide meeting at the beginning of the academic year. If nothing else, it should get them thinking a bit about the importance of attending class if they want to be successful. This isn’t a big issue for most of our students but, as you can see, it is an issue for some. And if it helps them to have me connect the dots, I gladly will (see below, click to enlarge).
A long, long time ago (maybe 6 or 7 years now) I taught an elective ESL class centered around a student newspaper. We tried various formats including weekly, monthly, and quarterly editions, which ranged from 2 to 32 pages. We also experimented with various online editions, but at the time that mostly consisted of cutting and pasting the documents into HTML pages.
Fast-forward to 2011 and look how online publishing has changed. Blogs are ubiquitous, if not approaching passé. Everyone but my Mom has a Facebook page. (Don’t worry, my aunts fill her in). And many people get news, sports scores, Twitter posts, friends’ Facebook updates, and other information of interest pushed directly to their smartphones.
It’s no surprise, then, that a website like paper.li has found its niche. The slogan for paper.li is “Create your newspaper. Today.” Essentially, paper.li is an RSS aggregator in the form of a newspaper. RSS aggregators are nothing new (see iGoogle, My Yahoo!, etc.). As the name implies, the user selects a variety of different feeds from favorite blogs, people on Twitter, Facebook friends, etc. and aggregates the updates onto one page.
The twist with paper.li is that the aggregated page looks very much like a newspaper — at least a newspaper’s website. For people not on Twitter, Facebook, and Tumblr, paper.li might feel much more comfortable. Also, publicizing one’s pages seems to be built right into paper.li itself. I say that because I first learned of paper.li when I read a tweet that said a new edition of that person’s paper was out featuring me. How flattering! Of course, I had to take a look.
Would paper.li be a good platform to relaunch a student newspaper? It might. If students have multiple blogs, paper.li could certainly aggregate the most recent posts into one convenient location. Other feeds could also be easily incorporated as well. (Think of this as akin to your local community newspaper printing stories from the Associated Press.) The most recent news stories about your city or region, updates from your institution’s website, and photos posted to Flickr tagged with your city or school name could each be a column in your paper.li paper right beside the articles crafted by the students themselves. You could even include updates from other paper.li papers.
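Under the hood, what an aggregator like paper.li does is not mysterious: fetch each feed, take the newest items, and merge them into one front page. Here is a stripped-down sketch of that idea using only Python’s standard library. The feed XML is inlined so the example is self-contained; a real aggregator would fetch it over HTTP, and the blog name and URLs are invented.

```python
# Pull the newest items out of an RSS feed and lay them out as one "column"
# of a combined front page.
import xml.etree.ElementTree as ET

def latest_items(feed_xml, limit=2):
    """Return (title, link) pairs for the first `limit` items in an RSS feed."""
    root = ET.fromstring(feed_xml)
    items = root.findall("./channel/item")[:limit]
    return [(i.findtext("title"), i.findtext("link")) for i in items]

student_blog = """<rss><channel><title>Student Blog</title>
<item><title>My First Week</title><link>http://example.com/1</link></item>
<item><title>Campus Food</title><link>http://example.com/2</link></item>
</channel></rss>"""

for title, link in latest_items(student_blog):
    print(title, "-", link)
```

Repeating this over several student blogs, a local-news feed, and a tagged Flickr feed, then rendering each result as a column, is essentially the paper.li recipe.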
To see examples of paper.li papers, visit the paper.li website. (And note that .li is the website suffix — no need to type .com no matter how automatically your fingers try to do so.) You can search paper.li for existing papers to see what is possible. A search for ESL, for example, brought up 5 pages of examples, some with hundreds of followers. Take a look. You might just get an idea for your own paper.li.
One of my favorite presentations at the 2011 Ohio University CALL Conference was made by Jeff Kuhn who presented a small research study he’d done using the above eye-tracking device that he put together himself.
If you’re not familiar with eye-tracking, it’s a technology that records what a person is looking at and for how long. In the example video below, which uses the technology to examine the use of a website, the path that the eyes take is represented by a line. A circle represents each time the eye pauses, with larger circles indicating longer pauses. This information can be viewed as a session map of all of the circles (0:45) and as a heat map of the areas of concentration (1:15).
This second video shows how this technology can be used in an academic context to study reading. Notice how the reader’s eyes do not move smoothly and that the pauses occur for different lengths of time.
Jeff’s study examined the noticing of errors. He tracked the eyes of four ESL students as they read passages with errors and found that they spent an extra 500 milliseconds on errors that they noticed. (Some learners are not ready to notice some errors. The participants in the study did not pause on those errors.)
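To give a feel for how raw gaze data becomes a measure like that extra 500 milliseconds, here is a toy Python sketch that reduces timestamped gaze samples to per-word reading times. The sample format, word regions, and planted error are all invented for illustration; real software like the Ogama Gaze Tracker does far more (calibration, fixation detection, smoothing).

```python
# Reduce a stream of (timestamp_ms, x_position) gaze samples to the total
# time spent inside each word's on-screen region.

def dwell_times(samples, regions):
    """samples: (timestamp_ms, x) gaze points in time order.
    regions: {word: (x_start, x_end)} horizontal span of each word.
    Returns total milliseconds of gaze inside each word's region."""
    times = {word: 0 for word in regions}
    for i in range(1, len(samples)):
        t_prev, x_prev = samples[i - 1]
        t_cur, _ = samples[i]
        for word, (x0, x1) in regions.items():
            if x0 <= x_prev < x1:
                times[word] += t_cur - t_prev
                break
    return times

# "runed" is a planted error; the reader's eyes linger on it.
regions = {"the": (0, 50), "cat": (50, 120), "runed": (120, 200)}
samples = [(0, 10), (50, 30), (100, 70), (150, 140), (200, 150),
           (250, 160), (300, 155), (350, 145), (400, 210)]
print(dwell_times(samples, regions))  # → {'the': 100, 'cat': 50, 'runed': 250}
```

Comparing dwell times on error words against matched correct words is, in miniature, the comparison behind a finding like Jeff’s.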
The study was interesting, but the hardware Jeff built to do the study was completely captivating to me. He started by removing the infrared filter from a web cam and mounting it to a bike helmet using a piece of scrap metal, some rubber bands and zip ties. Then he made a couple of infrared LED arrays to shine infrared light towards the eyes being tracked. As that light is reflected by the eyes, it is picked up by the webcam, and translated into data by the free, open-source Ogama Gaze Tracker.
So, instead of acquiring access to a specialized eye-tracking station costing thousands of dollars, Jeff built a similar device for a little over a hundred bucks, most of which went to the infrared LED arrays. With a handful of these devices deployed, almost anyone could gather a large volume of eye-tracking data quickly and cheaply.
Incidentally, if you are thinking that there are a few similarities between this project and the Wii-based interactive whiteboard, a personal favorite, there are several: both cut the price of the hardware by a factor of at least ten, and probably closer to one hundred; both use free, open-source software; both use infrared LEDs (though this point is mostly a coincidence); both have ties to gaming (the interactive whiteboard is based on a Nintendo controller, and eye-tracking software is being used and refined by gamers to select targets in first-person shooters); and both are excellent examples of the ethos of edupunk, which embraces a DIY approach to education.
Do you know of other interesting edupunk projects? Leave a comment.
YouTube annotations provide a discussion space layered onto each video.
In my previous post, Interactive Videos, I shared some examples of YouTube videos that incorporate some new interactive features of the site that overlay buttons and links that can take you to a different segment of the video or to a different video or website entirely.
These kinds of pop-up messages have been crowding onto YouTube videos since this feature became available. If used gratuitously, they are annoying, but when used to add supplemental information, they can be quite useful. As one example, take a look at the video tutorial for making the above image. It’s a straightforward and informative two-minute video. At about the 1:30 mark, some red text appears that seems to be essential information that was omitted in the original shooting of the video. Adding a quick note is a simple solution that does not require reshooting the video.
But there must be more we can do with these tools. I’d been thinking about some different ways to incorporate these techniques when I came across a presentation made by Craig Howard at the Indiana University Foreign / Second Language Share Fair. The page includes a recording of the presentation, a handout that summarizes how to annotate YouTube videos, and a link to an example video, which I’ve included below.
The nice thing about this approach is that a video, in this case a video for teachers-in-training to discuss, can include the online conversation layered right over top of the video. Comments by different speakers can be made in different colors, and the length of time they are displayed can easily be adjusted as appropriate. Of course, everyone involved needs a free Google or Gmail account to sign in, and the video must be configured to allow annotations by people other than the person who uploaded it.
The ability to integrate video materials and online discussion so seamlessly opens up some interesting potential for interacting with videos in new and interesting ways. I’ve recently looked at some options for online bulletin boards / sticky notes, including Google Docs, but incorporating this style of discussion directly onto the video is fantastic.
I’m still kicking around different options for making YouTube videos more interactive. If you have other examples or ideas, please share them in the comments below.
When I hear the phrase interactive videos, I think of people covered in fluorescent mocap ping-pong balls or choppy, Choose Your Own Adventure-style stories like Dragon’s Lair. And there are those. But, it seems that some creative tinkerers have pushed the envelope with some of YouTube’s interactive features and come up with some interesting results.
How can they be used with ESL and EFL students? Well, in addition to viewing and interacting with the videos and then discussing or reporting on the experience, students could be challenged to determine how the videos were made. For the more ambitious, students could make their own videos using the same techniques. Some of them, like the Oscars “find the difference” photo challenge, would be relatively easy to remake.
Ever stare out into a roomful of your students’ faces as you explain the role of the comma in differentiating restrictive and non-restrictive adjective clauses? I have. After a few terms, I began to wonder whether those blank stares meant that students were overwhelmed by the topic, bored because they already understood the material and couldn’t wait to move on, or just plain bored (though I was pretty confident it was the last of these).
I thought it would be great if we teachers could adopt the same technology that the network news teams use when they take a roomful of average citizens and make them watch debates with a dial in their hand. By turning the dial left when they are happy and right when they are not, an average response is displayed in a graph that scrolls across the bottom of the screen. Wouldn’t it be great if students could dial between “I don’t understand. Slow down.” and “I get it. Move on.”? For now, we must make do with the analog, “Any questions?”
Getting live feedback can be very useful in the classroom. Poll Everywhere is a website that makes creating live polls extremely easy. With a free account, you can create a poll that allows up to 30 responses by web, text message, smartphone or Twitter. You can even download your poll on a PowerPoint slide, which you can use to observe the results as they roll in. More features are available for paid accounts.
Polls are very easy to set up, and there are lots of good online tutorials out there, including this one by Sue Frantz. These kinds of polls can do a great job of gathering instant feedback from your students using technology they likely already have with them (instead of requiring them to purchase clickers, devices with only one function). Whether you are asking students whether the pace of the class is appropriate or checking comprehension of content, Poll Everywhere is an extremely flexible tool that can be used in a wide variety of situations.
To respond to this poll, text the code for your response to 37607, tweet the code to @poll, submit the code to http://poll4.com, or use the web form to make your selection. View results.
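The mechanics behind a live poll like this are simple enough to sketch: each incoming response is just a short code, and the display is a running tally that updates as responses roll in. The option codes and labels below are invented for illustration, not Poll Everywhere’s actual codes.

```python
# Tally incoming poll response codes into labeled counts, the way a live
# results chart would as responses "roll in".
from collections import Counter

options = {"A": "Slow down", "B": "Just right", "C": "Speed up"}

def tally(responses):
    """Map each response code to its option label and count the votes."""
    counts = Counter(options.get(code, "invalid") for code in responses)
    return dict(counts)

incoming = ["B", "B", "A", "C", "B", "Z"]  # "Z" is an unrecognized code
print(tally(incoming))  # → {'Just right': 3, 'Slow down': 1, 'Speed up': 1, 'invalid': 1}
```

In a real deployment the responses would arrive over SMS, Twitter, or the web form, but the aggregation step is this simple.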
I’m not extremely fluent in all of these technologies (for more info, see Flight of the Navigator), but as a demo, this is pretty impressive. To me, it looks a little like Second Life with tons of screens out to the internet. In other words, slick and different, but I’m not sure how useful, or even how truly integrated this experience would be. Would you rather navigate to different places on the Web by moving through a 3D space or by Ctrl-Tabbing to the next open tab in your browser? Maybe I’m old-school, but the latter seems far easier to me.
Of course, there are lots of other demos posted online and it will be interesting to see where this goes. Checking your favorite Twitter feeds in-game would certainly blur the line between the gaming experience and the real world, but is this necessary? Probably not, but maybe that’s not the question to be asking with whiz-bang technology like this. It certainly opens up interesting avenues for the greater integration of a wide range of technologies. Where that takes us will be interesting to see.