What is Gesture-Based Computing?

Thanks in part to the Nintendo Wii, the Apple iPhone, and the iPad, many people now have immediate experience with gesture-based computing as a means of interacting with a computer. The proliferation of games and devices that incorporate easy, intuitive gestural interactions will certainly continue, bringing with it a new era of user interface design that moves well beyond the keyboard and mouse. While the full potential of gesture-based computing remains several years away, especially in education, its significance should not be underestimated, particularly for a new generation of students accustomed to touching, tapping, swiping, jumping, and moving as ways of engaging with information.

It is almost a cliché to say, but for many people the first exposure to gesture-based computing came nearly a decade ago, when they watched Tom Cruise in Minority Report swat information around in front of him with sweeps of his arms. Fittingly, John Underkoffler, who designed the movie's fictional interface, presented a non-fiction version of it, called the G-Speak, in a 2010 TED Talk, underscoring the growing relevance and promise of gesture-based computing. The G-Speak tracks hand movements and allows users to manipulate 3D objects in space. This device, along with SixthSense, developed by Pranav Mistry while at the MIT Media Lab, which uses visual markers and gesture recognition to allow interaction with real-time information, has ignited the cultural imagination about the implications of gesture-based computing. That imagination is further fueled by the Kinect system for the Xbox, which continues to explore the potential of human movement in gaming. In short, gesture-based computing is moving from fictional fantasy to lived experience.
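To make the idea of gesture recognition a little more concrete, the short Python sketch below shows one tiny building block: classifying a touch swipe by its dominant direction from a handful of sampled screen coordinates. This is a minimal illustration only; the function name, distance threshold, and point format are assumptions invented for the example, and real systems such as the G-Speak, SixthSense, and Kinect rely on far more sophisticated tracking hardware and machine-learned models.

```python
# Illustrative sketch (not from any of the systems mentioned above):
# classify a touch swipe by comparing the start and end points of a
# drag. Coordinates are assumed to be screen pixels with y growing
# downward, as is conventional on touch displays.

def classify_swipe(points, min_distance=50.0):
    """Classify a sequence of (x, y) touch points as a swipe.

    Returns 'left', 'right', 'up', 'down', or None if the motion
    is too short to count as an intentional gesture.
    """
    if len(points) < 2:
        return None
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if (dx * dx + dy * dy) ** 0.5 < min_distance:
        return None  # movement too small to be intentional
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'

# Example: a mostly horizontal drag to the right
print(classify_swipe([(10, 100), (60, 105), (140, 110)]))  # -> 'right'
```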

INSTRUCTIONS: Enter your responses to the questions below. This is most easily done by moving your cursor to the end of the last item and pressing RETURN to create a new bullet point. Please include URLs whenever you can (full URLs will automatically be turned into hyperlinks; please type them out rather than using the linking tools in the toolbar).

Please "sign" your contributions by marking with the code of 4 tildes (~) in a row so that we can follow up with you if we need additional information or leads to examples- this produces a signature when the page is updated, like this: - alan alan Jan 27, 2010

(1) How might this technology be relevant to the educational sector you know best?

  • One of the difficulties with many institutional systems is the complexity of operation. Learners spend too much time and cognitive energy working out how to navigate poorly designed managed learning environments (MLEs) when they should be concentrating on learning. Natural gesture technology will open up new opportunities for learning because it will be more transparent, enabling students to focus more on learning and less on the technology itself. - steve.wheeler steve.wheeler
  • another response here

(2) What themes are missing from the above description that you think are important?

  • There is currently some work on facial recognition, with implications for intelligent sensing of boredom, confusion, curiosity, etc. - steve.wheeler steve.wheeler
  • another response here

(3) What do you see as the potential impact of this technology on teaching, learning, research or information management within the next five years?

  • Natural gestures such as SixthSense's 'image capture' gesture could unleash new areas of creativity, because learners will be able to interact with their natural learning environments through the technology. - steve.wheeler steve.wheeler
  • another response here

(4) Do you have or know of a project working in this area?

Please share information about related projects in our Horizon Project sharing form.