A quite interesting demonstration of a new interface being researched at MIT’s Media Lab. I say "quite interesting" because despite the rapturous reception from a totally uncritical audience, it’s clear that there’s a long way to go in development, and many of the applications demoed raise an awful lot of questions about how practical this would be.
The gesture interface in Minority Report is actually a lot easier to do than this one. For example, does the projector have an autofocus capability that can track the surfaces being used (hands, wrists, newspapers, walls, books)? Some of the mode changes also stretch credulity more than a little: I expect, for example, that the gesture for drawing a wristwatch on the back of one’s wrist is the same gesture used for something else in other applications, so the whole issue of mode switching is being skipped over in these demos. Still, as I say, it remains a quite interesting piece of research. Just don’t expect real-world results to appear quickly.