Graphical User Interfaces – Moving to Gesture Recognition
I’ve maintained for a long time now that the video game industry is the one to watch. To borrow a line usually attributed to Picasso: good engineers develop, but great engineers steal. We should be stealing from the video game industry. With its high-resolution graphics and 3D techniques, it’s the place to shop for new technology. But there are other technologies to consider as well, such as gesture recognition.
Gesture recognition is not new. In fact, it goes back to 1964, when ARPA funded the RAND tablet. (If you can remember the RAND Corporation, you’re really dating yourself.) If it’s been around for so long, why hasn’t it penetrated further into everyday use? As with most emerging technologies, its success is defined not by the technology itself but by how well it works with other technologies, and by how many of them it requires to run well.
One application of gesture recognition is demonstrated in a paper from MIT that used kitchen surfaces as display space, overlaying digital information on them for the user to manipulate. The system used multiple projectors and actually moved the projection if the surface it was projecting onto, such as a table, moved. The image also changed based on the task the user was performing. Applications include showing the contents of the refrigerator on the outside of its door, noting the items that need to be purchased; a display on the dishwasher showing the state of its contents, dirty or clean; and an application called HeatSink, which measures the temperature of water coming from the tap and projects a color onto the water stream to indicate it to the user.
The instrumented kitchen makes one thing clear: a whole range of supporting technologies is required to make gesture recognition work, including image processing, temperature sensing, proximity sensing, object tracking, and more.
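The paper doesn’t ship code, but the HeatSink idea is easy to picture: read a temperature, map it to a color, project the color. Here’s a minimal Python sketch of that mapping; the cold/hot thresholds are my own guesses, not values from the MIT paper:

```python
def water_color(temp_c, cold_c=15.0, hot_c=50.0):
    """Map a tap-water temperature (deg C) to a projected RGB color.

    Blue at or below cold_c, red at or above hot_c, and a linear
    blend in between. The thresholds are illustrative, not from
    the MIT system.
    """
    # Clamp the reading into the expected range, then normalize to 0..1.
    t = max(cold_c, min(hot_c, temp_c))
    frac = (t - cold_c) / (hot_c - cold_c)
    # Blend from blue (cold) to red (hot).
    return (int(255 * frac), 0, int(255 * (1 - frac)))

print(water_color(18.0))   # mostly blue: cold water
print(water_color(45.0))   # mostly red: hot water
```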
While MIT may be instrumenting kitchens to test out the technology, others are actively installing gesture recognition systems in your local mall. The one in the mall nearest my home is called Reactrix: a ceiling-mounted projector displays vendor brands and games on the mall floor, enticing kids to participate in the games and questionnaires. The system “reads” the kid’s movements over the projected image and reacts to them. For example, a soccer field is displayed with a ball in the middle; as the kid tries to kick the image of the ball, Reactrix sends the ball floating across the field, encouraging the kid to chase it. To see this in action for yourself, visit the Reactrix web site, which shows a map of over 160 locations across the USA.
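Reactrix hasn’t published how its system works, but one plausible mechanism is simple frame differencing: compare successive camera frames, find where pixels changed, and if the change lands near the ball, push the ball away. Here’s a rough Python sketch of that guess; the frames, thresholds, and “kick” physics are all invented for illustration:

```python
def motion_centroid(prev, curr, threshold=30):
    """Return the centroid (x, y) of pixels that changed between two
    grayscale frames (lists of rows), or None if nothing moved."""
    xs, ys = [], []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

def kick_ball(ball, centroid, kick_radius=5, speed=3):
    """If motion occurred near the ball, push the ball directly away
    from the motion centroid -- the 'kick' reaction."""
    bx, by = ball
    mx, my = centroid
    dx, dy = bx - mx, by - my
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0 or dist > kick_radius:
        return ball  # motion too far away (or dead center): no kick
    return (bx + speed * dx / dist, by + speed * dy / dist)

prev = [[0] * 20 for _ in range(20)]
curr = [row[:] for row in prev]
curr[10][9] = 255               # a "foot" appears just left of the ball
c = motion_centroid(prev, curr)
print(kick_ball((10, 10), c))   # ball nudged away, to the right
```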
As with all emerging technologies, there are bugs to work out of the system. For gesture recognition, it comes down to interpreting the gestures the user makes: a number of companies are applying heuristics and other algorithms to interpret a user’s movements.
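To make “heuristics” concrete, here’s a toy Python classifier that turns a tracked hand trajectory into a gesture. The gesture set and thresholds are my own assumptions, not any vendor’s algorithm:

```python
def classify_gesture(path, swipe_dist=100, tap_dist=15):
    """Classify a tracked hand path (list of (x, y) samples) with
    simple heuristics: large net horizontal motion is a swipe,
    little net motion is a tap. Thresholds are illustrative."""
    if len(path) < 2:
        return "none"
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= swipe_dist and abs(dx) > abs(dy):
        return "swipe right" if dx > 0 else "swipe left"
    if (dx * dx + dy * dy) ** 0.5 <= tap_dist:
        return "tap"
    return "unknown"

print(classify_gesture([(0, 0), (60, 5), (140, 8)]))    # swipe right
print(classify_gesture([(50, 50), (52, 48), (49, 51)])) # tap
```

The weakness of heuristics like these is exactly the bug the companies are wrestling with: a hesitant swipe or a sloppy tap falls between the thresholds and gets misread.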
Wouldn’t it be cool to project a LabVIEW front panel on the wall and then let the user physically turn the knob, toggle the switch, or even tap twice on a VI to have its front panel open up?
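LabVIEW doesn’t expose a gesture API for this today, so treat the following Python sketch as a thought experiment: it shows only the double-tap logic (two taps close together in time and space) that such a system would need before it could open a VI’s front panel. Every name and threshold here is hypothetical:

```python
import time

class DoubleTapDetector:
    """Turn raw tap events (from a camera or touch sensor) into
    double-tap events. All names and thresholds are hypothetical,
    not a LabVIEW API."""

    def __init__(self, max_interval=0.4, max_offset=20):
        self.max_interval = max_interval  # seconds allowed between taps
        self.max_offset = max_offset      # pixels allowed between taps
        self.last = None                  # (t, x, y) of the previous tap

    def tap(self, x, y, t=None):
        """Register a tap; return True when it completes a double tap."""
        t = time.monotonic() if t is None else t
        if self.last is not None:
            lt, lx, ly = self.last
            close_in_time = (t - lt) <= self.max_interval
            close_in_space = (abs(x - lx) <= self.max_offset
                              and abs(y - ly) <= self.max_offset)
            if close_in_time and close_in_space:
                self.last = None  # consume both taps
                return True
        self.last = (t, x, y)
        return False

det = DoubleTapDetector()
print(det.tap(100, 100, t=0.0))  # False: first tap, start waiting
print(det.tap(105, 98, t=0.3))   # True: double tap -> open the VI's panel
```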
Best regards,
Hall T.