Graphical User Interfaces—Semanticons Anyone?
In my last post about graphical user interfaces, I noted that the state of the art was moving toward 3-D representations, with the goal of making interfaces more realistic.
Technology Research News ran an article in December about using photos to make files and folders more meaningful. Northwestern developed a system called Semanticons that examines the contents of a file and generates a set of keywords. It then looks up images associated with those keywords and creates a composite image that represents the file’s contents.
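To make that pipeline concrete, here is a minimal Python sketch of the same idea: pull keywords out of a file, map each keyword to an image, and tile the images into a composite icon. The keyword extraction is a naive word-frequency count, and lookup_image() is a hypothetical stand-in for whatever image search the real system performs; this is only an illustration, not the Northwestern implementation.

```python
# Sketch of the Semanticons idea: keywords from a file -> composite icon.
from collections import Counter
from PIL import Image  # Pillow, used here to assemble the composite

STOPWORDS = {"the", "and", "a", "of", "to", "in", "for", "is", "on"}

def extract_keywords(path, top_n=4):
    """Return the most frequent non-trivial words in a text file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        words = [w.strip(".,;:()").lower() for w in f.read().split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def lookup_image(keyword):
    """Hypothetical: fetch a thumbnail associated with a keyword."""
    return Image.open(f"images/{keyword}.png").resize((32, 32))

def build_semanticon(path, size=64):
    """Tile up to four keyword thumbnails into one square icon."""
    icon = Image.new("RGB", (size, size), "white")
    offsets = [(0, 0), (32, 0), (0, 32), (32, 32)]
    for keyword, offset in zip(extract_keywords(path), offsets):
        icon.paste(lookup_image(keyword), offset)
    return icon

# build_semanticon("report.txt").save("report_icon.png")
```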
Imagine a LabVIEW program in which the icons used a similar concept to create a visual cue indicating the contents of a user-created virtual instrument. The “semanticon” could indicate the VI’s functionality (e.g., acquiring, analyzing, converting, or displaying data), or it could indicate the virtual instrument’s position in the hierarchy (e.g., primitive, mid-level, or high-level icon). So for the LabVIEW community I ask: Semanticons, anyone?
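For illustration only, here is a hedged sketch of how such a classifier might pick a category and hierarchy level for a VI. It assumes you already have the list of function and subVI names used inside the VI (in practice that would mean something like LabVIEW’s scripting/VI Server interface), and the keyword sets below are invented for the example rather than any official taxonomy.

```python
# Hypothetical "semanticon" classifier for a LabVIEW VI.
CATEGORY_KEYWORDS = {
    "acquire": {"daqmx", "read", "sample", "acquire"},
    "analyze": {"fft", "filter", "mean", "analyze"},
    "convert": {"scale", "cast", "convert", "format"},
    "display": {"graph", "chart", "indicator", "display"},
}

def classify_vi(function_names):
    """Return (category, hierarchy level) guessed from a VI's contents."""
    names = {n.lower() for n in function_names}
    scores = {cat: len(names & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    category = max(scores, key=scores.get) if any(scores.values()) else "unknown"
    # Crude hierarchy guess: more subVIs suggests a higher-level VI.
    count = len(function_names)
    level = "primitive" if count < 5 else "mid-level" if count < 20 else "high-level"
    return category, level

print(classify_vi(["DAQmx Read", "Mean", "Waveform Graph"]))
```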
GUIs have come a long way in the last 25 years. For a walk down memory lane, check out this timeline. Looking forward, Gartner predicts that current GUI standards will remain in place until 2010, at which point they will give way to a new generation driven by the emergence of handheld computing and a shift in users from early adopters to the mainstream.
In this MIT paper, the authors blur the distinction between input devices (keyboard and mouse) and output devices (monitors and touch screens) with a concept called “tangible user interfaces.” Drawing inspiration from the abacus, which makes no distinction between input and output but is simply a physical representation, the authors contend that the next generation of GUIs will be tangible objects imbued with computational control. At the SIGGRAPH 2006 conference, the authors presented this concept as “Tangibles at Play,” shifting the graphical representation from a screen to a set of physical objects.
Other techniques for working with graphical user interfaces involve the system’s ability to recognize the user’s gestures or motion. Gesture recognition uses a camera to view the user’s hand and converts its position and movement into commands for the computer. MIT created the “Conductor’s Jacket,” controlled by LabVIEW, which converts a person’s movement into music by feeding signals through a MIDI interface into a music synthesizer.
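The general movement-to-music mapping is easy to sketch. The code below is only an illustration, not the actual Conductor’s Jacket signal chain: it takes a (faked) stream of tracked hand positions, maps vertical position to pitch and speed of motion to loudness, and prints the resulting MIDI note-on values instead of sending them to a synthesizer.

```python
# Toy mapping from tracked hand motion to MIDI note-on messages.
import math

def to_midi_note(y, height=480, low=48, high=84):
    """Map vertical pixel position to a MIDI note (higher hand = higher pitch)."""
    return low + round((1 - y / height) * (high - low))

def to_velocity(dx, dy, max_speed=50.0):
    """Map movement speed between frames to MIDI velocity (1-127)."""
    speed = math.hypot(dx, dy)
    return max(1, min(127, round(speed / max_speed * 127)))

positions = [(100, 400), (120, 350), (160, 260), (220, 180)]  # fake hand track
for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
    note = to_midi_note(y1)
    velocity = to_velocity(x1 - x0, y1 - y0)
    print(f"note_on  note={note:3d}  velocity={velocity:3d}")
```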
Another example of gesture recognition is the multi-touch work by Jeff Han’s team at NYU, which senses the position and movement of the user’s fingers and reacts accordingly. In the associated video you can see how the screen responds to pressure and touch at multiple points simultaneously, going far beyond the one-touch screens we use today.
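As a small illustration of what multiple touch points buy you, the sketch below tracks two (faked) finger positions and turns the change in the distance between them into a zoom factor, the familiar pinch-to-zoom gesture. A real system would read the coordinates from the touch sensor rather than from hard-coded values.

```python
# Pinch-to-zoom from two tracked touch points.
import math

def pinch_zoom(prev_touches, curr_touches):
    """Return the scale factor implied by two fingers moving apart or together."""
    (ax0, ay0), (bx0, by0) = prev_touches
    (ax1, ay1), (bx1, by1) = curr_touches
    before = math.dist((ax0, ay0), (bx0, by0))
    after = math.dist((ax1, ay1), (bx1, by1))
    return after / before if before else 1.0

prev = [(100, 100), (200, 200)]   # two fingers close together
curr = [(80, 80), (240, 240)]     # fingers spread apart
print(f"zoom factor: {pinch_zoom(prev, curr):.2f}")  # > 1.0 means zoom in
```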
It seems clear that graphical user interfaces are going to shift not just to 3-D representations on the screen but to the real, three-dimensional world, with the screen disappearing altogether.
Best regards,
Hall T.