Friday, July 29, 2005

Video Game Technology – To State-of-the-Art and Beyond

Since the advent of the modern personal computer, video game technology has been a driving force pushing the state of the art. Graphics, audio, file storage, processing power, memory, and more are pushed to their limits by video games. While I don’t play video games, I certainly enjoy the benefits of the hardware and software platforms built to meet their demands. In the 1980s, during the early days of virtual instrumentation, we used to thank the makers of “Donkey Kong”, a popular video game at the time, because they pushed the PC toward a more powerful platform, which made it more usable for scientific and engineering applications. Without that volume of users (there are more video gamers than scientists and engineers out there), the PC wouldn’t have developed as fast or come as far.

For those who haven’t followed the video game industry, it’s now bigger than the movie industry: video games generate $30B per year and grow at 20% per year, compared to the $20B that movies make. By 2008 it will be a $55B industry. This growth is not lost on the big players in the computer industry. Aside from the Xbox initiative, Microsoft is pouring funding into research on the technologies behind programming games, recently awarding $480k in grants as part of its Computing Gaming Curriculum program.

Speech recognition is one area that video games now drive. Speech recognition has been under research for many years but has seen acceptance in only a few niche applications. IBM came out with a speech recognition tool a few years back that found some adopters, but mostly yawns. Several healthcare startups tried to implement speech recognition for doctors transcribing patient information; most failed because a substantial number of doctors speak with accents that throw off the software. With video gamers adopting speech recognition tools, we’ll see a jump in the performance and functionality of those tools.

The Graphical User Interface is another example of state-of-the-art technology driven by the video game world that can be adapted to other domains, such as medical and industrial applications.

The gaming industry is pushing the boundaries of the processor as well. Please review my post on the Cell Processor, where I outline the features of the next generation processor.

Cellphones can record not only your calls but also your location, both past and present. Without going into the social implications of someone having access to your whereabouts, technology for tracking the location of things has many uses. While the mobile gaming industry gears up to use this technology to improve the gaming experience, virtual instrumentation can adopt it for wireless sensor networks, which seek to track the positions of sensors that may themselves be mobile.
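As a rough sketch of what position tracking involves, here’s a minimal Python example that estimates a mobile sensor’s 2-D position from its measured distances to three fixed anchor nodes, the basic geometry behind many localization schemes. The function name and the numbers are my own, purely illustrative:

```python
def trilaterate(anchors, distances):
    """Estimate a 2-D position from distances to three fixed anchors.

    Subtracting the first circle equation from the other two cancels
    the x^2 and y^2 terms, leaving a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Linearized system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With anchors at (0, 0), (10, 0), and (0, 10) and exact ranges to the point (3, 4), the solver recovers (3, 4). Real range measurements are noisy, so practical systems fit over more than three anchors.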

While I don’t play video games myself, I certainly enjoy the benefits they are bringing to virtual instrumentation in the form of better, faster, and cheaper computational tools.

If you are working with video game technology, I would like to hear from you. Please email me at
Best regards,
Hall T. Martin

Friday, July 22, 2005

Battery Technology – Even Nuclear Options are in the Game

Many applications in the world of virtual instrumentation require battery power. The limitations of current battery technology require manual processes (e.g. replacing batteries) and provide limited solutions -- the battery lasts only so long. Emerging battery technologies hold some promise. Even nuclear options are under research. Direct Energy Cell (DEC) conversion generates power from the decay of a radioactive source, using a single-step capture-and-conversion technique rather than an electrochemical process. Tritium is one example; it has a half-life of over 12 years, so it would work well for wireless sensor networks in hard-to-reach locations, where you wouldn’t have to change the battery more than once a decade, if that. While it’s not a good fit for mass-market consumer devices due to cost and obvious safety issues, it fits well for industrial applications with more stringent requirements, such as decade-long lifetimes.
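To see why a tritium source suits decade-scale deployments, a quick back-of-the-envelope Python calculation (using tritium’s commonly cited half-life of about 12.3 years) shows how much of the source, and hence roughly how much of its power output, remains over time:

```python
def remaining_fraction(years, half_life=12.3):
    """Fraction of a radioisotope source left after `years` of decay.

    For a decay-powered cell the power output falls off roughly with
    the remaining activity, so this also approximates power fade.
    """
    return 0.5 ** (years / half_life)
```

After ten years roughly 57% of the original output remains, which is why a change-out interval of a decade or more is plausible.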

Microcombustion technology generates heat by burning small amounts of liquid hydrocarbons; the heat is then converted to electricity by other means. This technique packs a lot of energy into a very small package.

Other advances include polymer batteries, which use a different type of electrolyte: one that allows ion exchange but does not conduct electrons. Most batteries using this technology are actually a hybrid between the old gel-based electrolyte and the new gel-free one. The polymer base allows the battery to be molded into unique shapes and forms, which is useful for portable applications that can’t accommodate fixed-shape batteries.

Another vector for battery innovation is packaging. Power Paper makes batteries from a flexible polymer material embedded with a proprietary ink using a mass-printing technology. The result is a 1.5V battery that can take nearly any shape and be embedded in materials such as band-aids, paper, and plastic.

Smart batteries come with additional circuitry that reports the battery’s state of charge and state of health, which is useful for a user who needs to know the condition of the battery. There’s a simple one-wire version that brings the data out to a chip for measurement, and a more sophisticated SMBus version that provides a data communication bus complete with protocol standards set by Intel and Duracell back in the early 90s. The SMBus version allows the battery to take control of charging from the charger, letting the battery determine how it will be recharged.
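As an illustration of what the SMBus version reports, here’s a small Python sketch that converts raw 16-bit register words into engineering units, using register numbers and units from the Smart Battery Data Specification. The bus access itself is hardware-dependent and omitted, and the raw values in the usage example are made up:

```python
def decode_smart_battery(raw):
    """Convert raw 16-bit SMBus register words to engineering units.

    Register numbers and units follow the Smart Battery Data
    Specification: Temperature() is 0x08 in units of 0.1 K,
    Voltage() is 0x09 in mV, RelativeStateOfCharge() is 0x0D in
    percent. `raw` maps register number -> word value; reading the
    words off the bus requires an SMBus driver and is omitted here.
    """
    return {
        "temperature_c": raw[0x08] / 10.0 - 273.15,
        "voltage_v": raw[0x09] / 1000.0,
        "state_of_charge_pct": raw[0x0D],
    }
```

For example, raw words {0x08: 2982, 0x09: 12600, 0x0D: 87} decode to about 25 °C, 12.6 V, and 87% charge.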

The first generation of advanced power supplies, such as lithium-ion batteries, will soon be replaced by fuel cells, which are refueled rather than recharged and offer high output densities. Size is also an important factor, as more advanced equipment packs more components into limited real estate (i.e. the size of the entire finished product). There are numerous examples. From MTI comes a Micro Fuel Cell based on DMFC, a Direct Methanol Fuel Cell technique; it’s not yet a shipping product, but appears to be in the prototype stage. Manhattan Scientifics also makes a methanol-based Micro Fuel Cell. Their technology can produce three times the power of a lithium battery, with the goal of achieving a 20x improvement.

Supercapacitors store energy as a static charge rather than through an electrochemical process; they are a cross between traditional capacitors and batteries. Conventional capacitors can deliver energy quickly but store very little of it, while batteries such as lithium-ion store plenty of energy but have a limited ability to deliver it quickly and efficiently. Applications such as starting a car require a surge of electricity. Supercapacitors combine battery-like energy storage with capacitor-like power discharge, and can be used either standalone or in conjunction with a battery. Nanotechnology, in particular carbon nanotubes, offers improved electrode materials for supercapacitors.
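The trade-off is easy to quantify: a capacitor stores E = 1/2 C V^2, so a supercapacitor’s huge capacitance is what closes the energy gap with batteries. A quick Python check, with part values that are illustrative rather than from any specific product:

```python
def capacitor_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 1/2 * C * V^2, in watt-hours."""
    return 0.5 * capacitance_f * voltage_v**2 / 3600.0
```

A hypothetical 100 F cell at 2.7 V holds about 0.1 Wh, a fraction of a typical AA cell’s roughly 3 Wh, but the supercapacitor can release its energy in seconds and survive far more charge cycles.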

If you are seeking a battery solution for your virtual instrumentation application, here’s an article discussing the basics.

If you are working with battery technology, I would like to hear from you. Please email me at
Best regards,
Hall T. Martin

Friday, July 15, 2005

Web Services – Slowly it Comes

Web Services continue to generate press coverage, with a continuous wave of tools offered up, but how fast are users adopting those tools? It seems adoption is moving rather slowly. The current flavor of the month is Ajax, which rolls several technologies into one: standards-based presentation using XHTML, dynamic display using the Document Object Model, data interchange using XML, and asynchronous data retrieval using XMLHttpRequest.

For those unfamiliar with Web Services, they provide a standard way to exchange data between applications across the web. OASIS and the W3C provide the standards, which are still evolving.
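To make the data-interchange step concrete, here’s a minimal Python sketch parsing the kind of XML payload a web service might return. The payload and element names are invented for illustration, and the HTTP transport is omitted:

```python
import xml.etree.ElementTree as ET

# An invented payload; a real service would return XML like this
# over HTTP (transport omitted here).
payload = """
<readings>
  <reading sensor="temp1" units="C">21.5</reading>
  <reading sensor="temp2" units="C">22.1</reading>
</readings>
"""

def parse_readings(xml_text):
    """Extract (sensor, value) pairs from the XML payload."""
    root = ET.fromstring(xml_text)
    return [(r.get("sensor"), float(r.text)) for r in root.findall("reading")]
```

The point of the standards is that any application, in any language, can produce or consume this payload the same way.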

In this primer on Web Services, the author attributes the success of the web to its simplicity and ubiquity, which contrasts with the slow adoption of Web Services due to the lack of standards. Microsoft’s Passport effort crashed and burned because of its proprietary tools and the general lack of standards.

Also, Web Services still reside in the world of IT. They have yet to move into the non-programmer space the way applications like Excel have. This is an interesting area to work in. One company doing so is StrikeIron, which released a product that brings Web Services to the non-programmer.

Another reason for the slow adoption rate is the lack of security and transaction management built into the architecture. With the rise of viruses and other attacks, security is now a top issue in most web-based applications.

Finally, the cost of moving to a new programming model slows the adoption rate. Shifting from traditional programming languages to service-oriented architectures (SOA) comes at a price, and the price depends on a programmer’s current skills. A C++ programmer can be retrained to work with SOA applications in 5 months at a cost of $23,000, according to a Gartner study; a COBOL programmer, on the other hand, will require 12 months of retraining at a cost of $67,000. SOA development makes heavy reuse of components, so programmers familiar with object-oriented techniques have a lower hurdle to overcome.

If you are working with Web Services, I would like to hear from you. Please email me at
Best regards,
Hall T. Martin