Friday, May 27, 2005

TRIZ – Theory of Inventive Problem Solving

What is TRIZ? It's a Russian acronym for the Theory of Inventive Problem Solving.

The idea behind TRIZ is that there exists a universal set of principles for problem solving (read: invention) and that these principles can be codified into steps that apply to any problem. It was developed by Genrich Altshuller, a Russian inventor who worked in a Soviet patent office in the 1940s and 50s. He examined over 2 million patents, classified the problem solving he found among the entries, and drew three conclusions from his investigation:
1. Problems and solutions were repeated across industries and sciences.
2. Patterns of technical evolution were repeated across industries and sciences.
3. Innovations used scientific effects outside the field where they were developed.

Everything that performs a function is a technical system. A piece of chalk and a blackboard alone do not constitute a system, but if we combine chalk, a board, and an applied force, then we have a system – albeit a simple one. Other examples include a transportation system consisting of cars, roads, maps, drivers, and service stations, or a car consisting of a power train, brakes, heating, steering, and electrical systems.

There are different levels of innovation. Altshuller found some patents more inventive than others and proposed five levels of innovation:
1. A simple improvement to a technical system.
2. An invention that resolves a technical contradiction.
3. An invention that resolves a physical contradiction.
4. A new technology containing a breakthrough solution.
5. The discovery of a new phenomenon.

TRIZ introduces the concept of Ideality. The Law of Ideality states that any technical system, throughout its lifetime, tends to become more reliable, simpler, and more effective – more ideal. Highly complex systems are further from their ideality than simple systems.
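TRIZ literature often expresses ideality as a ratio (the exact wording varies by author, so treat this as a common formulation rather than Altshuller's exact words): Ideality = (sum of a system's useful functions) / (sum of its harmful functions + its costs). A system moves toward ideality as the numerator grows and the denominator shrinks.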

When improving the design of a system, contradictions appear. Improving the system in one area causes the system to deteriorate in another area. For example, increasing the power of an engine may increase the weight of the engine.

Based on his review of patents, Altshuller found that technical systems developed in certain ways. From this observation he drew up a list of 8 development patterns:
1. Life cycle
2. Dynamization
3. Multiplication cycle
4. Transition from macro to micro level
5. Synchronization
6. Scaling up or down
7. Uneven development of parts
8. Replacement of human

Altshuller created a list of 40 principles, or techniques, for eliminating contradictions. As one improves one facet of a system, it comes into conflict with another facet. The 40 principles give the innovator a list of techniques for resolving the contradiction. For example, Periodic Action is one principle. An example of its use is watering a lawn: a continuous spray of water can damage the lawn, while a pulsating action can deliver the water without damage.

Finally, TRIZ puts forth a methodology for innovation called ARIZ (Algorithm for Inventive Problem Solving). It's the central analytical tool and provides detailed steps for solving a problem. There are nine steps (a rough sketch of the flow in code follows the list):
1. Analysis of the problem – a simple statement of the problem.
2. Analysis of the problem's model – focus on the area of conflict.
3. Formulation of the Ideal Final Result (IFR) – the physical conflict is resolved.
4. Utilization of outside substances and field resources – resolves unclear problems.
5. Utilization of the informational data bank – applies known "Standards" to the problem.
6. Change or reformulate the problem – if a solution is still not found, return to step #1 and look at the problem in a different way.
7. Analysis of the method that removed the physical contradiction – check the quality of the solution once one is found.
8. Utilization of the found solution – analyze the effects of the new system.
9. Analysis of the steps that led to the solution – a checkpoint to compare the solution with other ARIZ solutions.
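
To make the loop concrete, here is a minimal sketch in Python of the flow above. It is only an illustration of the step ordering and the loop back from step 6 – the function names and the stand-in "solver" are my own, since in real ARIZ each step is a detailed, human-driven analysis rather than a function call.

```python
# Toy walk-through of the ARIZ flow described above.
# The "solver" is a hypothetical placeholder, not part of ARIZ itself.

ARIZ_STEPS = [
    "Analysis of the problem",
    "Analysis of the problem's model",
    "Formulation of the Ideal Final Result (IFR)",
    "Utilization of outside substances and field resources",
    "Utilization of the informational data bank (Standards)",
    "Change or reformulate the problem",
    "Analysis of the method that removed the physical contradiction",
    "Utilization of the found solution",
    "Analysis of the steps that led to the solution",
]

def run_ariz(problem, try_to_solve, max_reformulations=3):
    """Walk the ARIZ steps, looping back (step 6) when no solution is found."""
    for _ in range(max_reformulations):
        for i, step in enumerate(ARIZ_STEPS[:5], start=1):
            print(f"Step {i}: {step} -- {problem}")
        solution = try_to_solve(problem)       # steps 1-5 produce a candidate solution
        if solution is None:                   # step 6: reformulate and start over
            print("Step 6: no solution found, reformulating the problem")
            problem = f"(reformulated) {problem}"
            continue
        for i, step in enumerate(ARIZ_STEPS[6:], start=7):
            print(f"Step {i}: {step}")         # steps 7-9: check and generalize the solution
        return solution
    return None

# Example with a trivial stand-in solver:
print(run_ariz("continuous watering damages the lawn",
               lambda p: "use a pulsating (periodic) spray" if "watering" in p else None))
```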

Most of the example cases apply to hardware systems, but how does TRIZ apply to software systems? Quite well, as a matter of fact. Each of the 40 principles has a software analog, and this article provides a more complete list of them. For example, prior action is analogous to pre-compiling, and periodic action is analogous to scheduling algorithms.
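
As an illustration, here is a minimal Python sketch of those two analogs – "prior action" as precomputing results before they are needed, and "periodic action" as doing work in pulses rather than continuously. The function and parameter names are mine, not from the article linked above.

```python
import time

# "Prior action" analog: do costly work ahead of time (like pre-compiling),
# so run-time requests become cheap lookups.
SQUARES = {n: n * n for n in range(1000)}     # precomputed before any request arrives

def square(n):
    return SQUARES.get(n, n * n)              # fall back to computing if not precomputed

# "Periodic action" analog: deliver work in pulses (like a scheduling algorithm,
# or the pulsating sprinkler above) instead of one continuous burst.
def process_in_pulses(items, batch_size=10, pause_s=0.01):
    for start in range(0, len(items), batch_size):
        for item in items[start:start + batch_size]:
            square(item)                      # stand-in for real work
        time.sleep(pause_s)                   # yield between pulses

process_in_pulses(list(range(100)))
print(square(42))
```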

There are many examples of how to apply these techniques. This case study gives a flavor of how the approach works.

For detailed examples of each of these concepts check out this link.

If you are working with TRIZ, I would like to hear from you. Please email me at hall.martin@ni.com.

Best regards,
Hall T. Martin

Friday, May 20, 2005

Next Generation Computing Architectures – Multicore processors are here

As single-core architectures reach diminishing returns, multicore architectures are taking over. For many years, chip vendors increased the performance of their processors by shrinking transistors and packing more of them onto the chip. That approach is reaching diminishing returns as total power consumption outstrips the growth in chip performance. The ability to add more functionality to a single core is also running up against the limits of processor clock cycle times, and heat dissipation is another driving factor. To get an idea of the heat dissipation involved, check out this story about the Pentium 4.

Here’s a short overview of multicore contrasted with single core processors.

On the marketing front, AMD is out in front on dual-core processors: its 64-bit Opteron chip is shipping now, while Intel won't ship its dual-core server parts until next year. While Intel maintains a large overall lead, AMD is making gains in market share. Another story indicates Intel will have multicore processors on desktops, workstations, mobile computers, and servers by the end of this year. For those who want a historical view of the battle between Intel and AMD, here's a blog that describes it in gritty detail.

I recently attended a lecture by Norm Jouppi, an HP Fellow at HP Labs working on next-generation processors. He described the evolution of the processor and how performance gains varied based on factors other than the number of transistors. The earliest processors, up through the Intel 386, achieved performance increases on the order of N to the 3rd, while the Intel 486 through the Pentium 4 achieved increases on the order of N to the 2nd. After that point processors achieved no significant performance increase due to the increased complexity of the chips, which is one reason chip manufacturers are moving from single-core to multicore processors. Other factors include voltage scaling issues, environmental constraints, clock frequency, and power. He went on to describe how next-generation chips with multicore processors will usher in the "Golden Age of Parallelism."

He believes that all applications will become parallel in nature. He went on to speculate that old chip architectures, such as the vector processors used in supercomputers of the '70s and '80s, may see a comeback. The trend toward simpler processors will bring shallower pipelines and reduce speculation, though not eliminate it. Speculation is guessing which branch the processor will take and scheduling it rather than waiting for the branch to resolve.

You can see his current work in the following paper, which describes an architecture for a multicore processor in which an intelligent division of functions among the various cores can improve energy efficiency with minimal impact on processing power.

How does this impact the programmer? As new processors move to multicore architectures, thread processing becomes important for hiding memory latency in I/O-intensive applications. Instead of sitting idle waiting for data to come in, the processor can work on other computational tasks. As this blog tells us, it's time to brush up on multithreaded programming skills. The author goes on to say that he believes the next-generation OS will take care of the headaches that come with multicore processing, and he encourages programmers to use threads carefully. For those programming in LabVIEW, you have access to multithreading, and LabVIEW makes parallel processing easier than most other languages.
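
To show the idea in text form (LabVIEW gives you much of this for free with parallel loops), here is a minimal Python sketch that overlaps a simulated I/O wait with computation using threads. The delays, sizes, and function names are made up for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_data():
    """Simulated I/O-bound task: the thread spends its time waiting, not computing."""
    time.sleep(1.0)                # stands in for a network or disk read
    return list(range(1000))

def crunch_numbers():
    """Simulated compute task the processor can work on while the I/O is pending."""
    return sum(i * i for i in range(1_000_000))

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    io_future = pool.submit(fetch_data)   # kick off the "I/O" in another thread
    result = crunch_numbers()             # keep the processor busy in the meantime
    data = io_future.result()             # collect the I/O result when it's ready
print(f"done in {time.time() - start:.2f} s (the 1 s I/O wait overlapped with the computation)")
```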

If you are working with multicore processors, I would like to hear from you. Please email me at hall.martin@ni.com.

Best regards,
Hall T. Martin

Friday, May 13, 2005

Smart Cameras – What are they?

Smart cameras are becoming more prevalent. For those who don't know what they are, Evaluation Engineering provides a good overview. Essentially, they are stand-alone, self-contained (not connected to a PC) imaging systems, which generally use CCD or CMOS technology for sensing and have built-in connectivity such as Ethernet.

"Vision sensor" is another term for a smart camera; it sometimes refers to low-end smart cameras that can take the place of other kinds of sensors – like bar code readers.

Customers are interested in smart cameras because they provide:
Lower cost by replacing complete PC-based imaging systems
Greater reliability
Distributed processing rather than central processing
Adaptation to a diverse set of sensors (light, optical, etc.).

Example applications include 2D/3D inspection, bar code reading, gauging, and pattern recognition. Real-world examples include Nissan putting smart cameras into its cars to determine whether an airbag should deploy based on the car's position relative to oncoming objects.

Johns Hopkins uses smart cameras for video surveillance in response to the deaths of two students. The cameras allow the university to monitor the campus without putting a guard on every corner.
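
To give a feel for the kind of algorithm that runs on the camera itself, here is a minimal pattern-matching (template-matching) sketch using Python and OpenCV. This is only illustrative – a real smart camera runs its vendor's embedded tools rather than this script, and the image file names are placeholders.

```python
import cv2

# Minimal pattern-recognition step of the kind a smart camera runs on board.
# "part.png" and "template.png" are placeholder file names for illustration.
scene = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score the match at each position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# A simple pass/fail decision, as a vision sensor might report over Ethernet.
PASS_THRESHOLD = 0.8
print("PASS" if best_score >= PASS_THRESHOLD else "FAIL", best_score, best_loc)
```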

One drawback is that some smart cameras can run slower: the on-board processor is weighed down with additional algorithms and tasks, which limits the performance of the device. Another potential drawback is reduced flexibility. As the camera becomes "smart," it becomes dedicated to a specific set of tasks and is not as flexible as it once was.

If you are working with Smart Cameras, I would like to hear from you. Please email me at hall.martin@ni.com.

Best regards,
Hall T. Martin

Friday, May 06, 2005

The Other Short-range Wireless Communications – UltraWideBand and Near Field Communication

In addition to WLAN, Bluetooth, and ZigBee, two other short-range wireless communication efforts are gaining interest. The first is UltraWideBand (UWB). UWB overlays onto existing spectrum (almost 7.5 GHz of it) and can theoretically transmit data at 500 Mbps. It runs on low-power CMOS devices at relatively low cost, and it transmits a series of short electrical pulses, shaped by band filtering, to an antenna. To prevent interference with other signals, its signal power is limited based on the application.
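
As a rough illustration of that "series of short pulses" idea (impulse-radio-style UWB, with made-up numbers rather than any real chipset's parameters), here is a small Python/NumPy sketch that builds a train of very short Gaussian pulses; a real system would also band-filter and shape the pulses before the antenna.

```python
import numpy as np

# Rough sketch of an impulse-radio-style UWB pulse train (illustrative numbers only).
fs = 20e9                    # sample rate: 20 GS/s, enough to resolve sub-ns pulses
pulse_width = 0.5e-9         # ~0.5 ns pulse -> energy spread over several GHz
frame = 2e-9                 # one pulse every 2 ns -> 500 million pulses per second

t = np.arange(0, frame, 1 / fs)
pulse = np.exp(-0.5 * ((t - frame / 2) / (pulse_width / 4)) ** 2)   # short Gaussian pulse

bits = np.random.randint(0, 2, 8)                  # data to send, one bit per frame
# Simple bipolar modulation: bit 1 -> +pulse, bit 0 -> -pulse
signal = np.concatenate([(1 if b else -1) * pulse for b in bits])

print(signal.shape, "samples,", len(bits), "bits at", 1 / frame / 1e6, "Mpulses/s")
```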

Wireless Data Facts predicts UltraWideBand nodes and chipsets will grow 400% between 2005 and 2008. The Bluetooth SIG is expected to announce an effort to make Bluetooth compatible with UWB. A joint Bluetooth/UWB solution would give users a higher data rate than standalone Bluetooth, which is currently limited to 3 Mbps.

For market acceptance, the story is mixed. On the upside, Microsoft recently joined the WiMedia Alliance. The WiMedia camp is concentrating on eliminating the wires used in USB and IEEE 1394 devices, and Microsoft's participation brings new firepower to the effort. On the downside, standards battles could keep UWB from taking off; there are over 20 proposals on the table.

The other short-range wireless effort has been around for a while and is called Near Field Communication (NFC). The NFC Consortium, led by Nokia, Philips, and Sony, proposes a standard for consumer electronic devices that allows two devices to exchange information with little or no effort by the user. Networking in the computer world can be done in a more elaborate way, but in the consumer electronics world, less is more. To initiate an NFC communication, the user need only "touch" the two devices together.

A whitepaper by the NFC Consortium outlines several use cases, including transferring photos from a digital camera to a digital television set. If you want to take information from your computer to a PDA so you can take it with you on the road, you need only touch your PDA to the computer and the information is transferred.

A short tutorial indicates that NFC operates over an inductive RF link at 13.56 MHz, at a close range of 20 cm or less, with data rates estimated at 1 Mbps. It targets device-to-device communication (e.g., mobile phone to PC) and promises ease of use: the user need only hold two NFC-enabled devices close together to establish the connection. One device is the initiator and the other is the target, and any device can play either role. Here's a website with more technical specs.
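
As a toy model of that initiator/target relationship (this is not the real NFC protocol stack – the class and method names are invented for illustration), the exchange looks roughly like this:

```python
# Toy model of the NFC initiator/target roles described above.
# This is NOT the real NFC protocol stack; the names are invented for illustration.

class NfcDevice:
    def __init__(self, name, stored_record):
        self.name = name
        self.stored_record = stored_record

    def initiate(self, target):
        """As initiator, start the exchange by 'touching' the target and requesting its record."""
        print(f"{self.name} touches {target.name} and requests data")
        return target.respond(self)

    def respond(self, initiator):
        """As target, answer the request; any device can play either role."""
        print(f"{self.name} responds to {initiator.name}")
        return self.stored_record

camera = NfcDevice("camera", {"photo": "IMG_0001.jpg"})
tv = NfcDevice("television", {"status": "ready to display"})

print(tv.initiate(camera))     # TV is initiator, camera is target
print(camera.initiate(tv))     # roles reversed: camera initiates, TV responds
```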

NFC holds out the added promise that it could make money for communications operators, who might replace Bluetooth with it in next-generation phones. Here's more on the story.

If you are working in wireless communications, I would like to hear from you. Please email me at hall.martin@ni.com.

Best regards,
Hall T. Martin