Wednesday, November 22, 2006

My 100th Post – A Time to Reflect

This is my 100th post. Some bloggers, particularly those who post ten to twelve times a day in stream-of-consciousness fashion, passed the 100 mark within a few weeks. I took a different approach: choose a topic carefully, spend a week researching it, and post once a week.

When I first started blogging two years ago, just about every blogger out there spent their time explaining blogging itself. I promised myself not to follow suit, since it was already a well-worn topic. I saw blogging as a way of archiving and sharing my interactions with key users in the measurement and automation world. Blogging didn't seem revolutionary to me, but rather an evolutionary step in the development of the internet.

In the early days, I focused on emerging technologies and what they were all about, sharing links and tutorial information on newer topics. That gained only a small following, primarily because most of that information is already on the web and only a few mouse clicks away. After the first year, I shifted strategy and focused on the people behind the technologies, and a whole new world opened up. About that time podcasting came into mainstream use, and I noticed that the podcasts I found most interesting were interviews with people working on familiar products or services whom you never hear from otherwise. For example, one always hears from Steve Jobs at Apple, but I really want to hear from the team who designed the iPod.

Over the past 100 posts, I have received numerous emails and comments about the blog. I want to thank those who took the time not only to read it but also to send in a word. Have a happy holiday season.

Best regards,
Hall T.

Friday, November 17, 2006

David Carey uses Wireless Sensor Networks for Flood Plain Monitoring

One of our lead users for Wireless Sensor Networks is David Carey, who performs research at Wilkes University in Pennsylvania. I had a chance to speak with him today about his work.

How are you using Wireless Sensor Networks?

The WSN will be stationed along the flood protection system in Wilkes-Barre, PA to monitor the condition of the levee system. Ultimately this will lead to an early-warning system for the flood protection system and a real-time levee health monitoring system. The WSN will be configured to monitor water level, vibration, and the pressure the river exerts on the levee.

What are current solutions and why do they not work?

The system that currently monitors the levee is hardwired and covers a limited area along the protected zone. It also looks only at pressure and water level. A more detailed understanding of the hydrodynamics is required, but the 60,000-foot view is this:

The current system has a limited number of pressure sensors, spaced along the levee to give partial coverage. If a breach occurs upstream, the pressure wave seen by the sensors will indicate it, but only minutes before the event hits the system. The current system does not offer enough early warning that a potentially catastrophic event is on its way.

How rugged must the system be to survive an outdoor environmental application?

The sensors need to be mounted in a weather- and water-tight box. They must withstand the temperature swings and the rain and snow seen in northeast PA. There is a potential that the WSN will be submerged in water; this would be only for a short time, and the sensor would not be expected to transmit while submerged, though it would be expected to collect data for later transmission. Some WSN nodes will be mounted in subterranean conduits (pipes). To get their data out, a drone vehicle has been proposed to patrol the piping system, collecting data and servicing the nodes as required.

What is the biggest advantage in using wireless sensor networks?

The greatest advantage is the flexibility in placing the sensor nodes. The network can be tailored to the levee system and will not require laying long cables.

What is the biggest challenge in using wireless sensor networks?

There are several challenges. The first is that the device must survive in the environment, which can be handled through packaging. Transmitter efficiency and reliability in the environment is the second. The devices will be mounted above and below ground, and the two areas will not be in contact with each other, so it must be determined how to link the two environments together. There are additional challenges that have nothing to do with the WSN itself. One is data fusion of the vibration and pressure sensors to detect an event early enough to be of service; the current sensor package might not have enough onboard memory for the level of processing required to make this effective. Finally, time is an issue. In an academic environment, where teaching classes takes priority over research, it will be difficult to get everything completed by the January deadline.
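
To make the data-fusion challenge a bit more concrete, here is a minimal sketch of one way the pressure and vibration readings could be combined into a single early-warning decision. The thresholds, weights, and units are illustrative assumptions, not values from the Wilkes-Barre project.

# Illustrative sketch only: thresholds, weights, and units are assumptions,
# not values from the actual levee monitoring project.

def fused_alert(pressure_kpa, vibration_g,
                pressure_rate_limit=0.5,   # kPa rise per sample (assumed)
                vibration_rms_limit=0.2,   # g (assumed)
                weight_pressure=0.6,
                weight_vibration=0.4,
                alert_threshold=0.8):
    """Combine pressure rate-of-rise and vibration energy into one decision."""
    # Fastest rise in pressure over the window; a sharp rise may indicate a
    # pressure wave from an upstream breach.
    dp = max(b - a for a, b in zip(pressure_kpa, pressure_kpa[1:]))
    pressure_score = min(max(dp, 0.0) / pressure_rate_limit, 1.0)

    # RMS of the vibration channel over the same window.
    rms = (sum(v * v for v in vibration_g) / len(vibration_g)) ** 0.5
    vibration_score = min(rms / vibration_rms_limit, 1.0)

    score = weight_pressure * pressure_score + weight_vibration * vibration_score
    return score >= alert_threshold

# Example: a quiet window should not raise an alert.
print(fused_alert([10.0, 10.1, 10.0, 10.2], [0.01, -0.02, 0.01, 0.0]))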

What is your research focused on?

Experimentation has been conducted to examine the throughput and longevity of the sensors across various data sampling rates, loads, and configurations. A report on transmit rate versus battery life is being drafted. Additional work is being done on transmission reliability: the sensors are being placed at various distances and the signal strength is monitored by looking at the data throughput. Data latency is also being examined. All of this goes toward quantifying the sensors' ability to perform, which will help determine the overall structure of the final network.
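
As a rough illustration of the transmit-rate versus battery-life trade-off being quantified here, the sketch below models node lifetime from a simple duty-cycle calculation. The current draws, transmission time, and battery capacity are placeholder assumptions, not measured values from this research.

# Back-of-the-envelope model of battery life vs. transmit rate.
# All electrical figures below are placeholders, not measurements.

def battery_life_hours(tx_per_hour,
                       battery_mah=2500.0,     # assumed battery capacity
                       sleep_ma=0.02,          # assumed sleep current
                       tx_ma=25.0,             # assumed radio TX current
                       tx_duration_s=0.05):    # assumed time per transmission
    """Estimate node lifetime in hours for a given transmissions-per-hour rate."""
    tx_fraction = (tx_per_hour * tx_duration_s) / 3600.0
    avg_ma = tx_fraction * tx_ma + (1.0 - tx_fraction) * sleep_ma
    return battery_mah / avg_ma

for rate in (6, 60, 600):  # transmissions per hour
    print(f"{rate:>4} tx/hr -> ~{battery_life_hours(rate):,.0f} hours")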

Signal processing and analysis are also being examined. Various levels of onboard processing routines are being developed to reduce the data transmission load. The bulk of the work will be in identifying and extracting features from the sensor signals. It will then be determined whether the fusion process can be embedded into the WSN or will have to be offloaded.
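
The sketch below shows the general idea behind onboard feature extraction for reducing the transmission load: send a handful of low-frequency band energies instead of the raw vibration record. The band edges, window length, and sample rate are assumptions for illustration only.

# Sketch of onboard feature extraction: transmit a few band energies
# instead of the raw waveform. Band edges and rates are assumptions.

import numpy as np

def band_energies(samples, sample_rate_hz, band_edges_hz=(0.5, 2.0, 5.0, 10.0)):
    """Return energy in each frequency band instead of the raw waveform."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    lows = (0.0,) + band_edges_hz[:-1]
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(lows, band_edges_hz)]

raw = np.random.randn(1024)            # stand-in for one vibration window
features = band_energies(raw, sample_rate_hz=50.0)
print(f"raw samples: {raw.size}, features transmitted: {len(features)}")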

Additional analysis is being performed through river and hydrodynamic simulation, using a software package from Wilkes University's Environmental Engineering Program. The simulation is being used to determine the effects of various events on the levee, which will help verify whether the vibration/pressure package will provide the early warning required.

What is the role of software in your application?

The embedded application will sample the sensors and perform some analysis to determine whether an early-warning event on the levee has occurred. The application interfaced to the coordinator will monitor all sensors and provide a real-time, up-to-the-minute health status of the levee.
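
As a hypothetical illustration of that coordinator-side application, the sketch below polls each node, keeps the latest readings, and rolls them up into a single levee health status. The node IDs, threshold, and read function are stand-ins, not part of the actual system.

# Hypothetical coordinator-side monitoring loop; node IDs, thresholds, and
# the read_node() stub are stand-ins for the real WSN interface.

import time

NODES = ["node-01", "node-02", "node-03"]   # assumed node identifiers
PRESSURE_LIMIT_KPA = 30.0                   # assumed alarm threshold

def read_node(node_id):
    """Stand-in for the real node query; returns (pressure_kPa, embedded_alert)."""
    return 12.0, False

def levee_status():
    readings = {node: read_node(node) for node in NODES}
    alerts = [n for n, (pressure, flag) in readings.items()
              if flag or pressure > PRESSURE_LIMIT_KPA]
    return ("ALERT" if alerts else "OK"), readings

for _ in range(3):                          # a real coordinator would loop forever
    status, readings = levee_status()
    print(time.strftime("%H:%M:%S"), status, readings)
    time.sleep(60)                          # refresh once a minute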

What is your next application challenge?

My next big hurdle is streamlining the embedded analysis routine to fit on the sensor. The features I need to monitor are in the frequency domain, so I need to reduce the number of samples used for the frequency analysis. The nice thing is that the events to be identified are in the low-frequency range, which means the sampling rate does not need to be very high.
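
A minimal sketch of that sample-reduction idea follows: because the events of interest sit at low frequencies, the raw record can be decimated before the FFT so the frequency analysis fits in the node's limited memory. The sample rate and decimation factor are assumptions for illustration.

# Decimate, then run a much smaller FFT; rates below are assumed.

import numpy as np

def decimated_spectrum(samples, sample_rate_hz, decimation=8):
    """Average-and-decimate (a crude low-pass), then run a smaller FFT."""
    n = (len(samples) // decimation) * decimation
    reduced = samples[:n].reshape(-1, decimation).mean(axis=1)
    spectrum = np.abs(np.fft.rfft(reduced))
    freqs = np.fft.rfftfreq(len(reduced), d=decimation / sample_rate_hz)
    return freqs, spectrum

raw = np.random.randn(4096)                      # stand-in vibration record
freqs, spec = decimated_spectrum(raw, sample_rate_hz=200.0)
print(f"FFT length reduced from {raw.size} samples to {spec.size} bins, "
      f"usable up to {freqs[-1]:.1f} Hz")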

Best regards,
Hall T.

Friday, November 10, 2006

John Chapin of Vanu Discusses Software Defined Radio

Last week I blogged on Vanu, a software defined radio vendor that was the first to receive FCC approval for its base station. Vanu's lead technologist is John Chapin, who earned his degrees from Stanford and worked at MIT as an assistant professor. John visited NI Week last August to speak at the RF Summit. I caught up with him this week to learn more about Vanu's technology.

Vanu’s work comes from the SpectrumWare radio project at MIT, the leading academic SDR project in the 1990s. It used off-the-shelf computing platforms such as Intel x86 processors, rather than DSPs, as the software radio platform. Vanu was founded in 1998 and today has nearly ten years of research behind it. Here’s the discussion we had:

The lowest layer in your architecture is the antenna. What do you think about MIMO? Vanu’s technology applies equally well to MIMO as to a single antenna, but MIMO costs at least twice as much processing, because there are two or more digital streams. As long as the processor has enough power, it doesn’t make a difference to our system.

What’s interesting is that you use the term “motherboard” in your radio. You’re basically applying a PC architecture to a radio. Is Vanu unique in this area? What makes Vanu unique is that it doesn’t have a DSP or FPGA in it. Those may be useful as waveform accelerators, but so far Vanu doesn’t need them. Other companies center their software radio designs around FPGAs. In terms of using a computer-design style, most high-end radio designs, such as those from Spectrum Signal Processing, Rockwell Collins, or Harris, use boards that look like high-end computers but with accelerators on the back end. One difference between software radios and standard computers is that software radios require high-speed I/O, which is something of a departure from standard PC architectures.

Your 2002 paper indicated the use of Linux. Do you still use Linux or do you see something better out there? Yes, Linux has many advantages for this area. We have shipped on several POSIX OSes and can port to other OSes if necessary.

What are the challenges with Linux? Linux is not a real-time OS, but the benefits are significant enough to outweigh the drawbacks.

What are the major benefits of Linux? The OS is free and the software development tools are low-cost. On any new platform, Linux is usually the first OS available, and it provides a huge advantage in the diversity of hardware it supports. There’s a wide range of tools available; some are free and some cost money.

You mention real-time being an issue. With the performance of today’s processors, do you still find this an issue? It is still an issue, since the requirements of the waveforms continue to evolve and become much more demanding. By running only a few OS threads and doing resource allocation at user level, one can achieve real-time behavior without a real-time OS.
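
As a loose illustration of that point (not Vanu’s actual code), the sketch below runs the signal processing in a single tight loop that checks its own per-block deadline instead of relying on a real-time OS scheduler. The block size and deadline are assumed.

# Illustration only: one processing loop that monitors its own deadline,
# rather than relying on an RTOS. Block size and deadline are assumptions.

import time

BLOCK_DEADLINE_S = 0.005        # assumed 5 ms budget per block of samples

def process_block(block):
    return sum(x * x for x in block)   # stand-in for the real waveform work

def run(blocks):
    missed = 0
    for block in blocks:
        start = time.monotonic()
        process_block(block)
        if time.monotonic() - start > BLOCK_DEADLINE_S:
            missed += 1               # a real system would shed load or buffer
    return missed

blocks = [list(range(1024))] * 200
print(run(blocks), "deadline misses out of", len(blocks), "blocks")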

It appears you used buffering to overcome limitations in earlier systems. Do you still recommend buffering? There are latency requirements in some standards, such as CDMA, that prevent us from using a high level of buffering, but it’s still used in some cases.
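
The arithmetic behind that trade-off is simple: the latency budget of the air-interface standard, minus the other delays in the chain, bounds how many samples the software can buffer. The numbers below are illustrative assumptions, not CDMA figures from Vanu.

# Illustrative only: latency budget and sample rate are assumed numbers,
# not actual CDMA or Vanu figures.

def max_buffer_samples(latency_budget_ms, sample_rate_msps, other_delays_ms=2.0):
    """How many samples of buffering fit inside the remaining latency budget."""
    remaining_ms = max(latency_budget_ms - other_delays_ms, 0.0)
    return int(remaining_ms * 1e-3 * sample_rate_msps * 1e6)

# e.g. a 10 ms budget at 1.25 Msps leaves room for about 10,000 samples
print(max_buffer_samples(latency_budget_ms=10.0, sample_rate_msps=1.25))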

Do you see LabVIEW fitting into the application? LabVIEW would be useful for test systems. For Vanu, it could serve as a base station tester, a “gold standard” phone for testing base stations, or a device that pretends to be 20 to 40 phones. Base stations are hard to test because testing is manual and involves a lot of physical hardware.

You use the TI DSP as an example of why waveform portability is important. National Instruments can cite the same example: in the early 1990s NI built a DSP processor board, and as the PC’s host processor moved up in speed, the DSP board became a bottleneck to the overall performance of the system.

Where are you going in the future? Vanu is a waveform company. We’re looking at up-and-coming standards such as WiMax, 3G HSPA, and others.

Best regards,
Hall T.

Friday, November 03, 2006

Vanu and Software Defined Radio Techniques

Vanu uses software defined radio to build cell phone base stations. They are the first to bring an FCC-approved software defined radio to market, leveraging SDR technology to handle changing wireless standards and communication between disparate devices.

I found their backgrounder on SDR pretty interesting. In their introductory whitepaper, which is only five pages long, they lay out the basics of how SDR works.

Software defined radio goes back to the 1980s, when cellular base stations were first developed. Through the ’90s the military entered the field, and only recently did SDR become commercially viable.

The simplest example of SDR is a dual-mode cell phone, which has two radios and uses software to switch between them. The next level of SDR combines ASICs, DSPs, and FPGAs to achieve performance; unfortunately, software for those systems must be rewritten to accommodate new generations of chips. The highest level of SDR abstracts the software away from the hardware, implementing the signal processing in software and allowing it to be reused as new generations of chips arrive. In the whitepaper Vanu outlines these three levels.

John Chapin wrote a paper on the Vanu Radio Architecture that outlines the difference between Vanu’s approach and the traditional software defined radio approach. Rather than using DSPs and FPGAs to implement the waveforms, Vanu implements them in high-level code running on a POSIX operating system, which allows the code to be ported to new processors as they become available. Portability is their key advantage.

Next week we’ll meet with John Chapin to hear his perspective on the SDR challenges and issues.

Best regards,
Hall T.