With inspiration from biology, engineering smart appliances

May 17, 2001

GAINESVILLE, Fla. – Think of it as “Wild Kingdom” meets “Home Improvement.”

A University of Florida engineering professor is drawing on knowledge of nature’s creatures to design the working parts for next-generation appliances and entertainment devices.

John Harris, an associate professor of electrical and computer engineering and an expert in the electronic and computer processing of signals and speech, says evolution is better than engineers at designing speaking and listening devices. So he turns to scientific understanding of how animals and people perceive and respond to their environment to design the guts of tomorrow’s more sensitive, more accommodating home. One example: a device, patterned after the barn owl’s acute hearing, that can determine a person’s location in a room.

“The ears and the brain solve many extremely difficult problems,” Harris said. “They are ‘existence proofs’ that it’s possible to build such devices.”

In Harris’ home of the future, appliances and entertainment devices are attentive servants. The stereo automatically mutes itself when the phone rings. Dad doesn’t have to pull the chain on the ceiling fan — he just tells it to slow down. Mom moves from the dining room chair to the couch, and the TV automatically swivels to ensure she will continue to have the best viewing angle.

To function, such appliances need efficient and accurate sound recognition and networking capabilities. The TV, for example, needs to be able to recognize mom’s position in the room so that it can swivel toward her. To make that possible, Harris and graduate student Rahuldeva Ghosh drew on the remarkable ability of the barn owl to locate prey in complete darkness by sound alone.

Harris said the owl is known to run the sound arriving at each ear along separate neural delay lines and compare the two, which helps it pinpoint where a sound came from. He and Ghosh created a computer program that mimics that adaptation. To filter out noise and echoes in a room, they also built an artificial neural network, a kind of simple computerized brain.
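The general idea behind such delay-based localization can be illustrated in a few lines of code. The sketch below is not the UF program; it is a minimal illustration, in Python, of the standard approach of cross-correlating two microphone channels to find the time delay of arrival and converting that delay to an angle. The function name, microphone spacing, and sample rate are assumptions chosen for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature (assumed)
MIC_SPACING = 0.20       # meters between the two microphones (assumed)
SAMPLE_RATE = 44100      # samples per second (assumed)

def estimate_angle(left, right):
    """Estimate the bearing of a sound source from two microphone signals.

    Cross-correlates the two channels to find the time delay of arrival,
    then converts that delay to an angle relative to the microphone pair.
    """
    # Cross-correlate the two channels; the peak gives the sample lag
    # at which the signals best line up.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)

    # Convert the lag in samples to a time delay in seconds.
    delay = lag / SAMPLE_RATE

    # The delay corresponds to the extra path length to the farther mic:
    # delay * c = d * sin(theta), so theta = arcsin(delay * c / d).
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```

In practice, room echoes and background noise blur the correlation peak, which is why the UF system adds a neural network to clean up the estimate.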

In a demonstration, Ghosh stood a few yards to the left of two small microphones and spoke a few words. Moments later, the computer read aloud his angle from the microphones, saying he was standing at 30 degrees to one side of their position. With an error of 8 degrees or less, the system is more accurate than conventional systems and than the human ear itself, which is accurate only to within about 12 degrees, Ghosh said.

“You could use this in video-conferencing, where the camera would automatically track the speaker as he or she walked around,” Ghosh said.

Human hearing helped inspire another system in Harris’ lab, one that could allow appliances to network easily within a normal home. Harris and master’s student Paul Baker designed a speaker-microphone system that communicates with sounds people do not hear, even though those sounds fall within the frequency range of normal human hearing. That may seem contradictory, but Baker said the system relies on the fact that the ear is more sensitive to some types of sound than others and misses sounds that are very brief.
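One way to picture the approach is to encode data as very short tone bursts in the audible band. The Python sketch below is not Baker’s actual protocol; it is a hedged illustration of that general idea, and the burst length, gap, and the two signaling frequencies are all assumptions made for the example.

```python
import numpy as np

SAMPLE_RATE = 44100    # samples per second (assumed)
BURST_MS = 2.0         # each burst lasts only a couple of milliseconds (assumed)
GAP_MS = 20.0          # silence between bursts (assumed)
FREQ_ZERO = 6000.0     # tone frequency representing a 0 bit (assumed)
FREQ_ONE = 9000.0      # tone frequency representing a 1 bit (assumed)

def encode_bits(bits):
    """Turn a bit string into a train of very short, widely spaced tone bursts.

    Each bit becomes a few milliseconds of sine wave at one of two audio-band
    frequencies; the bursts are brief and sparse, so a listener is unlikely to
    notice them even though they sit inside the audible range.
    """
    burst_len = int(SAMPLE_RATE * BURST_MS / 1000)
    gap_len = int(SAMPLE_RATE * GAP_MS / 1000)
    t = np.arange(burst_len) / SAMPLE_RATE

    # Fade each burst in and out so it does not click audibly.
    envelope = np.hanning(burst_len)

    chunks = []
    for bit in bits:
        freq = FREQ_ONE if bit == "1" else FREQ_ZERO
        chunks.append(0.1 * envelope * np.sin(2 * np.pi * freq * t))
        chunks.append(np.zeros(gap_len))
    return np.concatenate(chunks)

# Example: encode the byte 01000001 (the letter "A") as a burst train.
waveform = encode_bits("01000001")
```

A receiver would simply listen for energy at the two signaling frequencies to recover the bits.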

Unlike existing wireless systems, the acoustic link cannot transmit through walls, so appliances in different houses or apartments won’t interfere with one another. And unlike infrared, the transmitter does not have to point directly at the receiver. That combination makes it well suited to letting household appliances “talk.”

“The idea is we would use this in a local environment, so devices could communicate with each other easily, but it wouldn’t bother people or animals,” Baker said.

When people want an appliance to do something, the easiest route would be simply to talk to it, Harris said. Doctoral student Mark Skowronski designed a prototype trivia game that demonstrates real-time speech recognition and speech synthesis technology. The game, based on the popular “Who Wants to Be a Millionaire” TV show, works with anyone’s voice through a hands-free microphone. Although the computer can understand only a few spoken words and numbers, the player and the computer interact entirely by voice.
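The voice-only interaction loop such a game relies on can be sketched with modern off-the-shelf tools. The example below is not Skowronski’s 2001 system; it is a rough illustration of the ask-listen-respond pattern, assuming the third-party speech_recognition and pyttsx3 Python packages and a pair of made-up questions.

```python
import speech_recognition as sr   # third-party speech-to-text wrapper
import pyttsx3                    # third-party text-to-speech engine

# Hypothetical questions with single-word answers (for illustration only).
QUESTIONS = [
    ("What is the largest planet in the solar system?", "jupiter"),
    ("How many legs does a spider have?", "eight"),
]

def speak(engine, text):
    engine.say(text)
    engine.runAndWait()

def main():
    recognizer = sr.Recognizer()
    engine = pyttsx3.init()

    for question, answer in QUESTIONS:
        speak(engine, question)
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        try:
            # Recognize the spoken reply; only a small vocabulary is expected.
            heard = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            speak(engine, "I did not catch that.")
            continue
        speak(engine, "Correct!" if answer in heard
              else f"Sorry, the answer is {answer}.")

if __name__ == "__main__":
    main()
```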