UF’s virtual reality “patient” teaches bedside manner to medical students

GAINESVILLE, Fla. — “Tell me where it hurts” is the classic opening to many a doctor’s examination, and University of Florida researchers have given it a digital twist. The scientists have created a virtual reality “patient” that can help medical students master the subtle art of the patient-doctor interview.

“DIANA,” which stands for DIgital ANimated Avatar, is a life-sized image of a 19-year-old Caucasian female with a passing resemblance to video game heroine Lara Croft. Her image, complete with a simulated doctor’s office in the background, is projected onto a wall. Through their interviews with her, medical students can practice not only asking the right questions to reach an accurate diagnosis but also the less straightforward aspects of human interaction, such as gestures and eye contact.

“We want to focus on communication,” said Benjamin Lok, an assistant professor in UF’s computer and information science and engineering department and the lead researcher on the project. “Part of (the interview training) is to get the right answer, but part of it is to learn communication skills.” A member of Lok’s team, UF doctoral student Kyle Johnson, will present the group’s research in mid-March at the Institute of Electrical and Electronics Engineers’ Virtual Reality 2005 Conference, the world’s largest conference in the field.

For Lok and his team, the question was whether naturally interacting with a simulated patient, with no keyboard or mouse in sight, would better help students develop their interviewing skills.

“We wondered, would people respond to virtual people?” Lok said. “Does it help to make the interaction between a virtual patient and the doctor as realistic as possible?”

During testing of the system, the scientists found that doctors treated DIANA differently than they would a regular computer program. In fact, their behavior was similar to how people interact with sitcom characters or movie celebrities, Lok said.

“We believe there is something different about walking into a room, seeing (the patient) life-sized and interacting with her naturally,” he said. “We see somebody that looks like a person and we start attributing (humanity) to them. People are willing to buy into it.”

For their interviews with DIANA, the students wear headsets to communicate with her, take notes on a digital notepad and don gloves with built-in LED pointers so the system can track their gestures. The conversations are highly structured, with the system trained to respond to keywords and phrases. That structure, however, is necessary in a medical interview, Lok said.

“If you’re a medical educator and you need to educate 100 students on how to do something, you’re going to give them a very structured path to follow: Ask a set of questions in this order, the location of the pain, its duration,” he said.
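
To make that structure concrete, here is a minimal sketch, in Python, of the kind of keyword-and-phrase matching the article describes. It is an illustration only; the trigger phrases, scripted replies and function name are invented for the example and are not DIANA’s actual code.

    # Minimal sketch of a keyword-driven patient dialogue.
    # Illustrative only: phrases and replies below are invented, not DIANA's.
    RESPONSES = [
        # (trigger phrases, scripted patient reply)
        ({"where", "hurt"}, "It hurts on the lower right side of my stomach."),
        ({"how long", "when did"}, "The pain started about two days ago."),
        ({"how bad", "scale"}, "I'd say it's about a seven out of ten."),
    ]

    FALLBACK = "I'm sorry, could you ask that another way?"

    def patient_reply(question: str) -> str:
        """Return the first scripted reply whose trigger phrase appears in the question."""
        q = question.lower()
        for triggers, reply in RESPONSES:
            if any(phrase in q for phrase in triggers):
                return reply
        return FALLBACK

    # Example: patient_reply("Can you tell me where it hurts?")
    # -> "It hurts on the lower right side of my stomach."

In the real system, speech captured through the students’ headsets and gestures tracked through the LED gloves would presumably feed into a scripted core of this kind, which is what keeps each interview on the educator’s structured path.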

Currently, medical students can practice interviewing skills with “standardized patients,” live actors who are given a script to follow for the interview. However, training the actors can be expensive, and it can be difficult to find sufficiently diverse populations of actors, a factor that can make a subtle difference in the interview process. The system, which costs less than $10,000, would help students prepare for the standardized patient interviews, making those sessions more effective, Lok said.

Seven medical students tested DIANA in August, and another 20 interviewed her in December. After each test, the students rated the realism and usefulness of the interviews on a scale of 1 to 10. By December, DIANA’s average rating of 7.2 was nearly identical to the 7.4 average for the live actors.

Though those results are promising, DIANA isn’t ready to replace live actors yet, Lok said. She can look up when she is spoken to, look down during pauses and reach out to receive a handshake. But there are many other physical cues in human conversation that can provide information to a doctor and also reassure a patient that the doctor is paying attention, he added.

“There are so many things that you and I do when we talk: I can tell whether your eyes are focusing on me, whether you’re listening, hand gestures, facial gestures, body posture. These are things that the computer can’t do, but we’re working on that,” he said.

However, virtual reality patients eventually will allow students to try a nearly limitless variety of interview scenarios: different medical conditions as well as different ages, races and genders. And a virtual patient can also serve as a kind of quality control in medical education, Lok said.

“If you have 50 students, how do you guarantee that they all experience the same thing?” he said. “And how do you know that somebody who graduated from the University of Florida experienced the same thing as somebody from Harvard Medical School? That’s where we’d like to go, long term.”

Integrating computer science with human communication is a challenge, said Larry Hodges, professor and chairman of the department of computer science at the University of North Carolina at Charlotte, where Lok was a postdoctoral researcher in Hodges’ Virtual Environments Group before arriving at UF. “This is interdisciplinary research by its nature. You can’t do this with just computer scientists or medical professionals. Bringing these things together is something that Ben does very well,” Hodges said.

The project includes UF doctoral students Johnson and Andrew Raij and UF undergraduate Robert Dickerson, as well as Dr. D. Scott Lind of UF’s College of Medicine and Dr. Amy Stevens of the Malcom Randall Veterans Affairs Medical Center.

While computer-based programs exist for medical students to practice their diagnostic skills, they lack a certain “human” element, Lok said. “There are systems where you talk to a computer, you watch graphics on the monitor, you click on different things,” he said. “We look at our system as going down a different road. We’re trying to look at interaction itself.”