Purdue University News (01/10/13) Emil Venere
Purdue University researchers are developing a system that lets surgeons browse and display a patient’s medical images during surgery using hand gestures. The system uses depth-sensing cameras and gesture-recognition algorithms to translate hand motions into commands that manipulate MRI images on a large display. It recognizes 10 gestures: rotate clockwise and counterclockwise, browse left and right, browse up and down, increase and decrease brightness, and zoom in and out. The researchers note that the system’s accuracy relies on contextual information from the operating room, gathered by cameras that observe the surgeon’s torso and head to determine what the surgeon wants to do. “Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images,” says Purdue professor Juan Pablo Wachs. The gesture-recognition system senses three-dimensional space with a Microsoft Kinect camera. The researchers found that integrating context enables the algorithms to accurately distinguish image-browsing commands from unrelated gestures, cutting false positives from 20.8 percent to 2.3 percent, and the system translates gestures into the intended commands with an average accuracy of 93 percent.
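The context-gating idea described above can be sketched in a few lines: a recognized gesture is mapped to a command only when the surgeon’s gaze and torso orientation indicate intent to address the display. This is a minimal illustrative sketch, not the Purdue team’s actual implementation; the function names, angle thresholds, and gesture labels are all assumptions.

```python
# Hypothetical sketch of context-gated gesture commands.
# All names and thresholds are illustrative assumptions.

# The 10 gestures reported in the article, mapped to display commands.
COMMANDS = {
    "rotate_cw": "rotate clockwise",
    "rotate_ccw": "rotate counterclockwise",
    "browse_left": "browse left",
    "browse_right": "browse right",
    "browse_up": "browse up",
    "browse_down": "browse down",
    "brightness_up": "increase brightness",
    "brightness_down": "decrease brightness",
    "zoom_in": "zoom in",
    "zoom_out": "zoom out",
}

def wants_image_access(gaze_angle_deg, torso_angle_deg,
                       gaze_tol=20.0, torso_tol=30.0):
    """Context check: is the surgeon facing the display?

    Angles are measured relative to the display (0 = directly facing it);
    the tolerance values are made-up placeholders.
    """
    return abs(gaze_angle_deg) <= gaze_tol and abs(torso_angle_deg) <= torso_tol

def interpret(gesture, gaze_angle_deg, torso_angle_deg):
    """Map a recognized gesture to a command, gated by context.

    Returns None when the gesture is unknown or when context says the
    surgeon is not addressing the display, suppressing false positives.
    """
    if not wants_image_access(gaze_angle_deg, torso_angle_deg):
        return None
    return COMMANDS.get(gesture)
```

For example, `interpret("zoom_in", 5.0, 10.0)` yields `"zoom in"`, while the same gesture made with the surgeon turned away, `interpret("zoom_in", 90.0, 80.0)`, is ignored and returns `None`.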