Big Medical Data

MIT News (01/25/13) Larry Hardesty

Last year the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) launched bigdata@csail, a big data initiative that includes several projects designed to make medical data more accessible to physicians and patients.  For example, researchers in the Clinical Decision Making Group are developing methods for bringing artificial intelligence to the medical community.  The group participates in a large initiative to create a database system that would link genomic and clinical data so that doctors can more easily test hypotheses about connections between genetic variations and certain diseases.  The group recently presented a new approach to the problem of word-sense disambiguation, or inferring from context which of a word’s several meanings is intended.  Meanwhile, researchers in CSAIL’s Data-Driven Medicine Group are investigating techniques for detecting and predicting hospital-borne infections.  In addition, researchers in the New Media Medicine Group are developing tools to enable members of online discussion boards to gather and organize medically relevant data about their own experiences with particular diseases and courses of treatment.
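To make the word-sense disambiguation problem concrete, here is a minimal sketch in the spirit of the classic Lesk gloss-overlap approach: pick the sense whose dictionary gloss shares the most words with the surrounding context. The glosses and the clinical example below are illustrative assumptions, not CSAIL's actual method or data.

```python
# Simplified Lesk-style word-sense disambiguation (illustrative sketch,
# not the CSAIL group's approach): score each candidate sense by how many
# words its gloss shares with the context sentence.

def disambiguate(word, context, glosses):
    """Return the sense whose gloss overlaps most with the context."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in glosses.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# "discharge" is ambiguous in clinical notes: release from the hospital
# vs. fluid secretion from a wound.  (Hypothetical glosses.)
senses = {
    "release": "release of a patient from hospital care",
    "secretion": "fluid or material emitted from a wound or organ",
}
print(disambiguate("discharge",
                   "patient ready for discharge from hospital", senses))
# → release
```

Real systems use richer context features than raw word overlap, but the structure of the task is the same: map an ambiguous term plus its context to one of several candidate senses.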


Surgeons May Use Hand Gestures to Manipulate MRI Images in OR

Purdue University News (01/10/13) Emil Venere

Purdue University researchers are developing a system that lets a surgeon browse and display a patient’s medical images during surgery using hand gestures.  The system uses a depth-sensing Microsoft Kinect camera and recognition algorithms to translate hand gestures into commands that manipulate MRI images on a large display.  It recognizes 10 gestures, including rotate clockwise and counterclockwise, browse left and right, browse up and down, increase and decrease brightness, and zoom in and out.  The researchers note the system’s accuracy relies on contextual information from the operating room: cameras observe the surgeon’s torso and head to determine what the surgeon wants to do.  “Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images,” says Purdue professor Juan Pablo Wachs.  The researchers found that integrating context enables the algorithms to accurately distinguish image-browsing commands from unrelated gestures, reducing false positives from 20.8 percent to 2.3 percent.  The system also translates gestures into specific commands with an average accuracy of 93 percent.
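The key idea, using body context to decide whether a gesture is a command at all, can be sketched as a simple gate in front of a gesture-to-command table. The gesture names, the boolean context signals, and the gating rule below are illustrative assumptions, not the Purdue system's actual pipeline.

```python
# Hypothetical sketch of context-gated gesture dispatch: a gesture is
# mapped to a command only when head and torso cues suggest the surgeon
# is addressing the display.  Gesture names are invented for illustration.

COMMANDS = {
    "swipe_left": "browse left",
    "swipe_right": "browse right",
    "swipe_up": "browse up",
    "swipe_down": "browse down",
    "twist_cw": "rotate clockwise",
    "twist_ccw": "rotate counterclockwise",
    "spread": "zoom in",
    "pinch": "zoom out",
    "raise_palm": "increase brightness",
    "lower_palm": "decrease brightness",
}

def dispatch(gesture, gaze_on_display, torso_facing_display):
    """Return a command string, or None for gestures made while the
    surgeon attends to the patient (suppressing false positives)."""
    if not (gaze_on_display and torso_facing_display):
        return None  # context says this motion is not meant for the display
    return COMMANDS.get(gesture)

print(dispatch("spread", True, True))   # → zoom in
print(dispatch("spread", False, True))  # → None (gaze away: ignored)
```

Even this crude gate shows why the reported false-positive drop is plausible: most incidental hand motions happen while the surgeon faces the patient, so they never reach the command table.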