You've seen robots that perform operations. But what about a robot that assists at the surgeon's side, passing instruments and supplies and calling up medical images as needed? Researchers say new "gesture recognition" technology could one day make the robotic scrub nurse a reality.
The "vision-based hand gesture recognition technology," similar in concept to voice recognition, uses Microsoft's Kinect camera, which senses depth in three-dimensional space, along with specialized algorithms to recognize a surgeon's hand gestures and respond accordingly. For example, the robot could be set up to recognize a surgeon holding up two fingers like a pair of scissors and then pass the corresponding instrument.
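The core idea is a fixed vocabulary that maps recognized hand poses to instrument requests. The sketch below is purely illustrative, not the researchers' actual system: it assumes some upstream hand tracker has already counted the extended fingers in a depth frame, and the mappings other than "two fingers = scissors" (which the article describes) are invented placeholders.

```python
# Hypothetical gesture vocabulary: hand pose -> instrument request.
# Only the two-finger "scissors" mapping comes from the article; the
# others are illustrative assumptions.
GESTURE_VOCABULARY = {
    2: "scissors",   # two extended fingers mimic a pair of scissors
    1: "scalpel",    # assumed: single pointed finger
    5: "retractor",  # assumed: open palm
}

def instrument_for_gesture(extended_fingers):
    """Return the instrument mapped to a hand pose, or None when the
    pose is not part of the defined gesture vocabulary.

    In a real system `extended_fingers` would be derived from Kinect
    depth frames; here it is passed in directly to keep the sketch
    self-contained.
    """
    return GESTURE_VOCABULARY.get(extended_fingers)
```

A lookup like this keeps the vocabulary "simple and intuitive," one of the design goals the researchers describe, because each command is a single distinctive pose rather than a learned sequence.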
With this technology, Juan Pablo Wachs, assistant professor of industrial engineering at Purdue University, has developed a prototype robotic scrub nurse. He is also working on algorithms that would let the computer anticipate a surgeon's movements and needs during surgery by sensing the placement of instruments in the body, a vocabulary of defined gestures and the position of the surgeon's head.
"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," says Mr. Wachs. The ultimate goal of such a tool would be to improve OR efficiency, reduce surgical times and reduce the potential for infection by preventing the surgeon from having to leave the table to retrieve medical images.
"While it will be very difficult for a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years," acknowledges Mr. Wachs, "scrub nurses often have very limited experience with a particular surgeon, which maximizes the chances for misunderstandings, delays and sometimes mistakes in the operating room. In that case, a robotic scrub nurse could be better."
In a research article published in Communications of the ACM, Mr. Wachs and colleagues set out a number of goals for their robotic system, including developing a simple, intuitive vocabulary of gestures, improving accuracy by helping the computer differentiate between intended and unintended gestures, and reducing the costs associated with the technology.
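One common way to separate intended commands from incidental hand movements is a dwell-time filter: a pose only counts as a command once it has been held steadily for a minimum number of consecutive camera frames. The article does not specify the researchers' method, so the sketch below is an assumed illustration of the general technique.

```python
# Hypothetical dwell-time filter: accept a pose as a deliberate command
# only after it persists for `min_frames` consecutive frames. This is an
# illustrative technique, not the researchers' published approach.
class DwellFilter:
    def __init__(self, min_frames=10):
        self.min_frames = min_frames  # frames a pose must be held
        self._current = None          # pose seen in the last frame
        self._count = 0               # consecutive frames of that pose

    def update(self, pose):
        """Feed one per-frame pose label (or None for no hand detected).

        Returns the pose exactly once, on the frame where it reaches
        the dwell threshold; otherwise returns None.
        """
        if pose == self._current:
            self._count += 1
        else:
            self._current = pose
            self._count = 1
        if pose is not None and self._count == self.min_frames:
            return pose
        return None
```

A brief jitter, such as a hand passing through a scissors-like shape for a frame or two, resets the counter and never triggers a command, which is one simple way to address the intended-versus-unintended distinction the researchers describe.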
Photo: Purdue University/Mark Simons