InnerEye: Visual Recognition in the Hospital

The neurosurgeon hovers over the patient, preparing to excise a life-threatening brain tumor. In this delicate operation, there is no margin for error: the tumor needs to be cut out with minimal damage to the surrounding healthy tissue. Using simple hand gestures, the surgeon signals a computer to display high-resolution scans of the patient’s brain, showing the physician where to place her scalpel and detailing the boundaries between diseased and healthy tissue. No longer must the neurosurgeon pause mid-operation to consult the patient’s image data, a step that once required removing her gloves and potentially compromising the sterile surgical field. The upshot for the patient: reduced time under anesthesia and a lower risk of introduced infection.

Interactive Segmentation of CT and MR Scans

Science fiction? Far from it. This scenario and others like it are on the verge of realization, thanks to the ground-breaking InnerEye project being conducted by Microsoft Research and a host of collaborators, including Johns Hopkins Medical Institute, the University of Oxford, Cornell Medical School, Massachusetts General Hospital, the University of Washington, King’s College London, and Cambridge University Hospitals.

The analysis of medical images is essential in modern medicine. As images have reached ever-higher resolutions, the growing volume of patient data has presented new challenges and opportunities, from diagnosis to therapy. The InnerEye research shows how a single, underlying image-recognition algorithm can enable a multitude of clinical applications, such as semantic image navigation, multimodal image registration, quality control, content-based image search, and natural user interfaces for surgery.
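To make the underlying idea concrete, here is a minimal sketch of per-voxel classification in the spirit of the decision-forest methods the InnerEye team has published for anatomy recognition; it is not the project’s actual implementation. The library choice (scikit-learn), the features, and the labels are all illustrative stand-ins: a real pipeline would extract intensity and spatial-context features from CT or MR volumes.

```python
# Hedged sketch: voxel-wise classification with a decision forest.
# Everything below is synthetic; a real system would compute features
# from CT or MR volumes, not random numbers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Synthetic "voxels": each row is a feature vector (standing in for local
# intensity statistics and longer-range context features); each label is
# an anatomical class (0 = background, 1 = organ of interest).
n_voxels = 10_000
features = rng.normal(size=(n_voxels, 8))
labels = (features[:, 0] + 0.5 * features[:, 3] > 0.7).astype(int)

# Train a forest that classifies every voxel independently.
forest = RandomForestClassifier(n_estimators=100, max_depth=12, random_state=0)
forest.fit(features, labels)

# At test time, classify the voxels of an unseen scan; in practice the
# per-voxel predictions would be reshaped back into a 3-D label map.
test_features = rng.normal(size=(1_000, 8))
predicted = forest.predict(test_features)
print(f"Voxels labeled as organ: {int(predicted.sum())} of {len(predicted)}")
```

Because each voxel is classified independently, the same trained model supports the applications listed above: the predicted label map can drive navigation to an organ, anchor registration between modalities, or index a scan for content-based search.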

InnerEye takes advantage of advances in human-computer interaction that have put computers on a path to working for us and collaborating with us. The development of a natural user interface (NUI) enables computers to adapt to you and integrate more fully into your environment via speech, touch, and gesture. As NUI systems become more powerful and are imbued with greater situational awareness, they can provide beneficial, real-time interactions that are seamless and naturally suited to your context; in short, systems will understand where you are and what you’re doing.

At this year’s TechFest—the annual event that showcases the latest work from Microsoft Research’s labs around the world—InnerEye is one of several projects that show where Microsoft is headed with NUI technologies, and how “futuristic” computing experiences are quickly becoming a reality. Building on the success of Kinect—a prime example of NUI technology reaching consumer scale—Microsoft Research continues to explore technologies that will enable the coming shift in how humans communicate with machines, and vice versa. The possibilities for integrating computing into our lives are seemingly endless, opening the door to a new era of creativity, social interaction, and novel technological scenarios.

Antonio Criminisi, Researcher, Microsoft Research; and Kristin Tolle, Director, Natural User Interface Team, Microsoft Research Connections division of Microsoft Research
