
Project InnerEye: Open-Source Software for Medical Imaging AI

Get started with Project InnerEye OSS tools

Project InnerEye OSS tools can be used for a range of use cases, in particular to increase productivity for medical imaging researchers, as described here. These OSS components have been validated for analyzing CT scans in radiotherapy planning workflows, with a typical setup shown in the diagram below. Three InnerEye OSS tools can be used as part of this typical medical imaging workflow:

Project InnerEye OSS component typical architecture

Medical imaging machine learning model training

InnerEye-DeepLearning toolkit

InnerEye-DeepLearning is a deep learning toolbox that makes it easier to train models on medical images or, more generally, 3D images. It uses a configuration-based approach for building your own image classification, segmentation, or sequential models, and it integrates seamlessly with cloud computing in Azure. On the modelling side, the toolbox supports each of these model types for 3D imaging data.

On the user side, the toolbox takes advantage of Azure Machine Learning Services (AzureML) to dynamically scale out training onto GPU clusters, and it provides traceability and transparency for developing ML models. The toolkit also offers advanced capabilities such as cross-validation, hyperparameter tuning using Hyperdrive, ensemble models, and easy creation of new models via a configuration-based approach, including inheritance from an existing architecture. You can get started with the InnerEye-DeepLearning toolkit on your desktop machine or in Microsoft Azure by following the detailed instructions in InnerEye-DeepLearning/README.md at main · microsoft/InnerEye-DeepLearning (github.com).
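
As an illustration of the configuration-based approach, the sketch below defines a minimal segmentation model configuration. It assumes the SegmentationModelBase class from InnerEye.ML.config; the field names and values shown are illustrative placeholders rather than a verified configuration, so check the repository README for the exact parameters supported by your release.

```python
# Minimal sketch of a configuration-based segmentation model, assuming the
# SegmentationModelBase class from InnerEye-DeepLearning. Field names and
# values are illustrative placeholders; consult the repository README for
# the parameters supported by the release you are using.
from InnerEye.ML.config import SegmentationModelBase

class HeadAndNeckExample(SegmentationModelBase):
    def __init__(self) -> None:
        super().__init__(
            architecture="UNet3D",                        # 3D U-Net style architecture
            azure_dataset_id="head_and_neck_ct",          # dataset registered in AzureML
            ground_truth_ids=["parotid_l", "parotid_r"],  # structures to segment
            num_epochs=120,
        )
```

A configuration like this is typically picked up by the repository's runner script and submitted to AzureML from the command line, as described in the README, which is what enables the scale-out onto GPU clusters mentioned above.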


Deployment components

Gateway service

The InnerEye-Gateway comprises Windows services that act as a DICOM Service Class Provider. After an association request and C-STORE, a set of DICOM image files is anonymized by removing a user-defined set of identifiers and passed to a web service running InnerEye-Inference. The inference service then runs them through an ML model trained using InnerEye-DeepLearning. The result is downloaded, de-anonymized, and passed to a configurable DICOM destination. All DICOM image files, and the model output, are automatically deleted immediately after use. The gateway should be installed on a machine within your DICOM network that can access a running instance of InnerEye-Inference. You can get started with the InnerEye-Gateway by following the detailed instructions in InnerEye-Gateway/README.md at main · microsoft/InnerEye-Gateway (github.com).
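
For context, a scanner or PACS node sends images to the gateway with an ordinary DICOM C-STORE. The sketch below shows such a request using pydicom and pynetdicom; the gateway host name, port, and AE title are placeholders for whatever your own gateway installation is configured to use.

```python
# Minimal sketch of sending a CT slice to the InnerEye-Gateway via DICOM C-STORE,
# using pydicom/pynetdicom. Host, port, and AE title below are placeholders for
# your own gateway configuration.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ae = AE(ae_title="WORKSTATION")
ae.add_requested_context(CTImageStorage)

# Placeholder address and AE title of the machine running the gateway services.
assoc = ae.associate("inner-eye-gateway.local", 104, ae_title="INNEREYEGW")
if assoc.is_established:
    dataset = dcmread("ct_slice_001.dcm")    # one slice of the CT series
    status = assoc.send_c_store(dataset)     # the gateway anonymizes and forwards it
    print(f"C-STORE completed with status 0x{status.Status:04x}")
    assoc.release()
else:
    print("Association with the gateway was rejected or timed out")
```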

Inference service

InnerEye-Inference is a Microsoft Azure App Service web application, written in Python, that runs inference on medical imaging models trained with the InnerEye-DeepLearning toolkit. You can also integrate it with DICOM using the InnerEye-Gateway. You can get started with the InnerEye Inference service by following the detailed instructions in InnerEye-Inference/README.md at main · microsoft/InnerEye-Inference (github.com).
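
Once the web application is deployed, clients (including the gateway) talk to it over plain HTTPS. The sketch below submits a zipped DICOM series and polls for the result using the requests library; the endpoint paths, header name, and model identifier are assumptions based on a typical setup of the service and may not match the current API, so treat the InnerEye-Inference README as the authoritative reference.

```python
# Minimal sketch of calling an InnerEye-Inference deployment over HTTPS.
# The endpoint paths, auth header, and model name are assumptions for
# illustration only; the InnerEye-Inference README documents the real API.
import time
import requests

BASE_URL = "https://my-innereye-inference.azurewebsites.net"   # placeholder deployment URL
HEADERS = {"API_AUTH_SECRET": "<secret-configured-on-the-app-service>"}

# Submit a zipped DICOM series for segmentation by a named model (placeholder name).
with open("ct_series.zip", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/v1/model/start/PassThroughModel",
        data=f.read(),
        headers=HEADERS,
    )
response.raise_for_status()
run_id = response.text.strip()   # assumed to return an identifier for the submitted run

# Poll until the model output (a zipped DICOM result) is ready, then save it.
while True:
    result = requests.get(f"{BASE_URL}/v1/model/results/{run_id}", headers=HEADERS)
    if result.status_code == 200:
        with open("segmentation_result.zip", "wb") as out:
            out.write(result.content)
        break
    time.sleep(10)   # result not ready yet; wait and retry
```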