
6 steps to using low-code tools to achieve better patient care


We’re going to be sharing a story every week for the 12 weeks of summer, showing you how healthcare organisations are using technology to transform patient outcomes and increase productivity. For the fourth blog in our series, Nas Taibi, Solutions Architect, details how, thanks to low-code/no-code services, introducing AI into medical imaging is no longer limited to coding experts.

The shift towards value-based care has seen healthcare facilities seeking low-code and no-code innovations that accelerate operational outcomes and help create a financially sustainable care system.

One recent idea that’s garnered attention in the enterprise imaging world is AI augmentation – using machine learning to process, analyse, and interpret medical images. Embedding the technology into enterprise imaging systems has improved clinicians’ decision-making and lightened the burden on reporting practitioners. Using low-code or no-code technology, professionals find they can work faster and more effectively than before.

But it has another far-reaching benefit: improving people’s health by accurately detecting diseases early.

Picture the scene: machine learning and ultrasound

Imagine a healthcare facility. It’s seeking to use machine learning to automate and improve the accuracy of predicting a foetus’s gestational age. Traditionally, sonographers have manually measured the biparietal diameter and head circumference using callipers.

The machine learning model analyses legacy ultrasound images on which those manual measurements were recorded, and learns to closely match the accuracy of the original sonographer’s findings. With AI at their side, the facility can capture this data and embed it in the ultrasound images, where it serves as a reference data point during training.

Example of traditional way to measure ultrasound images without AI

Picture the scene: preparing and cleaning the data

Now, imagine an engineering team is prototyping a solution to gather, clean, and pre-process those images. The collected sample data will be used to train and test the model.

The medical imaging system stores the files in Azure Blob (Binary Large Object) Storage. Each new file triggers an Azure Logic Apps workflow: the event message is pulled through and the blob URL extracted, before the workflow derives a JPEG image and a JSON metadata document from the DICOM file. Next, the system performs optical character recognition (OCR) on the image – essentially letting it ‘see’ the picture and pull out the text printed on it.

And, best of all, it’s all performed automatically.

Example of process to de-identify medical images
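To make that flow concrete, here’s a minimal code-first sketch of the collection stage using the azure-storage-blob Python SDK. The connection string, container name, and process_dicom helper are placeholders, and in the scenario above a no-code Logic App would handle this orchestration instead.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container name for illustration only.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
CONTAINER = "ultrasound-dicom"

def process_dicom(name: str, data: bytes) -> None:
    """Hypothetical hook for the later steps: DICOM -> JPEG/JSON, OCR, masking."""
    print(f"Processing {name} ({len(data)} bytes)")

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

# A Logic App fires once per new blob; this sketch simply walks the container.
for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall()
    process_dicom(blob.name, data)
```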

The low-code technology at play

Adoption of AI tools and frameworks is growing in the healthcare sector. Fully managed, cloud-based machine learning services can be used to train, deploy, and manage models at scale. And then there are low-code/no-code tools.

You don’t need to be a technical expert to use them. There’s no need to learn code.

All you need is access to Azure Cognitive Services, which provide pre-built machine learning models. They’re designed to help you “build intelligent applications without having direct AI or data science skills or knowledge.”

Helping streamline the development process is the low-code Azure Machine Learning Studio. It lets you deploy pre-built machine learning algorithms, then connect datasets and integrate the results with custom apps.

Used together, these Microsoft services make it easy to transform the workplace without coding skills. You can, instead, focus on delivering improved ROI, a superior experience for employees, and even higher quality care.

A look at what low-code machine learning may offer AI medical imaging in the future

Step 1 – Finding the measurements

Healthcare facilities across the country face a similar problem: some ultrasound images are essentially screenshots of screenshots.

While modern machines can embed those all-important measurements into the image as structured data, these older images only carry the measurements as on-screen text recorded by the sonographer during the scan. The first step is obtaining these images.

Step 2 – Pixel extraction and conversion

The next step sees you extract the image’s pixels, then use open-source tools to convert the original DICOM file to JPEG.
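As a rough sketch, the conversion can be done with the open-source pydicom and Pillow libraries; the file names below are placeholders, and the exported tags are just a small illustrative subset.

```python
import json

import numpy as np
import pydicom
from PIL import Image

# Assumes an uncompressed transfer syntax; JPEG-compressed DICOM files
# need an extra decoding handler such as pylibjpeg or GDCM.
ds = pydicom.dcmread("scan.dcm")

# Scale the raw pixel values to 8-bit so they can be saved as JPEG.
pixels = ds.pixel_array.astype(np.float32)
pixels = (pixels - pixels.min()) / max(np.ptp(pixels), 1.0) * 255.0
Image.fromarray(pixels.astype(np.uint8)).save("scan.jpg", "JPEG")

# Dump a few DICOM tags to JSON; a real pipeline would export many more.
meta = {
    "PatientName": str(ds.get("PatientName", "")),
    "StudyDate": str(ds.get("StudyDate", "")),
    "Modality": str(ds.get("Modality", "")),
}
with open("scan.json", "w") as f:
    json.dump(meta, f, indent=2)
```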

Armed with this JPEG, it’s time to run the image through optical character recognition. Since this is achieved via Microsoft’s Cognitive Services, it’s easy to perform.

Watch out though, as this process often turns up personally identifiable information, such as names. Worse, it’s displayed as a banner in the pixel data, so it becomes imperative to identify and mask it.
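Here’s a minimal sketch of that OCR call, using the Read API from the azure-cognitiveservices-vision-computervision Python package; the endpoint and key are placeholders for your own Cognitive Services resource.

```python
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint and key for your Cognitive Services resource.
client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Submit the JPEG, then poll the asynchronous Read operation.
with open("scan.jpg", "rb") as image:
    response = client.read_in_stream(image, raw=True)
operation_id = response.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

# Every recognised line carries its text and an 8-number bounding polygon,
# which is exactly what the masking steps below need.
if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text, line.bounding_box)
```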

Step 3 – Data extraction

Time to unleash those Cognitive Services again. At this stage, you can use pre-built services to easily extract the biparietal diameter and head circumference measurements from your JPEG. These measurements let you calculate the gestational age.
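Once the OCR text comes back, pulling out the measurements can be as simple as a couple of regular expressions. The banner formats below are hypothetical, as real ultrasound overlays vary by vendor, and the gestational age itself would then come from a published growth regression such as Hadlock’s.

```python
import re

# Hypothetical OCR output; real banner text varies by ultrasound vendor.
ocr_lines = ["FETAL BIOMETRY", "BPD 48.1 mm", "HC 178.2 mm"]

patterns = {
    "biparietal_diameter_mm": re.compile(r"\bBPD\s+([\d.]+)\s*mm", re.IGNORECASE),
    "head_circumference_mm": re.compile(r"\bHC\s+([\d.]+)\s*mm", re.IGNORECASE),
}

measurements = {}
for line in ocr_lines:
    for name, pattern in patterns.items():
        match = pattern.search(line)
        if match:
            measurements[name] = float(match.group(1))

print(measurements)  # {'biparietal_diameter_mm': 48.1, 'head_circumference_mm': 178.2}
# Gestational age is then read off a published regression (e.g. Hadlock);
# the coefficients are deliberately omitted from this sketch.
```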

Step 4 – De-identifying the information

A patient’s personal data is often stored in the image’s metadata, as well as the banner. And, in the interests of patient privacy, you’ll need to thoroughly de-identify these images before sending them on for further processing.

For metadata de-identification, the reference values in the DICOM tags (such as the patient’s name) are used to search the resulting OCR JSON payload. Any match must then be masked by identifying the coordinates, width, and height of its bounding box.
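In code, that check might look like the sketch below: take the known values from the DICOM tags and search the Read API result for lines containing them, keeping each match’s bounding polygon for the masking step. The function name is hypothetical.

```python
def find_pii_boxes(read_result, tag_values):
    """Return bounding polygons for OCR lines that contain known PII values.

    `read_result` is the object returned by get_read_result() in the earlier
    OCR sketch; `tag_values` are strings taken from the DICOM tags, e.g. the
    patient's name or Medical Record Number.
    """
    boxes = []
    for page in read_result.analyze_result.read_results:
        for line in page.lines:
            if any(value and value.lower() in line.text.lower() for value in tag_values):
                boxes.append(line.bounding_box)
    return boxes
```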

Step 5 – Automatically hide information

Now, deploy Cognitive Services once more to detect the bounding box around any personal information. Details like a name or Medical Record Number are then masked by a rectangle automatically drawn over the sensitive text.
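Drawing the masks themselves is straightforward with an open-source imaging library such as Pillow. In this sketch, each 8-number polygon from the Read API is reduced to an axis-aligned rectangle and filled in black:

```python
from PIL import Image, ImageDraw

def mask_boxes(jpeg_in: str, jpeg_out: str, boxes: list) -> None:
    """Black out each bounding polygon found by find_pii_boxes() above."""
    img = Image.open(jpeg_in)
    draw = ImageDraw.Draw(img)
    for box in boxes:
        # The Read API returns [x1, y1, x2, y2, x3, y3, x4, y4]; cover the
        # polygon with its axis-aligned bounding rectangle.
        xs, ys = box[0::2], box[1::2]
        draw.rectangle([min(xs), min(ys), max(xs), max(ys)], fill="black")
    img.save(jpeg_out, "JPEG")
```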

Step 6 – Ensuring interoperability

Finally, this data needs to be available for interoperability. That’s where you’ll want to use the Azure API for FHIR. It lets the data flow to the rest of the downstream analytics systems.
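As a hedged sketch, the extracted measurement could be posted to the FHIR service as an Observation resource. The endpoint URL and access token below are placeholders, and a production system would use a proper LOINC coding rather than the plain-text code shown here.

```python
import requests

FHIR_BASE = "https://<your-service>.azurehealthcareapis.com"  # placeholder
TOKEN = "<azure-ad-access-token>"  # obtained via Azure AD; placeholder

observation = {
    "resourceType": "Observation",
    "status": "final",
    # Illustrative only: a real system would attach a LOINC coding here.
    "code": {"text": "Fetal head circumference (ultrasound)"},
    "valueQuantity": {
        "value": 178.2,
        "unit": "mm",
        "system": "http://unitsofmeasure.org",
        "code": "mm",
    },
}

response = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/fhir+json",
    },
)
response.raise_for_status()
print("Created Observation:", response.json()["id"])
```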

Low-code Citizen Developers changing healthcare

By taking advantage of low code/no code services, the healthcare sector finds itself in a better position to innovate. No need for a huge capital investment to hire subject matter experts long before they’re required.

These easy-to-deploy services are creating a new breed of devs, dubbed Citizen Developers: professionals who can now quickly create and automate a business workflow or a cumbersome form-based routine without needing complex coding skills.

By leveraging the power of AI and Azure cloud computing, cleaning, pre-processing, and de-identifying ultrasound images becomes quicker and easier than ever before. The results can then be fed into a machine learning system, helping healthcare professionals make quicker diagnoses and improving the all-round experience for patients and employees.

Find out more

Discover what’s possible with Azure Cognitive Services

About the author

Nas Taibi works as a Solutions Architect at Microsoft. He has over 10 years’ experience in the healthcare industry, developing and architecting solutions for medical imaging companies (radiology/cardiology) and promoting interoperability between healthcare providers using FHIR and HL7v2. Making a difference in the overall patient journey is also a personal goal, so in his spare time, Nas develops healthcare apps and helps other entrepreneurs get started in the healthcare industry.