The field of healthcare is increasingly attracting the efforts of the most prominent companies in artificial intelligence, with Microsoft being the latest example.
Last week, the company announced extensions to Fabric, the data analytics platform it unveiled in May, to enable Fabric to perform analysis on multiple types of healthcare data. Microsoft also announced new services in its Azure cloud computing service for, among other things, using large language models as medical assistants.
Also: Microsoft unveils Fabric analytics program, OneLake data lake to span cloud providers
“We want to build that unified, multimodal data foundation in Fabric OneLake, where you can unify all these different modalities of data so that you can then reason over that data, run AI models and so on,” said Umesh Rustogi, the general manager for Microsoft Cloud for Healthcare, in an interview with ZDNET.
The trend of multi-modality, which ZDNET explored in a feature article on AI this month, is increasingly important in healthcare, said Rustogi. “We have heard this from multiple customers, where they believe that if you combine multiple modalities of data that can unlock new insights, which is not possible by doing research just on one modality of data,” said Rustogi.
Examples of such combined modalities include “simple things like building cohorts of patients based on criteria from their imaging results and their clinical results, [which] is one very common desired use case which is not very easy to do today,” he said. As an example of what some would like to do, Rustogi cited a 2020 study in the prestigious journal Nature. That article offers an overview of “data fusion” techniques that can be “applied to combine medical imaging with EHR [electronic health records].”
Also: Generative AI will far surpass what ChatGPT can do. Here’s everything on how the tech advances
Another of the new Fabric capabilities is a “de-identification service,” which uses machine learning to scrub clinical data, such as doctors’ notes, of patient identities. “It has been a very hard problem for the industry to solve as to how do you take those unstructured clinical notes, and then de-identify them in such a way that it’s still meaningful for the research community,” said Rustogi.
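To make the idea concrete, here is a minimal sketch of de-identification on unstructured text. This is not Microsoft's implementation — production systems use trained named-entity models, and the regex patterns and the `deidentify` function below are simplified illustrations — but it shows the key property Rustogi mentions: the output must stay meaningful for research, so each identifier is replaced with a consistent surrogate token rather than simply deleted.

```python
import hashlib
import re

def deidentify(note: str) -> str:
    """Replace identifying strings in a clinical note with consistent
    surrogates, so the same value maps to the same token across documents."""
    # Toy patterns for a few identifier types; real de-identification
    # relies on trained NER models, not regexes alone.
    patterns = {
        "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,}\b"),
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def surrogate(kind: str, match: re.Match) -> str:
        # Hash the original value so repeated mentions of the same
        # patient identifier receive the same surrogate.
        digest = hashlib.sha256(match.group().encode()).hexdigest()[:8]
        return f"[{kind}-{digest}]"

    for kind, pattern in patterns.items():
        note = pattern.sub(lambda m, k=kind: surrogate(k, m), note)
    return note

note = "Seen 03/14/2023, MRN: 1234567, call 555-867-5309 to follow up."
print(deidentify(note))
```

Because the surrogates are deterministic, a researcher can still link records belonging to the same (unidentified) patient — the "still meaningful" requirement in the quote above.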
Rustogi’s colleague, Hadas Bitran, head of Microsoft’s Health AI and Health and Life Sciences, discussed several new offerings for AI from the Azure web services business.
The Azure AI Health Insights offering consists of pre-built machine learning models. Three models are initially being offered in a preview stage:
Patient timeline, which “uses generative AI to extract key events from unstructured data, such as medications, diagnosis and procedures, and organizes them chronologically to give clinicians a more accurate view of a patient’s medical history to better inform care plans.”

Clinical report simplification, which “uses generative AI to give clinicians the ability to take medical jargon and convert it into simple language while preserving the full essence of the clinical information so that it can be shared with others, including patients.”

Radiology insights, which “provides quality checks through feedback on errors and inconsistencies. The model also identifies follow-up recommendations and clinical findings within clinical documentation with measurements (sizes) documented by the radiologist.”
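The organizing step of the patient-timeline model can be sketched in a few lines. The extraction of events from unstructured notes is the generative-AI part and is not shown here; the `ClinicalEvent` type and sample data below are hypothetical stand-ins for whatever the extraction step produces. The point is only the chronological ordering that gives clinicians a coherent history.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClinicalEvent:
    when: date
    kind: str          # e.g. "medication", "diagnosis", "procedure"
    description: str

def build_timeline(events: list[ClinicalEvent]) -> list[ClinicalEvent]:
    """Order extracted events chronologically, oldest first."""
    return sorted(events, key=lambda e: e.when)

# Hypothetical events, as if extracted from free-text notes.
events = [
    ClinicalEvent(date(2023, 6, 2), "procedure", "Appendectomy"),
    ClinicalEvent(date(2021, 3, 14), "diagnosis", "Type 2 diabetes"),
    ClinicalEvent(date(2022, 1, 9), "medication", "Metformin started"),
]
for e in build_timeline(events):
    print(e.when, e.kind, e.description)
```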
Those three models join several pre-built models that were already offered for clinical trial matching and oncology phenotyping.
Also: 3 ways AI is revolutionizing how health organizations serve patients. Can LLMs like ChatGPT help?
A new offering called Azure AI Health Bot uses large language model technology to retrieve answers to medical questions from sources including a healthcare organization’s own database, or the US National Institutes of Health and the US Food and Drug Administration.
“An idea here is that this service helps customers create specialized co-pilot experiences,” Bitran told ZDNET in the same interview with Rustogi.
“What’s also interesting about this is that you are able to do a cascading effect,” said Bitran. “So, use your own sources, and if there’s nothing in your own sources, you are able to also provide answers based on the credible sources, and then, if there’s nothing in the credible sources, then you can just fall back to a generic answer.”
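The cascading behavior Bitran describes is a straightforward fallback chain, which can be sketched as follows. The lookup functions and sample data here are hypothetical — in the real service each tier would be retrieval against a document store rather than a dictionary — but the control flow is the same: try the organization's own sources, then vetted external sources, then a generic answer.

```python
from typing import Callable, Optional

def cascading_answer(
    question: str,
    own_sources: Callable[[str], Optional[str]],
    credible_sources: Callable[[str], Optional[str]],
    generic_answer: Callable[[str], str],
) -> str:
    """Try the organization's own sources first, then credible external
    sources, and only then fall back to a generic answer."""
    for lookup in (own_sources, credible_sources):
        answer = lookup(question)
        if answer is not None:
            return answer
    return generic_answer(question)

# Toy lookups standing in for retrieval against real document stores.
own = {"visiting hours": "Visiting hours are 9am-8pm."}
credible = {"flu symptoms": "Common flu symptoms include fever and cough."}

print(cascading_answer("flu symptoms", own.get, credible.get,
                       lambda q: f"I don't have a vetted source for: {q}"))
```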
Of course, there’s a lot of skepticism at present about using generative forms of AI, such as large language models, in sensitive practices such as healthcare. How does Microsoft think about such concerns?
“That’s a very good question, and a very relevant one,” said Bitran. “I definitely share a view that large language models need something in addition to them in order to provide good results.”
“The way we’re approaching it is, for every model that we create, if we’re using large language models, they will always be accompanied by healthcare-specific safeguards,” said Bitran.
“One of the more interesting approaches [to safeguards] is using smaller models, and rule-based models, in a hybrid model with the LLM, to keep the LLM honest, if you will,” said Bitran.
Also: Amazon AWS rolls out HealthScribe to transcribe doctors’ conversations
For example, in the pre-built model for clinical report simplification, “we don’t just ask the language model to explain it to me; we’re also implementing a lot of pre-processing and post-processing logic that allows us to take the outcome of the simplification, measure it according to simplification performance metrics,” explained Bitran. “We then apply some cross-reference to it to see whether the results are actually a simplification of the source, or whether there are all sorts of fabrications or things that are missing.”
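A post-processing check of this kind can be sketched as below. Microsoft has not published its specific metrics, so the checks here are illustrative assumptions: compare numeric facts between source and output to flag possible omissions or fabrications, and use a crude proxy (average word length) for whether the text actually got simpler.

```python
import re

def verify_simplification(source: str, simplified: str) -> dict:
    """Post-process a model's simplification: check that numeric facts
    from the source survive, and that the text is measurably simpler."""
    src_numbers = set(re.findall(r"\d+(?:\.\d+)?", source))
    out_numbers = set(re.findall(r"\d+(?:\.\d+)?", simplified))

    def avg_word_len(text: str) -> float:
        words = re.findall(r"[A-Za-z]+", text)
        return sum(map(len, words)) / max(len(words), 1)

    return {
        "missing_numbers": src_numbers - out_numbers,     # possible omissions
        "fabricated_numbers": out_numbers - src_numbers,  # possible fabrications
        "simpler": avg_word_len(simplified) < avg_word_len(source),
    }

source = "Echocardiogram demonstrates an ejection fraction of 35 percent."
simplified = "An ultrasound of the heart shows it pumps 35 percent of its blood."
print(verify_simplification(source, simplified))
```

A hybrid system along the lines Bitran describes would route any report that fails such checks back for regeneration or human review, rather than showing it to a patient.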
Bitran noted that the work on healthcare is done within what Microsoft has outlined as its “responsible AI framework,” which continues to be evaluated.
“That responsible AI framework is not just about privacy and security and accessibility and transparency, etc,” said Bitran. “It is also about correctness and accountability and about fairness.”
“Last but not least, our models are not intended to replace the physician,” said Bitran. “There is always a human individual; they’re intended to equip the clinicians with tools that would alleviate the burden, that would help them in their work.”