An Intel survey of healthcare decision-makers suggests great excitement for more widespread artificial intelligence and machine learning adoption, but also some skepticism.
In a survey of hundreds of healthcare decision-makers, Intel found that the percentage of respondents whose company is currently – or will be – using artificial intelligence nearly doubled after the onset of COVID-19.
Among the predicted use cases for AI: early intervention analytics, clinical decision support and specialist collaboration. “Artificial intelligence in health and life sciences has greatly accelerated,” said Stacey Shulman, vice president of the Internet of Things Group at Intel, in a blog post accompanying the findings.
“From helping clinicians develop personalized protocols to streamlining clinical workloads or unlocking insights in genomics, infusing AI into these industries may be much closer than many initially thought,” she said.
Why It Matters
Intel conducted an online survey of 200 senior decision-makers at healthcare organizations in April 2018, and a follow-up survey of 230 in July 2020.
In 2018, 37% of respondents said their company had deployed, or was planning to deploy, AI. In the 2020 survey, 45% said the same was true of their company before the pandemic.
That number swelled to 84% after COVID-19 began to sweep the country.
Survey results also suggested that confidence in AI is growing, with two-thirds of respondents saying they would trust AI to process medical records within two years and 62% saying they would trust AI to analyze diagnostics and screening.
Still, respondents expressed some reservations. Twenty percent said cost would be the most difficult challenge to overcome, 17% cited a lack of clinician trust in AI decisions, and 16% said that AI technology was still in its nascent stage.
Respondents also feared that AI would be poorly implemented, that it would be overhyped and that it would be responsible for a fatal error.
The Larger Trend
Although security considerations weren’t mentioned in the Intel survey responses, other experts have cautioned that AI and machine learning could be a double-edged sword.
Some kinds of threats leveled against the healthcare industry rely on AI and ML to perform complex, and harmful, actions in new environments.
There’s also the issue of bias: AI and ML aren’t immune from the prejudices of their creators, and systems that aren’t trained on representative datasets are unlikely to be accurate.