While experts are mostly in agreement about the benefits AI will provide medical practitioners, such as diagnosing illnesses very early on and speeding up the overall healthcare experience, some doctors and academics are wary we could be headed in the direction of data-driven medical practices too quickly.
One fear among academics is that people are expecting too much of AI, thinking it can match the kind of general intelligence humans possess and solve a broad range of tasks.
“All the successful AI applications to date are incredibly successful, but in a very narrow range of application,” said the University of Edinburgh’s Bundy.
According to Bundy, these expectations could have potentially dire consequences for an industry like healthcare. “A medical diagnosis app, which is excellent at heart problems, might diagnose a cancer patient with some rare kind of heart problem, with potentially fatal results,” he said.
Published last week, a report by health-focused publication Stat cited internal IBM documents showing that the tech giant’s Watson supercomputer had made multiple “unsafe and incorrect” cancer treatment recommendations. According to the article, the software was trained only to deal with a small number of cases and hypothetical scenarios rather than actual patient data.
“We created Watson Health three years ago to bring AI to some of the biggest challenges in healthcare, and we are pleased with the progress we’re making,” an IBM spokesperson told CNBC.
“Our oncology and genomics offerings are used by 230 hospitals around the world and have supported care for more than 84,000 patients, which is almost double the number of patients as of the end of 2017.”
The spokesperson added: “At the same time, we have learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives. This includes 11 software releases for even better functionality during the past year, including national guidelines for cancers ranging from colon to liver cancer.”
Another consideration is that the volume of data gobbled up by computers and shared around, as well as the data-driven algorithms that automate applications by using that data, could have ethical implications for the privacy of patients.
The advent of big data, now a multi-billion dollar industry covering everything from retail to hospitality, means that the amount of personal information that can be collected by machines has ballooned to an unfathomable size.
The phenomenon is being touted as a breakthrough for mapping out different diseases, predicting the likelihood of someone getting seriously ill and examining treatments ahead of time. But concerns over how much data is stored and where it is being shared are proving problematic.
Take DeepMind, for example. The Google-owned AI firm signed a deal with the U.K.’s National Health Service in 2015, giving it access to the health data of 1.6 million British patients. The scheme meant that patients handed their data over to the company in order to improve its programs’ ability to detect diseases. It led to the creation of an app called Streams, aimed at monitoring patients with kidney disease and alerting clinicians when a patient’s condition deteriorates.
But last year, U.K. privacy watchdog the Information Commissioner’s Office ruled that the contract between the NHS and DeepMind “failed to comply with data protection law.” The ICO said that London’s Royal Free Hospital, which worked with DeepMind as part of the agreement, was not transparent about the way patients’ data would be used.