I think it’s artificial intelligence.
AI stands poised to act as a force multiplier across every field of medicine, because rather than being useful against one kind of ailment – like antibiotics or radiation – AI can work alongside humans to make better decisions in the day-to-day, regardless of what the use case might be. In the same way that antimicrobial agents are the corollary and companion of germ theory, there’s every reason to believe that AI is what will enable us to apply our knowledge of “omics” (genomics, proteomics, metabolomics, etc.) to human health. We’ve started to interact directly with the information contained in the genome, so it stands to reason that the next big leap will center on information processing.
Multivariate analysis is by far the greatest strength of AI, because it enables the kind of contextual decision-making the human mind excels at, while also drawing on the eidetic memory of a hard disk. No emotional filtering is required, and there are no attentional omissions. AI doesn’t need sleep, and doesn’t get fatigued after focusing on one topic for too long. At the same time, AI has the benefit of massively parallel processing. The ability to handle huge volumes of data is of increasing value, and AI can drink from the firehose. With enough memory and processing power, a medical AI could hold a whole family tree’s worth of medical records in context, scour databases for pertinent diagnostic information, and call up banks of medical and social resources – all at the same time.
For the purposes of this discussion, I’m defining AI as a computerized system that can perform tasks usually requiring human intelligence, like speech and image recognition, translation between languages, or decision-making. But there are degrees of sophistication in such systems, and they can be under more or less computerized control depending on what we can currently ask computers to do in a reasonable amount of time. We don’t currently trust AI enough to let it be fully autonomous; you’ll notice that even in planes with autopilot, there are always trained human aviators. But there are smart systems with varying degrees of intelligence and automation operating in real time – like Google’s self-driving car. Weighted decision-making is a technique that lets software inch closer to human-level situational awareness, even in silico. A system doesn’t have to be HAL to be AI. (Given how that worked out, it probably shouldn’t be.)
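To make "weighted decision-making" concrete, here's a minimal sketch of the idea: each input signal gets a weight, and the system acts when the weighted evidence crosses a threshold. The signal names, weights, and threshold below are invented purely for illustration.

```python
# Minimal weighted decision-making sketch: every input signal contributes
# to a score in proportion to its weight, and the decision fires when the
# score crosses a threshold. All values here are illustrative.

def weighted_decision(signals, weights, threshold=0.5):
    """Return (decision, score): True if weighted evidence crosses threshold."""
    score = sum(weights[name] * value for name, value in signals.items())
    return score >= threshold, score

# e.g., a triage-style decision combining three noisy inputs
weights = {"heart_rate_anomaly": 0.5, "reported_pain": 0.3, "history_flag": 0.2}
signals = {"heart_rate_anomaly": 0.9, "reported_pain": 0.4, "history_flag": 1.0}

decide, score = weighted_decision(signals, weights)
print(decide, round(score, 2))  # True 0.77
```

The point isn't the arithmetic, which is trivial; it's that tuning the weights (by hand or by learning them from outcomes) is what lets software approximate situational judgement.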
State of the art
The health applications of software AI seem to stem mainly from its ability to remember and relate things, but also from its ability to personalize medicine, work fluently in natural language, and handle big data. Humans use context to determine the meaning of otherwise ambiguous words or events, and with natural language processing, so can AI. And these systems are in use today. A couple of worthwhile examples are the partnership between IBM’s Watson and Sloan-Kettering, and a medical AI called Praxis.
Watson has been in the news because of its performance on Jeopardy! (chess was the domain of its IBM predecessor, Deep Blue). It’s well versed in game theory, but it’s also capable of learning and analyzing new information, and now it’s applying its talents as a diagnostician. Watson is also working with a group called Wellpoint, and Wellpoint’s Samuel Nussbaum has said that in tests, Watson achieved a 90% correct diagnosis rate for lung cancer, while human doctors managed only 50%. IBM, Sloan-Kettering and Wellpoint are trying to train Watson as a cloud-based diagnostic aid, available to any doctor or hospital willing to pay.
But even Watson, with its formidable talents, wasn’t built for medicine. To see a medical AI in the field, look to Praxis: a piece of medical records handling software, built around a concept processing AI. It uses a learning model that records a doctor’s vocal or typed input, and then classifies it into a net of semantic nodes, based on how closely the words or phrases are related to concepts the program has already seen. Praxis remembers those relationships, too, so as it gets more use, it gets smarter and faster.
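Praxis’s actual algorithm is proprietary, but the learning model described above – phrases linked into a net of concepts, with links that strengthen through use – can be sketched in a few lines. Everything here (class name, example phrases) is invented for illustration:

```python
from collections import defaultdict

# Toy concept-net sketch in the spirit of Praxis (not its actual algorithm):
# phrases that appear together in a note become linked, and each repeat
# co-occurrence strengthens the link, so lookups improve with use.

class ConceptNet:
    def __init__(self):
        # links[a][b] = how many times phrases a and b appeared together
        self.links = defaultdict(lambda: defaultdict(int))

    def observe(self, phrases):
        """Record one note: every pair of phrases in it becomes more related."""
        for a in phrases:
            for b in phrases:
                if a != b:
                    self.links[a][b] += 1

    def related(self, phrase, top=3):
        """Return the concepts most strongly linked to `phrase` so far."""
        neighbors = self.links[phrase]
        return sorted(neighbors, key=neighbors.get, reverse=True)[:top]

net = ConceptNet()
net.observe(["chest pain", "shortness of breath", "ecg"])
net.observe(["chest pain", "ecg", "troponin"])
print(net.related("chest pain"))  # ['ecg', 'shortness of breath', 'troponin']
```

After just two notes, “ecg” is already the strongest association for “chest pain” – which is the whole pitch: the more a doctor uses it, the less typing the next note takes.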
If you’ve ever wondered whether there’s a way to do what 23andMe set out to do – fitting patient care to the risk-factor relationships found in the genome – there may be. 23andMe was very ambitious in what it tried to claim, which is why it ended up in trouble with the FDA, but the basic premise is sound. Genetically personalized medicine can already account for single-nucleotide mutations that impair a drug’s function, as demonstrated by the design of different drugs for different stages in the progression of CML, a form of leukemia. The Geisinger hospital system in Pennsylvania, which treats about three million people, is partnering with a company called Regeneron on a huge longitudinal genomics study that will work with anonymized exome data from DNA samples patients have volunteered. They intend to use that data to tailor health care to the patients in the study. As pioneers in the field, they’ll no doubt experience problems and setbacks, but the example Geisinger sets will be an important proof of concept.
The integrated, evolving AI
The important thing about force multipliers, ultimately, is that they reduce the amount of energy you have to spend to get a job done. This is where AI can really excel: offloading work from brains to silicon. Programmers have come a long way toward creating logically consistent software compatible with external control. What we need now is to iterate toward ever more independent, reliable computerized control systems that can fluently integrate environmental input, human direction, and their own software controls. The state of the art in AI is already pretty sexy, all things considered, but I want to prognosticate a little about how we could develop AI from here.
Imagine putting an AI to work on the Geisinger/Regeneron database. The system just begs for a control AI – leaving lab techs to manually scour DNA sequences is just cruel and unusual, even if they somehow speak Python. The database control AI would store the actual DNA sequences, of course, but it could also track the statistics of what DNA sequences tend to lead to what diseases, and even correlate that against living situations, environmental exposure and known disease clusters. It could produce visualizations of the data for the scientists and doctors who queried the database. Such a system would be a solid step toward an autonomous medical records management AI that would offload a huge amount of work from humans, freeing up desperately needed man-hours in the medical establishment.
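The core statistic such a database AI would track – how often a variant co-occurs with a disease, compared to the base rate among non-carriers – is simple to sketch. The records, variant IDs, and diagnosis labels below are fabricated for illustration; a real system would add proper epidemiological controls:

```python
# Sketch of a genotype-disease correlation query over anonymized records:
# compare the disease rate among carriers of a variant to the rate among
# non-carriers. All records and identifiers below are fabricated.

def relative_rate(records, variant, disease):
    """Return (carrier_rate, non_carrier_rate) for the given diagnosis."""
    carriers = [r for r in records if variant in r["variants"]]
    others = [r for r in records if variant not in r["variants"]]

    def rate(group):
        if not group:
            return 0.0
        return sum(disease in r["diagnoses"] for r in group) / len(group)

    return rate(carriers), rate(others)

records = [
    {"variants": {"rs123"}, "diagnoses": {"t2_diabetes"}},
    {"variants": {"rs123"}, "diagnoses": {"t2_diabetes"}},
    {"variants": {"rs123"}, "diagnoses": set()},
    {"variants": set(),     "diagnoses": {"t2_diabetes"}},
    {"variants": set(),     "diagnoses": set()},
    {"variants": set(),     "diagnoses": set()},
]

# carriers: 2 of 3 diagnosed; non-carriers: 1 of 3
print(relative_rate(records, "rs123", "t2_diabetes"))
```

Layer environmental exposure and geography onto the same join and you get exactly the disease-cluster correlations described above – tedious for a lab tech, trivial for a database AI.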
Envision the Praxis software mentioned above, but imagine that it made friends with the controller AI that administered the Geisinger/Regeneron genetics database. It could listen to a patient’s narrative, append it to the patient’s chart, and suggest diagnoses to support a physician. The AI could then use the data to track geographical clusters of medical problems, or diagnose and study syndromes with behavioral symptoms. Such software could be profoundly empowering to women and minorities; it would provide a confidential avenue for diagnosis, free of medical paternalism and independent of any one doctor’s biases. Further, it could parse out descriptions of symptoms, cross-correlate them with a patient’s genome and medical record, and compare that to the hospital database in order to report on any relationships it finds.
When it comes to hardware AI, there are a few ways this can go. Some systems seem beautifully tailored toward integrating AI. While I’m not a big fan of the Internet of Things, there’s a huge amount of untapped potential in terms of how your things can serve your health. Imagine a cross between Jarvis and BayMax. Suppose your grandma’s smart house was aware of her particular health issues – for example, that she’s at risk of having a stroke, which puts her at risk for a fall. A Fitbit-style bracelet with a six-axis IMU (accelerometer plus gyroscope) could collaborate with her house’s motion detection system to deploy her personal health care assistant and alert emergency services if it suspected she had fallen. But it could also closely monitor her heart rate and skin conductance, à la the Embrace, and append that timestamped data to her medical record. She could choose to allow her primary care doctor to release that anonymized data to a study designed to develop faster, more accurate diagnoses.
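The bracelet side of that scenario is already within reach of a simple heuristic: a fall shows up in accelerometer data as an impact spike followed by stillness. Here's a toy sketch; the thresholds and sample stream are invented, and a production detector would also use the gyro and much smarter filtering:

```python
import math

# Toy fall-detection heuristic over (x, y, z) accelerometer samples in g:
# an impact spike followed by near-stillness (close to the 1 g of gravity
# alone) is flagged as a suspected fall. Thresholds are illustrative.

IMPACT_G = 2.5  # acceleration magnitude suggesting an impact
STILL_G = 1.1   # near 1 g total, i.e. the wearer is barely moving

def suspect_fall(samples):
    """Return True if an impact spike is followed by a still period."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, mag in enumerate(mags):
        if mag >= IMPACT_G:
            after = mags[i + 1:]
            if after and max(after) <= STILL_G:
                return True
    return False

# quiet wear, an impact at sample 2, then lying still
stream = [(0, 0, 1.0), (0.2, 0.1, 1.0), (2.0, 1.5, 1.8), (0, 0, 1.0), (0, 0, 1.0)]
print(suspect_fall(stream))  # True
```

A detector this crude would be riddled with false positives on its own – which is precisely why the article has it collaborating with the house's motion sensors before calling anyone.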
Medical imaging is another place where hardware and software can work together with medical professionals to make a system greater than the sum of its parts. We’re already working on combining better math with modern medical imaging, to get finer and more accurate interpretations of the images we get out of an MRI. The longitudinal collection of personal environmental data, combined with a system that correlates patient outcomes with a series of medical images taken over time, could yield finer diagnostic accuracy and contribute to early detection.
But imagine you could integrate all of these notions: software controls, useful hardware, and imaging. It could supplement a pared-down hospital infrastructure that’s able to cater to patients who need more intensive care than what a well-stocked home diagnostics bot can provide. It really does sound like a system that could support BayMax, doesn’t it? At this level, the line between hardware and software, between product and producer, begins to blur. I think that’s where we’re heading. Toward a mostly public, much less formal, less appointment-based model of personally tailored health care, focused on prevention and administered by AI.
Fools rush in
I want to talk about the privacy and security implications of systems like these. The power held by an advanced AI with context-sensitive intelligence and access to your biometrics and genome just boggles the mind. Far beyond the purview of HIPAA compliance or iPhone fingerprint readers, what happens when someone steals your identity via your retinal scan? Such technology would create a whole new avenue for crime. And that’s assuming the only black hats are the outlaws. Perfect transparency may be the only way not to spiral out of control into a Black Mirror dystopia, where genetically targeted “approved content” is beamed straight to your optic nerve by the corporate state. Who controls the data?
Sufficiently advanced AI could call any number of its memories into context, weight them impartially, and do so in massively parallel fashion. This could afford superhuman judgement and reaction times. It could also allow detection of relationships too far separated in context to catch a human’s attention. But an AI advanced enough to do these things could still become hidebound in the tyranny of algorithms, and the larger the system, the more points of vulnerability there are. What happens to the patients if a critical care AI is hacked, corrupted, or just wrong? What do we do if the AI we put in control is quite positive it’s smarter than we are? What if it’s right? How much control do we want to give away?
As AI research expands and refines our understanding of intelligence and machine learning, we’ll see more and more applications cropping up. Some of the branches of AI will be useful to the military-industrial complex, no doubt. Because the stakes of integrating artificial intelligence and decision-making capabilities into medicine are so high, the systems we develop will need to be both robust and accurate. This isn’t a revolution that’ll happen in a year or two.
Long-term, however, the integration of AI into various facets of medicine could produce a revolution not seen since the discovery of antibiotics or the discovery of germ theory. The ability to tap the sum total of human knowledge in a particular field and to then apply that to an individual’s specific genome or particular situation could yield dramatically better outcomes than those we see today.