ED: It is our opinion that one must keep a balanced view of what is going on in Digital Health, and not blindly accept that everything digital is great, that it works, etc. Here’s a great perspective on what is going on from Mr. Timmerman; he says exactly what my cardiologist said to me when I asked him why he did not use the latest digital sensors, and it sure rings true:

…most doctors don’t want to do anything new unless it helps them make more money…




Tech entrepreneurs have been raving for a while now about big data changing the world, and it’s mostly bullshit. Venture capitalist Brad Feld made this point, more or less, when he was being purposely provocative at an Xconomy event last fall.

As a biotech journalist, I wanted to cheer “Preach On, Brother Feld!” Doctors are still kicking and screaming about being forced to use electronic medical records, more than 30 years into the PC revolution. We’ve had to pay them all off as a country to get them to quit using old-fashioned pen and paper. Hospitals have dozens of proprietary records systems that don’t talk to each other. While “big data” analysis is sweeping through and changing the way people forecast the weather, predict traffic patterns, and trade stocks, it’s always been hard for me to see how big data will crack into an industry as hidebound as healthcare. How many times have you written down your Social Security number on a patient intake form, when they could have just had it on file?

How are we suddenly going to wake up in an environment where we capture, store, retrieve, and analyze big volumes of medical data to improve wellness and patient care? Will we all end up wearing glucose monitors that stream real-time data to cloud-based supercomputers that use mathematical, predictive models to warn us when we’re headed toward diabetes? The future of personalized medicine sounds enticing, and there are already some great examples of a few drugs and diagnostics that do make healthcare more effective, and precisely targeted to the patient. But no one should underestimate the power of the status quo, and how allergic most healthcare players are to any of this disruptive change.

But after a recent visit with Colin Hill at Cambridge, MA-based GNS Healthcare, I’m starting to think that big data is now worth at least keeping an eye on in healthcare. The revolution, when it comes, will be driven by financial necessity.

Colin Hill, CEO of GNS Healthcare

GNS has been around for a long time already, and Hill is a smart guy and a regular speaker on the life sciences event circuit. GNS is one of the early movers in this world of what you could call “big data analytics for healthcare” or “data-driven decision-making for healthcare.” The company has had its ups and downs, and probably has been a little too far ahead of the times for its own good. But Hill has been a careful student of the healthcare markets, he’s been patient, and while he sees that big data hasn’t yet arrived in healthcare, he believes it’s only a matter of time.

When I spoke with him a week ago, he had just delivered a sobering message to biotech and pharma CEOs at a private gathering in Boston.

“You guys are not prepared for what you’re about to run into,” Hill says he told the group of executives. “A lot of CEOs talk a good game about moving ‘beyond the pill,’ but the level of chops and data assets and analytic tools needed to do this are beyond what most pharma companies have. If they don’t get ahead of it, payers will do it for them.”

Big data could mean a lot of things in healthcare, but GNS is now starting to find what looks like a truly useful niche.

Right now, most prescribing of drugs is based on trial and error. A patient comes in, sees a doctor, gets diagnosed with something like multiple sclerosis. The doctor, being the expert, is familiar with the various treatments that are approved by the FDA, and the body of medical evidence that each product has built up in controlled clinical trials. Based on that knowledge, and some information about the patient and the doctor’s own intuition or biases, he or she prescribes a medication and hopes for the best. If it doesn’t work, the doctor and patient move on to another drug or device.

Payers, whether they are at Medicare or UnitedHealth or Aetna or elsewhere, see enormous waste in this system.

As Hill puts it, the $2.7 trillion a year U.S. healthcare industry suffers from a massive ‘Wanamaker’ problem. Wanamaker, students of history know, was a 19th and early 20th century retailer who famously observed that half of the money he spent on advertising in traditional media outlets was wasted—the trouble was, he didn’t know which half.

By the end of the 20th century, Google came along and made it possible for advertisers to eliminate much of that waste, and aim their ads precisely where they could be most effective. It was a win for advertisers and a horrible loss for traditional media companies (don’t get me started.)

The “Wanamaker” problem in healthcare is equally big and ripe for disruption. Cancer drugs typically only work for about 25 percent of the patients who get them. Asthma drugs only work for about 60 percent. Rheumatoid arthritis meds work maybe half the time. A lot of money gets wasted on treatments that don’t work for an individual patient. As any student can tell you, biology is incredibly complicated. We don’t know what causes lots of diseases, rheumatoid arthritis included. We’ve certainly never had the ability to predict, with a high degree of mathematical confidence, which drug is most likely to work for a given patient.
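
To make the Wanamaker arithmetic concrete, here is a minimal sketch using the rough response rates cited above; the per-patient annual cost figures are purely illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope "Wanamaker" arithmetic: if a drug only works for a
# fraction of the patients who receive it, the rest of the spend buys no benefit.
# Response rates are the rough figures cited in the article; the per-patient
# annual costs are hypothetical, for illustration only.

drug_classes = {
    # name                   (response_rate, assumed_annual_cost_per_patient_usd)
    "cancer therapy":        (0.25, 100_000),
    "asthma therapy":        (0.60, 3_000),
    "rheumatoid arthritis":  (0.50, 25_000),
}

for name, (response_rate, cost) in drug_classes.items():
    wasted_share = 1.0 - response_rate
    wasted_per_patient = wasted_share * cost
    print(f"{name}: ~{wasted_share:.0%} of spend (~${wasted_per_patient:,.0f} "
          f"per patient per year) goes to patients who don't respond")
```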

That’s starting to change, partly because of what GNS is doing. GNS has gone around and struck licensing deals to stitch together a database with health data from 100 million American lives. It’s chock full of info from electronic medical records, registries, and claims datasets. These datasets contain patient characteristic information on things like age, gender, ethnicity, diagnosis, and smoking or non-smoking status. There’s also information on treatment history. Increasingly, data can be layered on top to include imaging scan results, genetic test results, clinical outcomes, and—here’s a really important part—financial outcomes for the patient.
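
As a rough illustration of the kind of layered record such a database stitches together, here is a minimal sketch; the field names are hypothetical and are not GNS Healthcare’s actual data model.

```python
# A sketch of the kind of patient record described above: demographics,
# treatment history, genetic/imaging results, and clinical plus financial
# outcomes. Field names are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TreatmentEpisode:
    drug: str                          # e.g. "interferon-beta"
    start_year: int
    relapsed: Optional[bool]           # clinical outcome, if known
    annual_cost_usd: Optional[float]   # financial outcome, if known

@dataclass
class PatientRecord:
    age: int
    sex: str
    ethnicity: str
    diagnosis: str                     # e.g. "multiple sclerosis"
    smoker: bool
    treatment_history: list[TreatmentEpisode] = field(default_factory=list)
    genetic_markers: dict[str, str] = field(default_factory=dict)  # from genetic tests
    imaging_findings: list[str] = field(default_factory=list)      # from imaging scans
```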

Over the past year, the data crunchers at GNS have been working on creating predictive models that aim to tell whether a given drug is likely to work for an individual. Those models will differ based on whether you’re trying to gain insight into multiple sclerosis, rheumatoid arthritis, or some other condition.

It’s no accident that GNS is focusing on multiple sclerosis and rheumatoid arthritis. These are chronic conditions. There are many different competing treatment options for patients, which look more or less interchangeable based on their clinical trial results. The products, even though they don’t work for all, generate revenue by the billions. There’s a huge amount of waste, and a lot of money to be saved by reducing waste.

That kind of market is where GNS thinks predictive algorithms can thrive. GNS can crunch through numbers on the actual experience of hundreds of thousands of patients with MS. It’s conceivable that if this data is properly used, it could tell you, hypothetically, that 55-to-60-year-old Asian-American female non-smokers who have relapsed after first-line treatment with Biogen Idec’s interferon-beta (Avonex) should go next to treatment with Biogen’s natalizumab (Tysabri) or Sanofi/Genzyme’s alemtuzumab (Lemtrada) instead of one of the new oral pills. Or maybe the software will say that Tysabri has a 65 percent chance of working for that patient, compared with a 50 percent chance for a new oral drug, which might be considerably different from what clinical trials suggested for that small subpopulation of patients. The options might be quite different, at least in theory, for, say, a 45-year-old Hispanic male with high levels of C-reactive protein markers in the blood.
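
GNS’s actual models are its own, but as a toy sketch of the general idea — turning retrospective patient data into a predicted probability of response to a given drug — something like the following would do, here using a simple logistic regression on synthetic data (the features, drug, and probabilities are all made up for illustration).

```python
# Toy sketch of per-drug response prediction from patient features.
# This is NOT GNS's method; it only illustrates the general idea of producing
# a predicted response probability for an individual patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic retrospective data: columns = [age, is_female, relapsed_on_first_line]
X = rng.integers(0, 2, size=(1000, 3)).astype(float)
X[:, 0] = rng.integers(30, 70, size=1000)   # age in years
y = rng.integers(0, 2, size=1000)           # 1 = responded to hypothetical "drug A"

model = LogisticRegression().fit(X, y)

# A hypothetical new patient: 58 years old, female, relapsed on first-line therapy
new_patient = np.array([[58, 1, 1]])
p_response = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability of responding to drug A: {p_response:.0%}")
```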

These are the kinds of questions that motivated GNS and a few other healthcare companies to start Orion Bionetworks last month. Orion got started with a $5.4 million financing from Janssen Research & Development (a unit of Johnson & Johnson) to bring together patient data from Accelerated Cure Project for Multiple Sclerosis, the Institute for Neurosciences at Brigham and Women’s Hospital, and PatientsLikeMe to build some predictive models of what works, and what doesn’t, for individual MS patients.

Within a few years, Hill says, this sort of question could be reduced to an app on a physician’s smartphone. All the heavy data and math would occur in the background, and presto, your doctor says that Teva’s glatiramer acetate (Copaxone) has the highest likelihood of success for you, based on all your patient characteristics, medical history, and genomic profile.

What’s driving this? It won’t happen just because consumers want another cool app on their iPhone. It will have to come from the payers, who are under intense pressure to curb runaway healthcare spending. They are the ones hearing employers screaming about insurance premiums spiraling out of control. They are motivated, Hill says, and they are getting much more savvy about data than they were a couple years ago.

“They are focused,” Hill says. “They’re saying to themselves, ‘I have 20,000 MS patients in my covered population, and the costs keep going up, and we don’t know what treatments are most effective. Instead of spending $25,000 a year on these patients, can we limit that to $22,000 or $20,000 a year? And can we do that while offering equivalent or better outcomes for patients?’”

That’s the ticket. Equal or better clinical outcomes (in this case, measured by multiple-sclerosis flare-ups or disability), and a better financial outcome for the patient and insurance company.
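
Taken at face value, the payer arithmetic in Hill’s hypothetical is simple, and the scale of the savings is why payers care; the figures below are just his illustrative numbers.

```python
# The arithmetic behind Hill's hypothetical: 20,000 covered MS patients,
# trimmed from $25,000 to $22,000 per patient per year.
patients = 20_000
current_cost = 25_000   # USD per patient per year
target_cost = 22_000    # USD per patient per year

annual_savings = patients * (current_cost - target_cost)
print(f"Annual savings: ${annual_savings:,}")   # -> Annual savings: $60,000,000
```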

Pharma companies, no surprise, haven’t yet gotten fully on board. Many have long paid lip service to “personalized medicine” or getting the right drug to the right patient at the right time, but the fact is they make more money under today’s system, when there’s a lot of trial-and-error prescribing. Still, GNS has a growing and diversified list of collaborators, which includes Pfizer, Aetna, the Dana-Farber Cancer Institute, and the CHDI Foundation.

Pharma companies have much to lose if big data analytics were to truly come of age sometime soon, since doctors could start to curb all their wasteful prescribing habits. Then again, pharma also might be able to turn this technology to its advantage. If you’re a multiple sclerosis drugmaker and you have this kind of fine-grained, predictive data on your drug’s efficacy profile, you now have some convincing evidence to make a sale, and you can save by cutting back on efforts to over-treat certain populations. You could target your ad budget to the best demographic possible, and see those ads convert into sales. You might be able to anticipate competitive threats to your market share, and design a chemical modification to a future drug that’s more effective for a certain segment of patients. You might be able to weed out likely non-responders from your clinical trials, improving the success rate of your pipeline candidates.

This definitely got my imagination going, but part of me says this is all a lot of wishful thinking. Anytime you’re talking about health data, privacy is always the big barrier. Most doctors are luddites who don’t like to switch to IT-based solutions, even when it helps them operate more efficiently. In today’s cost-constrained environment, most of them don’t want to do anything new unless it helps them make more money. How are they going to react when many will feel like their greatest value—their insights based on experience—is being undermined by software algorithms that can make recommendations based on far more data than a single human brain can process?

My own best guess is that the technologists will work out whatever flaws there are in the datasets (the garbage in/garbage out problem), assuage the privacy concerns, and find ways to integrate really cool stuff like genome sequences. Those things shouldn’t take too long.

Consumers, once they find out there’s a better way to predict their outcomes in health, will probably start demanding it, especially as they are on the hook for increasingly steep co-pays. The harder, longer-term work will be in getting pharma companies and doctors to figure out how they can still prosper in a future when the algorithms have real value and tell us things that human experts never could before.

Attached to my office bulletin board is a quote from Upton Sinclair: “It is difficult to get a man to understand something when his salary depends on not understanding it.” Right now, it’s not in the financial interest of physicians, hospitals, and pharmaceutical companies to understand the potential of big data in healthcare. To them, it’s bullshit. But whether it’s Colin Hill and his team, or someone else, healthcare is eventually going down this path of using predictive health models based on machine learning. The algorithms looking over these datasets will quickly detect things like the next big drug safety problem long before it becomes a catastrophe. The data will tell us which treatment is most likely to work for you. Our healthcare system will be in much better shape, and hopefully no longer gobbling up 18 percent of our gross domestic product, when this approach finally goes mainstream.

Luke Timmerman is the National Biotech Editor of Xconomy.

