Technology, Not Presidents, Drives Healthcare Costs

By | September 23, 2013

Last week I attended Brookings Papers on Economic Activity to watch some of the smartest economists in America debate some of the most interesting papers. The very first paper presented is of particularly timely interest: “Is This Time Different? The Slowdown in Healthcare Spending.”

Those of you who do not spend many happy waking hours parsing health statistics may be unaware that the rate of increase in healthcare spending has slowed in recent years. The administration and not a few people in the press are fond of claiming this as a victory for President Barack Obama’s Patient Protection and Affordable Care Act, aka Obamacare. The program is so fantastic on cost control, the argument goes, that providers have naturally started to control costs in preparation for the actual implementation. Authors Amitabh Chandra, Jonathan Holmes and Jonathan Skinner dismiss this explanation. Most of the cost controls haven’t kicked in yet, while one cost-increasing factor (the expansion of private insurance coverage to adult children under 26) has already taken effect. More importantly, as they note, “the downturn in health-care cost growth began in 2006, back when Barack Obama was still a relatively unknown senator from Illinois.”

Conservatives favor the Great Recession as the primary driver of cost declines. But most of the discussion gave this little weight; even if you think the income elasticity of healthcare expenditure is very high, the decline in income is not large enough to explain the sharp slowdown in cost growth. A similar difficulty faces those who would attribute the bulk of the decline to more cost-sharing — it may help, but it’s just not significant enough to have produced the sharp slowdown our nation has experienced over the last seven years. Overall, even over the last decade, the share of costs paid out of pocket has gone down, not up.

What about price controls? Here’s one of the interesting things from the paper: Medicare has held the line on prices, but utilization is high, so costs grew anyway. Private insurers have held the line on utilization, but had little success at controlling prices, so costs grew anyway. Medicaid has controlled both, but it’s not clear that this is a sustainable strategy. It’s quite hard to find a doctor taking new Medicaid patients, precisely because the reimbursements are so low. This is (barely) politically acceptable because Medicaid is not a huge percentage of the patient population, and anyway, it’s a population that doesn’t vote. This won’t necessarily still be politically viable as Medicaid expands to cover more (and slightly more affluent) people. Nor would it be economically viable as a system-wide strategy; not everyone can be the marginal-price consumer.

Which leaves us with technological change, and that’s what much of the discussion centered on. Cost growth, the argument goes, is largely driven by innovation. Not necessarily good innovation — quite a bit of time was spent denigrating proton-beam facilities (used for cancer treatment) that cost in the tens of millions of dollars and don’t seem to do patients much good, yet whose total number is set to double between 2010 and 2014. And this is where I started to get nervous. Most participants agreed that if you want to control costs, you need to stop third-party payers from paying for new technologies — particularly Medicare, which is not very discriminating, and which makes it hard for private insurers to deny a treatment that the U.S. government has thereby endorsed. Several people argued rather hopefully that the government could do this — and maybe even would do this, with moves, in Medicare and Obamacare, toward bundled payments and “Accountable Care Organizations.” But no one offered any reason to believe that the government, or the ACOs, would only shut down bad innovation.

Five years ago, when the national healthcare debate began in earnest, I worried that national healthcare would slow innovation. The U.S. is not an efficient user of health care, I argued, but our lavish reimbursements fund innovation. Much of that innovation is bad, which is true of basically any technological frontier; it takes a lot of users, and a lot of iterations, to figure out what works and what doesn’t. But some of it is good — life enhancing, or even extending. If the U.S. shut down that engine, some people might be helped now, but a lot of people in the future might suffer or die from things we could have cured, if we hadn’t shut down the innovation machine.

Over the years, I’ve worried less about this. In part that’s because the dominant strain in healthcare reporting has been to emphasize wasted care. The now famous, and occasionally controversial, Dartmouth Atlas research suggested that variance in spending between Medicare regions was not associated with better outcomes, only higher incomes for healthcare providers. “There is a lot of unnecessary care which can be cut to save money without significantly worse outcomes” has been the conventional wisdom among a lot of healthcare policy wonks and reporters, particularly those who were counting on the savings from Medicare cuts and comparative effectiveness research to fund the coverage expansions they supported.

Of course, this also suggests that insurance was probably less valuable to people in the present than initially assumed, so it didn’t necessarily shift the calculus for supporting or opposing Obamacare. But as an issue, I relegated innovation to the simmer burner, especially because the pace of prescription drug discovery slowed down, for a lot of reasons. These things suggested that the cost of Obamacare, in terms of future lives lost or impaired, was more likely to be lower than I thought.

But more recently, I’ve been looking at other research that suggests that technology does matter quite a lot. Economist Joseph Doyle of the Massachusetts Institute of Technology has a series of papers that use “natural experiments” to try to isolate the effects of treatment intensity on health. In the first, he looked at victims of auto accidents and found that people with insurance were more likely to live than people without insurance. Why? Because they got more treatment. He found similar results when he looked at neonates who were just under the weight threshold that classified them as “very low birth weight” babies (and thus got really intensive treatment), versus those who were just above the threshold, and thus got somewhat less intensive intervention. And in his most recent paper, he found a neat test: He looked at Florida tourists. Those folks were getting sick away from home, which eliminated a bunch of the normal factors you’d expect to confound your results (like social support or the fact that rich people may choose to live near better hospitals). All of them were at least wealthy, healthy and together enough to go on vacation. But when he compared folks who were sent to hospitals that treated a lot, he found that they had better outcomes than folks who got sent to places that did less.

In some sense, this is great news: More treatment does more! We have the tools to make ourselves healthier — and presumably, more tools mean more health! But for health wonks, it means that we have to sit down and look at some unhappy trade-offs. Basically, if you believe Doyle’s work, health insurance is probably pretty valuable — but its value depends on spending a bunch of money, which means that providing that health insurance is going to be very expensive. On the other hand, if you think he’s wrong, then Obamacare’s cost controls won’t harm us much. Then again, that also means that Obamacare probably isn’t going to do that much good, either.

Obviously, what we’d like is a system that simply stops doing stuff that doesn’t work. But as one of the participants pointed out, we kind of have that system, which is one possible reason why healthcare costs have moderated recently. Cardiologists found out — later than we’d like, to be sure — that stents don’t do as much good as we thought for patients with minor heart problems. So they’ve stopped doing them. Meanwhile, however, we’ve just developed left ventricular assist devices, which seem to do a lot of good (48 percent reduction in mortality) for patients with end-stage heart failure. There are 5 million patients in the U.S. with congestive heart failure, and LVADs cost, the participant noted, about $250,000 apiece. Now, not all of those patients will get (or require) LVADs, but even a small fraction will give a nice boost to national healthcare spending.
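To see why even modest uptake moves the needle, here is a back-of-the-envelope calculation using the participant’s figures — the 5 million patients and the $250,000 price come from the discussion above, while the 1 percent uptake rate is purely an illustrative assumption, not a projection:

```python
# Back-of-the-envelope LVAD spending estimate.
# 5 million CHF patients and $250,000 per device are from the discussion;
# the 1% uptake rate is a hypothetical, illustrative figure.
chf_patients = 5_000_000
lvad_cost = 250_000      # dollars per device
uptake = 0.01            # hypothetical fraction of patients receiving one

total_spending = chf_patients * uptake * lvad_cost
print(f"${total_spending / 1e9:.1f} billion")  # $12.5 billion
```

Even at that hypothetical 1 percent, the tab comes to $12.5 billion — real money, as national health accounts go.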

The process is slower than we’d like, but eventually, doctors mostly do stop doing procedures that don’t work. The problem is, we keep discovering new procedures that might work, and these procedures are expensive. Meanwhile, we also keep some of the old procedures that did turn out to work, so we’re still doing them too. Of course, you can argue that it doesn’t have to be that way — technological advance could lower costs, too. But so far, that hasn’t been the trend.

In a way, the most optimistic thing I heard was that Obamacare probably wouldn’t be any good at controlling costs. On the other hand, there goes the budget.
