Is the Affordable Care Act progressive in the most effective way?
In a very fine study, Henry Aaron and Gary Burtless at Brookings have looked at the ACA’s potential effects on income inequality and have preliminarily concluded that the ACA redistributes income—largely in the form of health benefits—to the poorest one-third of Americans. Most of the law’s additional subsidies—the expansion of Medicaid and subsidies for those buying insurance on the exchange—are highest for those with the lowest incomes. Offsets, such as some new taxes, fall less heavily on those lower income levels.
What the Aaron and Burtless study was never intended to assess—a lingering 21st-century concern with almost all government health policies—is the ACA’s effectiveness and efficiency, both for the public in general and for those with modest means in particular. For instance, many rewards of government health policy have traditionally been captured by health industry providers, who are able to charge consumers higher prices. A program can be progressive yet still end up charging the public an additional $2 for $1.50 or $1 worth of care.
The ACA does at times attempt to deal with some of these issues and includes several experiments. But it was mainly directed at improving access, not reducing health costs. Reforms beyond the ACA still are required on that front regardless of which political party accedes to power.
When the Obama administration recently delayed its mandate on out-of-pocket health costs, experts and politicians started debating whether this delay affects our longer-term ability to implement Obamacare. I don’t think it does, but I also think we’re missing the bigger point. Once again, the United States is facing the total disconnect between our nation’s health care policies (whether Obamacare, Ryancare, or “your favored politician’s name here”-care) and the simple, unavoidable arithmetic of health care costs.
Let’s examine the latest example. Obamacare includes a mandate on insurers that out-of-pocket health care costs cannot exceed $6,350 for an individual or $12,700 for a family, numbers often cited as “catastrophic.” At first glance, these limits may sound high: six or twelve thousand dollars is a sizeable expense. But consider: households spend an average of $23,000 a year on health care. If it is considered catastrophic to ask some households to pay $12,000 in out-of-pocket expenses, then how can—or, more accurately, how do—all households cover costs that average almost twice as much?
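The arithmetic behind that question can be sketched in a few lines, using the dollar figures cited above (2013-era numbers from the text, not current law):

```python
# Figures from the text above (circa 2013); illustrative, not current law.
family_oop_cap = 12_700        # ACA annual out-of-pocket cap for a family
avg_household_spend = 23_000   # average annual household health spending

# Even a family that hits the "catastrophic" cap directly pays only part
# of the average household's total bill; the rest is paid indirectly
# through premiums, taxes, lower wages, and borrowing.
indirect_share = 1 - family_oop_cap / avg_household_spend
print(f"paid indirectly: {indirect_share:.0%}")  # prints "paid indirectly: 45%"
```

Even at the cap, in other words, nearly half the average household’s health bill has to come from somewhere else.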
A similar mathematical conundrum is playing out in another part of Obamacare. Congress determined that we shouldn’t have to pay more than 9 or 10 percent of our income for a moderately comprehensive health policy in the new health exchanges. But consider: health costs now average about one-fifth of personal income and one-third of money income. So how do we cover the difference?
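A minimal sketch of that gap, using the essay’s ratios; the $60,000 income is a hypothetical figure chosen purely for illustration:

```python
# Hypothetical household; the ratios (a ~9.5% premium cap, costs at about
# one-third of money income) come from the text above.
money_income = 60_000
premium_cap = 0.095 * money_income   # most the household pays for exchange coverage
avg_health_cost = money_income / 3   # average costs at one-third of money income

# The difference must be covered somewhere else in the economy.
gap = avg_health_cost - premium_cap
print(round(gap))  # 14300
```

For this hypothetical household, the capped premium covers barely a quarter of its share of average costs; the remaining $14,300 does not vanish.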
The simple answer is that if we don’t pay in one way, we pay in another. Mandate any new limit on what consumers have to pay directly—on out-of-pocket deductibles, Medicare co-payment rates, drug costs under the Part D legislation pushed by President George W. Bush, or our share of the cost of health insurance in President Obama’s new health care exchanges—and those expenses don’t simply disappear. They just get tacked on somewhere else.
Of course, I’m only talking about averages. So you might object, “Well, at least I’m not the one who pays.” Perhaps true, but not as much as you might think.
Perhaps you are fairly healthy and have lower health needs. Insurance policies, however, shift costs from the unhealthy to the healthy. That’s as it should be, but the cost of insurance adds to any out-of-pocket cost. Moreover, since unhealthy households tend to have lower incomes to start with and on average likely can’t cover even average costs of $23,000, healthy households probably pay at least that average amount, and likely more, to cover the income shortfalls of the less healthy. So being healthy doesn’t let us off the hook.
Perhaps you are middle class or even poor. The government’s health policies redistribute costs from those with higher incomes to those with lower incomes, including the retired. We might think that we avoid paying these high health costs by shifting tax burdens to the rich. Unfortunately, government health costs are already so high that the middle class has to share in the burden of paying for them.
More importantly, even if those health costs could be placed entirely on the rich, the rest of us would still pay what are called opportunity costs. When our elected officials require that a tax be spent on health care, they simultaneously decide that it can’t be spent on education or training or highways or other goods and services. The decline in education spending in recent years while health costs continued to rise provides only the latest piece of evidence.
Regardless of cost shifts to the healthy and those with higher incomes, we still pay a lot ourselves, only indirectly. In particular, we pay through lower cash wages when employers purchase health insurance, an important but often ignored aspect of the slow growth in cash compensation for over three decades. We also pay a decent amount through our own taxes, including federal income and Medicare taxes and all those state excise and sales taxes, often on businesses, that get passed on to us in the form of higher prices on what we buy. Finally, we pay a lot by borrowing from China, Japan, oil-exporting countries, and, more recently, the Federal Reserve, and then passing those outstanding balances and interest payments to our children.
And if paying a lot isn’t bad enough, these methods of paying help ensure that we don’t always get our money’s worth. There’s fairly clear evidence that for every $100 of costs pushed into indirect and hidden budgets, total costs rise by more than $100 as health care providers find it easier to raise their prices.
So, the next time someone tells you that we can’t afford health costs that are only a fraction of what we actually pay, ask him where he thinks the extra money comes from.
Worried about the stagnation of income among middle-income households? Or about the growth in health care costs? The two are not unrelated. In fact, middle-income families have witnessed far more income growth than the change in their cash incomes suggests once we count the better health insurance most receive from employers or government. But is that all good news? Should ever-increasing shares of the income that Americans receive from government in retirement and other transfer payments go directly to hospitals and doctors as opposed to other needs of beneficiaries? Should workers receive ever-smaller shares of compensation in the form of cash?
The stagnation of cash incomes in the middle of the income distribution now goes back over three decades. Consider the period from 1980 to 2011. Cash income per member of a median income household, which includes items like wages and interest and cash payments from government like Social Security, only grew by about $4,300 or 27 percent over that period, when adjusted for inflation. From 2000 to 2010, it was even negative. Yet according to data from the Bureau of Economic Analysis, per capita personal income—our most comprehensive measure of individual income—grew 72 percent from 1980 to 2011.
How do we reconcile these statistics? By disentangling the many pieces that go into each measure.
Growing income inequality certainly plays a big part in this story: much of the growth in either cash or total personal income was garnered by those with very high incomes. So the growth in average income, no matter how measured, is substantially higher than the growth for a typical or median person who shared much less than proportionately in those gains. But personal income also includes many items that simply don’t show up in the cash income measures. Among them is the provision of noncash government benefits, such as various forms of food assistance.
Health care plays no small role. In fact, real national health care expenditures per person grew by 223 percent or $6,150 from 1980 to 2011, much more than the growth in median cash income. If we assume that the median-income household member got about the average amount of health care and insurance, then we can see how little their increased cash income tells them or us about their higher standard of living.
Getting a bit more technical, there’s a danger of over-counting and under-counting health care costs here. Some of the median or typical person’s additional cash income went to extra health care expenses, so the additional amount he or she had left for all other purposes was even less than $4,300. However, individuals pay only a small share of their health care expenses directly; the vast majority is covered by government, employer, or other third-party payments. So, roughly speaking, typical or median individuals still got well more than half of their income growth in the form of health benefits.
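The rough decomposition runs as follows, using the growth figures above (a crude calculation that sets aside the over- and under-counting just described):

```python
# 1980-2011 growth figures from the text, inflation-adjusted.
cash_growth = 4_300     # growth in cash income per member of a median household
health_growth = 6_150   # growth in national health spending per person

# If the median person received roughly the average increase in health
# benefits, health accounts for well over half of combined income growth.
total_growth = cash_growth + health_growth
health_share = health_growth / total_growth
print(f"health share of growth: {health_share:.0%}")  # about 59%
```

Under these assumptions, roughly three-fifths of the median person’s income gains over three decades arrived as health benefits rather than cash.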
The implications stretch well beyond middle-class stagnation. Employers face rising pressure to drop insurance so they can provide higher cash wages. For instance, providing a decent family health insurance package can roughly double an employer’s cost for a worker paid the minimum wage. The government, in turn, faces a different squeeze: as it allocates ever-larger shares of its social welfare budget to health care, it grants smaller shares to education, wage subsidies, child tax credits, and most other efforts. Additionally, the more expensive the health care the government provides to those who don’t work, the greater the incentives for them to retire earlier or remain unemployed.
In the end, the health care juggernaut leaves us with good news (that our incomes indeed are growing moderately faster than most headlines would have us believe) as well as bad news (that health care remains unmerciful in what it increasingly takes out of our budget).
In last week’s State of the Union speech, President Obama put great emphasis on expanding early childhood education. He’s not alone in recognizing the vital role of education as the launching pad for 21st century growth. George W. Bush wanted to be known as the “education president,” and so did his father, George H.W. Bush.
Many governors have similar aspirations. Jerry Brown, for instance, has gotten headlines for his efforts to restore the California university system to its former high status. State support for higher education has fallen dramatically there, particularly as a share of the budget and of Californians’ incomes but also in real terms. Brown even supported a tax increase to try to reverse this trend.
While I strongly support these types of efforts, pro-education governors and the president are right now fighting a losing battle. Their new initiatives merely slow their retreat before the health cost juggernaut.
California isn’t much different from many other states. The college bound and their parents witness this declining state support in the form of ever-rising costs and student debt. Less recognized is the fall in academic rankings of the nation’s leading public universities, such as many of the formerly extolled California universities and my own alma mater, the University of Wisconsin–Madison.
State support of education hasn’t just declined at postsecondary schools. In recent years, legislators have assigned K–12 education smaller shares of state budgets as well. During the recession, teachers were laid off and not replaced in many states. Efforts to expand early childhood education have also stalled, although the president’s initiative may give it some temporary momentum.
Federal spending policies only reinforce the longer-term anti-education trend. An annual Urban Institute study on the children’s budget suggests future continual declines in total federal support for education as long as current policies and laws hold up.
Education spending will continue to decline as long as health costs keep rising rapidly and eating up so much of the additional government revenues that accompany economic growth. The figure below, prepared by National Governors Association (NGA) Executive Director Dan Crippen and presented by his deputy, Barry Anderson, at a recent National Academy of Social Insurance conference, tells much of the state story: health costs essentially squeeze out almost everything else.
Fiscal 2011 data based on enacted budgets; fiscal 2012 data based on governors’ proposed budgets
Source: National Association of State Budget Officers, as presented by Dan Crippen, National Governors Association
These rising health costs don’t just place a squeeze on government budgets; they also are one source of the paltry growth in median household cash income over recent decades.
Within states, health costs show up primarily in the Medicaid budget. As the NGA numbers demonstrate, recent federal health reform did little and is expected to do little to control these state costs, despite large, mainly federally financed subsidies for expanding the number of people eligible for benefits.
With populations aging, state and federal governments now also face demographic pressures to increase their health budgets. Large shares of the Medicaid budget go to long-term care and related supports for the elderly and the disabled. This budgetary threat extends to revenues as well, as larger shares of the population retire, earn less, and pay fewer taxes.
The next time someone tells you that we should wait another ten years to control health costs because we’ll be so much smarter and less partisan then, remind him or her that this procrastinating implicitly advocates further zeroing out state and federal spending on education—and the children’s budget more generally. Presidents and governors will never succeed with their education initiatives until they stop the health cost juggernaut in its tracks.
One of the many dilemmas surrounding federal health care policies is that the government only partially insures most people when it subsidizes health care, but we want to pretend that once “insured” we are all entitled to the maximum health care available. This puts a lot of weight on the definition of “insurance” and creates misunderstandings about what the government does and does not do.
This issue came up in a column by Bruce Bartlett, who notes that Republicans may now oppose an individual mandate, but they do support (directly or indirectly) a mandate on hospitals to provide emergency care. Moreover, while ignoring their effective support of this mandate, and the effective taxes necessary to pay for it, Republicans maintain that the emergency-care mandate means that everyone has some amount of insurance coverage, however partial it may be.
This debate raises the question of what it means to be “insured.” No government plan covers everything. For those soon to have access to the exchange subsidy available through Obamacare, the “silver” and “bronze” plans that could be subsidized still cover only some costs. Medicaid, in turn, generally pays providers less than do other insurance plans; as one result, the more highly paid (and, often, more highly skilled) providers are less available. Similarly, Medicare does not cover all health services, including long-term care, and some doctors now refuse new Medicare patients, though that system’s payment rate is still higher than Medicaid’s.
You may argue that you want equal coverage—if some people get Cadillac coverage, everyone should. However, no elected official from either party seems willing to raise the taxes necessary to pay for such an expensive system. The reason is obvious: such health care would absorb all the revenue currently raised by the federal government and then some, leaving nothing for other government functions.
Even then, some people would step outside the system and buy a Mercedes policy, so inequality in health care would remain. Thus, the notion that everyone gets the same health insurance coverage, even in the most nationalized health system, is pure myth. But if people are not going to receive the Cadillac or Mercedes coverage from government that others obtain privately, how should Congress design policy with those multiple gaps in mind?
I don’t think there is any easy answer, but I do think that researchers and analysts should be more precise when reporting on “insurance” coverage. For example, the Congressional Budget Office produces counts of how many people would be insured under various options, but such estimates by themselves are misleading. Insured and not insured for what? For instance, if everyone received a simple (say, $5,000) voucher, with few restrictions other than that it must cover health care, almost everyone would buy at least a $5,000 insurance policy. On the other hand, if government dictated that the voucher had to be used to buy an expensive plan that many people couldn’t afford, then supplying a voucher would not produce fairly universal (yet partial) coverage.
Alternatively, one can’t assume that a highly regulated system will automatically provide whatever care is specified, since what it pays affects which providers participate in the system. The implicit assumption—and I am not judging it here—may be that many providers are so overpaid that cutbacks would have only limited effect on the care provided or the quality of the doctors and nurses who would accept a lower-paying career.
The ideal but difficult approach for researchers and budget offices, I think, is to note as best as possible what coverage is provided by regulation or subsidization of emergency rooms, Medicaid, Medicare, exchanges—indeed, of each government engagement in the health care economy. Note the expected gaps, whether in preventive care, higher-priced doctors, drugs, or other services. Finally, compare the extent to which taxpayers and insured individuals avoid those coverage gaps by paying higher taxes or more for their insurance.
In any case, a dichotomous count of who is “insured” or “not insured” is too simplistic. Almost any government health insurance policy is partial in care and cost. If Republicans want to claim that emergency room care is a type of insurance, then they should also acknowledge what is not insured through that mechanism and the implicit taxes on those who end up covering the emergency room cost. If Democrats want to claim that vouchers provide less insurance than a more regulated system, then they, too, should specify just what additional insurance they claim will be covered, at what cost to whom. Both parties should also make coverage comparisons for systems that are equally cost constrained.
Democrats and Republicans Favor Medicare Cuts and Then Deny It
Medicare is taking on a primary role in the presidential race. The discussion often turns to whether the program should continue in its current form, with more direct government controls over costs, or shift its emphasis to vouchers or premium support plans. Let’s try to set the record straight.
Lowering Medicare spending growth over the next 10 years from, say, an additional $500 billion to an additional $400 billion means spending $100 billion less on covered services. For budget purposes, the source of the saving doesn’t matter: it is a benefit reduction.
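The accounting identity is worth spelling out; a one-line sketch using the hypothetical figures above:

```python
# The budget identity described above, in $billions over 10 years.
baseline_extra_spending = 500   # projected additional Medicare spending
reformed_extra_spending = 400   # additional spending under a proposed reform

# However the saving is achieved (provider-payment cuts or premium-support
# caps), $100 billion less spent is $100 billion less purchased.
benefit_reduction = baseline_extra_spending - reformed_extra_spending
print(benefit_reduction)  # 100
```

No labeling of the mechanism changes the subtraction.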
Both presidential candidates claim to save money on Medicare without cutting benefits. President Obama says his reforms “will save Medicare money by getting rid of wasteful spending…that won’t touch your guaranteed Medicare benefits. Not by a single dime.” Meanwhile, Governor Romney promises that his “premium support” plan will save money while still providing “coverage and service at least as good as what today’s seniors receive.”
But politicians aren’t the only ones dispensing that free-lunch rhetoric. Even highly respected journalists and researchers get pulled into it.
Consider two New York Times stories. After the first presidential debate, Michael Cooper, Jackie Calmes, Annie Lowrey, Robert Pear and John M. Broder said that President Obama “DID NOT CUT BENEFITS by $716 billion over 10 years as part of his 2010 health care law; rather, he reduced Medicare reimbursements to health care providers.” A few days later, David Brooks cited an AMA study of a premium support plan put forward by vice presidential candidate Paul Ryan and Democratic Senator Ron Wyden, saying that “costs might have come down by around 9 percent with NO REDUCTION IN BENEFITS” [cap emphases mine].
Can you see what is going on? Politicians, reporters, and experts all recognize that cost growth must be brought under control. But they also want to suggest that benefits won’t be reduced—if only we go with a particular approach.
It’s one thing to say that we can spend $100 billion less on health care so we can use the money better for education or tax cuts or paying off our debt. But it’s another thing to pretend that we can get $100 billion more in educational benefits or money in our pockets and absolutely the same quality of health care.
We know from personal experience that certain medical procedures, at the end of the day, are worthless or worse. But there’s no budget line called “worthless health care” that our elected officials can bravely vote to reduce.
Instead, we are left with blunt instruments to control costs. A Medicare board may recommend or members of Congress may elect to cut payments to providers, as they have done many times in the past. One can argue such cutting may not produce a great loss in services, depending upon how providers and consumers react. But no loss whatsoever? Come on! Try lowering government payments for anything—rental vouchers, school lunches, highways—and see if the same services are provided.
Similarly, suppose that Congress puts more Medicare recipients into a premium support system, like Medicare Advantage–type plans run by health maintenance and similar organizations. The system then limits the growth rate of payments to those groups. Again, there’s less money to go around.
Both the regulatory and voucher approaches have a precise accounting correspondence. If the government spends $100 billion less, then it purchases $100 billion less in services and makes $100 billion fewer payments to providers.
Back to the presidential and vice presidential debates. Directly trying to control prices for individual services may not have the same effect as trying to control the total amount paid for all services under a premium, and vice versa. But no candidate can deny that he favors benefit cuts relative to today’s unsustainable promises.
To add to the confusion, each side talks as if some idealized system of cost control or premium support exists. Almost inevitably, we will be taking ideas from both approaches. We’ll cut back on high reimbursement rates when we believe the effect on actual services would be moderate and, at the same time, use limited budgets to encourage providers to operate more efficiently. For instance, we might lower the payment rates for many operations faster and simultaneously induce more Medicare recipients to opt into groups like Kaiser-Permanente that make many allocation decisions within a fixed budget.
Ferreting out the truth in this Medicare debate also requires looking beyond health care. Benefit losses in health care must be contrasted with benefit gains elsewhere. Yet even health care will likely be much worse if we continue to borrow hundreds of billions of dollars more from unfriendly nations and let excessive debt inhibit economic growth.
Bottom line: both parties favor cutting Medicare benefits, or, more accurately, slowing down the rate of benefit growth. The issue isn’t whether but how this can best be done.
Recent newspaper articles have highlighted autism studies that lean toward genetic causes on the one hand and environmental on the other. One notes correlations with the age of fathers and the genetic mutations that we all inherit but that increase with a father’s age. Another suggests that we have a weakened resistance to germs because we aren’t exposed to as many in our cleaner, less outdoor society. Most of us are also familiar with past studies that failed to find evidence for the popular thesis that immunizations given to young children increase the probability of autism.
These autism studies are mere examples of the many types of epidemiological research that try to investigate outbreaks of disease, assess exposure risks, or figure out why certain populations seem to be more or less immune to various health threats. The research often looks for both good and bad exceptions to averages. Malcolm Gladwell’s introduction to his popular book Outliers, for instance, points out the “Roseto Mystery”: the studies by Stewart Wolf and John Bruhn on why people living in Roseto, Pennsylvania, have relatively fewer heart attacks and live longer than those living elsewhere.
Progress? Yes. Yet someday these one-off studies will be likened to the medical science of the late Middle Ages. Not for their conclusions, but for the years, and in some cases decades, of ex post data gathering required before any conclusions are reached.
Imagine a different world, in which data on these populations had already been transferred through electronic health records to the Centers for Disease Control (CDC) or a similar agency. In the world of big data, one doesn’t always work from casual observation to hypothesis to painstaking data gathering—sometimes guessing at the right sample populations to begin following, perhaps for years and decades into the future. In this imagined world, much data on them and on many comparison populations would already have been gathered.
Research, of course, is always somewhat haphazard. You never know what you are going to find, and when you find it, you need to determine whether “it” is genuine or an anomaly. But with large amounts of data already available, the odds of finding “it” and proving “it” are magnified.
In this new world, research could also proceed from computer-generated detections of correlations to hypothesis and theory, rather than the other way around—in some ways reversing the traditional methodology of modern science from Descartes onward. Thus, correlations at times are found even when not originally hypothesized, and discoveries may abound. Although some relationships may simply reflect random chance—flip a coin enough times and heads will eventually pop up 20 times in a row—rechecking is easy by testing different subsets of big data sets.
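The rechecking step can be sketched as follows: scan many candidate variables for a correlation with an outcome, then validate the “winner” on a held-out subset of the data. Everything here is synthetic; the data, sample sizes, and variable names are purely illustrative.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Purely random data: 200 candidate "exposures", one outcome, no real link.
n, n_vars = 1_000, 200
outcome = [random.random() for _ in range(n)]
exposures = [[random.random() for _ in range(n)] for _ in range(n_vars)]

# Scan the first half of the data for the strongest correlation...
half = n // 2
best = max(range(n_vars),
           key=lambda i: abs(corr(exposures[i][:half], outcome[:half])))
scan_r = abs(corr(exposures[best][:half], outcome[:half]))

# ...then recheck on the held-out half. Chance "discoveries" usually shrink.
holdout_r = abs(corr(exposures[best][half:], outcome[half:]))
print(round(scan_r, 3), round(holdout_r, 3))
```

The scan almost always turns up a seemingly notable correlation in pure noise; the held-out half is what exposes it as the statistical equivalent of a long run of heads.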
With so many relationships to be examined, whether with traditional or new methodology, new understandings can proliferate, as well as quicker rejection of hypotheses that cannot be substantiated. For autism, for instance, we would know much more quickly about its prevalence in different geographic regions with different environmental exposures and about the effectiveness of various interventions, from diets to drugs to early educational efforts.
Similarly, we would uncover much earlier warning signals, whether of a sudden flu epidemic or an increase in the prevalence of Alzheimer’s or heart disease by region, sex, race, or other characteristic.
For several years I was privileged to work with a group of very fine doctors, researchers, lawyers, economists, and other health care experts on the National Committee on Vital and Health Statistics. Its primary interest then—and, to some extent, now—was to expand the use of electronic health records (EHRs).
Many associate electronic health records with better transmission of information from one hospital, doctor, or other health care provider to the next. After Katrina, for instance, we were all appalled at the inability of victims to have their medical records available to those treating them in neighboring jurisdictions.
Others recognize that EHRs make it easier to detect sources of individual health problems. Thanks to EHRs, most pharmacists now get computer-generated alerts about drugs that interact dangerously with one another; doctors can plug symptoms into computers that spew out lists of possible causes, including some they might have neglected, forgotten, or never learned.
Many of us on the committee, however, ultimately hoped to create a world in which much faster, more thorough, and more comprehensive public health research could be performed on the causes of and possible cures for diseases, malignancies, and chronic conditions; on outbreaks of new health problems; and on local or regional stories of failure or success in places like Roseto. How many readers of a story about a place like Roseto realize that in today’s world we shouldn’t have to wait decades to accidentally discover such geographic variations?
In a talk I gave several years ago at the National Academies, I argued that we may achieve real progress only when consumers begin to demand these improvements. What if a subset of parents of autistic children demanded that their children’s health records be gathered together at the CDC or some other place? They would work with IT professionals, medical researchers, doctors, and teachers with special knowledge of autism to create common data fields. With enough participants, data provided by only a subset of cases would be sufficient for some research.
Add to these parents of autistic children the children of parents with Alzheimer’s or simply people like me who know the auto-immune problems that my children could have inherited from both sides of the family. What if we were to rank our doctors and their practices by how well they participate in such shared data gathering? What if some foundations helped organize these consumers?
In the end, organizing consumers so they can demand the possible may be more important than all the money in the world, which is roughly what we already seem to be spending on health care without making the progress we can and should.
We are on the cusp of great possibilities in health research, a scientific revolution of sorts. Big data, electronic health records, and government committees provide some of the wherewithal, but we’ve got to make the leap.