My colleagues and I recently published research showing that younger age groups are falling behind their parents in wealth accumulation and explaining the story behind our numbers. Some have raised questions about how we use our data, and I want to take some time to further explain our research.
Our study shows that the average wealth, or net worth, of these younger age groups has fallen fairly dramatically relative to older age groups. In response, some have said that median wealth is more important than average wealth. In fact, both are important. Average wealth tells us how a group is prospering as a whole relative to other groups; median wealth tells us how some “typical” person might be doing. One complication with focusing on median wealth is that it doesn’t show where all the remaining wealth goes. In a similar vein, if you were studying small business ownership by age or race, the median value might be zero for all groups. The average values would be greater than zero and thus would allow comparisons by groups.
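A toy calculation, using invented numbers rather than our survey data, makes the point: the median can be zero for every group while the averages still differ in ways worth comparing.

```python
def median(values):
    """Middle value (or average of the two middle values) of a sorted list."""
    s = sorted(values)
    mid = len(s) // 2
    return (s[mid - 1] + s[mid]) / 2 if len(s) % 2 == 0 else s[mid]

def mean(values):
    """Simple arithmetic average."""
    return sum(values) / len(values)

# Invented small-business equity for two hypothetical groups of ten
# households each; most households own no business at all.
group_a = [0, 0, 0, 0, 0, 0, 0, 50_000, 120_000, 400_000]
group_b = [0, 0, 0, 0, 0, 0, 0, 0, 30_000, 60_000]

# The median is zero for both groups, so it cannot distinguish them...
print(median(group_a), median(group_b))  # 0.0 0.0
# ...but the averages still allow a comparison across groups.
print(mean(group_a), mean(group_b))      # 57000.0 9000.0
```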
Consider the median household age 56–64 in 2010. True, it is only slightly richer than the median household of a similar age in 1983 ($179,400 versus $143,150). Still, the median household age 29–37 in 1983 had $46,234 in wealth, while the median household in that age group in 2010 had only $15,900, roughly a third of what their parents’ generation had at the same age.
Median and average net worth by age is reported here. Come to your own conclusion.
Another footnote: Our study did not look at the decline in defined benefit wealth. However, the availability of such wealth has declined more for younger than for older groups. Moreover, the valuation of defined benefits and annuities goes up for those who have them when interest rates go down. Older individuals with more defined benefit wealth technically saw the value of that wealth go up after the Great Recession.
You can slice and dice these data in many ways, but the empirical data speak for themselves: younger age groups have fallen behind in relative terms. All sorts of factors are involved: the Great Recession and its impact on housing, student debt, wages, and so forth. Each is worthy of our attention.
The young have been faring poorly in the job market for some time now, a condition only exacerbated by the Great Recession. Now comes disturbing news that they are also falling behind in their share of society’s wealth and their rate of wealth accumulation.
Signe Mary McKernan, Caroline Ratcliffe, Sisi Zhang, and I recently examined how different age groups have shared in the rising net wealth of the U.S. economy. Despite the recent recession, our economy in 2010 was about twice as rich both in terms of average incomes and net worth as it was 27 years earlier in 1983. But not everyone shared equally in that growth.
Younger generations have been particularly left behind. Roughly speaking, those under age 46 today, generally the Gen X and Gen Y cohorts, hadn’t accumulated any more wealth by the time they reached their 30s and 40s than their parents did over a quarter-century ago. By way of contrast, baby boomers and other older generations, or those over age 46, shared in the rising economy—they approximately doubled their net worth.
Older Generations Accumulate, Younger Generations Stagnate
Change in Average Net Worth by Age Group, 1983–2010
Source: Authors’ tabulations of the 1983, 1989, 1992, 1995, 1998, 2001, 2004, 2007, and 2010 Survey of Consumer Finances (SCF).
Notes: All dollar values are presented in 2010 dollars and data are weighted using SCF weights. The comparison is between people of the same age in 1983 and 2010.
Households usually add to their saving as they age, while income and wealth rise over time with economic growth. If these two patterns apply consistently and proportionately, then one might expect to see, say, a parent generation accumulate $100,000 by the time its members were in their 30s and $300,000 in their 60s, whereas their children might accumulate $200,000 by their 30s and $600,000 by their 60s.
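In code, that stylized pattern (with the same invented round numbers, not our estimates) is just proportional scaling:

```python
# Stylized example: the economy is roughly twice as rich a generation
# later, so a child generation sharing proportionately in growth would
# hold twice its parents' wealth at each age.
growth_factor = 2.0

parent_wealth = {"30s": 100_000, "60s": 300_000}

# Scale each age-specific wealth level by the same factor.
child_wealth = {age: amount * growth_factor for age, amount in parent_wealth.items()}
print(child_wealth)  # {'30s': 200000.0, '60s': 600000.0}
```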
This normal pattern no longer holds for the younger among us. However, this reversal didn’t just start with the Great Recession; it seems to have begun even before the turn of the century. The young increasingly have been left behind.
Potential causes are many. The Great Recession hit housing hard, but it particularly affected the young, who were more likely to have the largest balances on their loans and the least equity relative to their home values. If a house value fell 20 percent, a younger owner with 20 percent equity would lose 100 percent in housing net worth, whereas an older owner with the mortgage paid off would witness a drop of only 20 percent.
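The leverage arithmetic behind that example can be made explicit. A minimal sketch (the function name and dollar amounts are mine, chosen for illustration):

```python
def housing_equity_loss(home_value, equity_share, price_drop):
    """Fraction of housing net worth wiped out when prices fall,
    assuming the mortgage balance itself is unchanged."""
    equity_before = home_value * equity_share
    mortgage = home_value * (1 - equity_share)
    equity_after = home_value * (1 - price_drop) - mortgage
    return (equity_before - equity_after) / equity_before

# Young owner: 20% equity, 20% price drop -> entire housing net worth gone.
print(housing_equity_loss(200_000, 0.20, 0.20))            # 1.0
# Older owner with the mortgage paid off -> loses only the 20% price drop.
print(round(housing_equity_loss(200_000, 1.00, 0.20), 4))  # 0.2
```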
As for the stock market, it has provided very low returns over recent years, but those who hung on through the Great Recession had most of their net worth restored to pre-recession values. Bondholders usually came out ahead by the time the recession ended as interest rates fell and underlying bonds often increased in value. Also making out well were those with annuities from defined benefit pension plans and Social Security, whose values increase when interest rates fall (though the data noted above exclude those gains in asset values). Older generations hold a much higher percentage of their portfolios in assets that have recovered or appreciated since the Great Recession.
As I mentioned earlier, however, the tendency for lesser wealth accumulation among the younger generations has been occurring for some time, so the special hit they took in the Great Recession leaves out much of the story. Here we must search for other answers to the question of why the young have been falling behind. Likely candidates for their relatively worse status, many of which are correlated, include
- a lower rate of employment when in the workforce;
- delayed entry into the workforce and into periods of accumulating saving;
- reduced relative pay, partly because, for the first time, they hold no edge in educational attainment over past generations;
- their delayed family formation, usually a harbinger and motivator of thrift and homebuilding;
- lower relative minimum wages; and
- higher shares of compensation taken out to pay for Social Security and health care, with less left over to save.
When it comes to conventional wisdom and media attention to distributional issues, there’s a tendency simply to attribute any particular disparity, such as the young falling behind in wealth holdings, to the growth in wealth inequality in society. But the two need not be correlated. Disparities can grow within both younger and older generations, without the young necessarily falling behind as a group.
Whatever the causes, we should also remember that public policy now places increased burdens on the young, whether in ever-higher interest payments on the federal debts they will inherit or the political exemption of older generations from paying for their underfunded retirement and health benefits. At the same time, state and local governments have given education lower priority in their budgets; pension plans for government workers now grant reduced and sometimes zero net benefits to new, younger hires; and homeownership subsidies post-recession increasingly favor the haves over the more risky have-nots.
Maybe, more than just maybe, it’s time to think about investing in the young.
In last week’s State of the Union speech, President Obama put great emphasis on expanding early childhood education. He’s not alone in recognizing the vital role of education as the launching pad for 21st century growth. George W. Bush wanted to be known as the “education president,” and so did his father, George H.W. Bush.
Many governors have similar aspirations. Jerry Brown, for instance, has gotten headlines for his efforts to restore the California university system to its former high status. State support for higher education has fallen dramatically there, particularly as a share of the budget and of Californians’ incomes but also in real terms. Brown even supported a tax increase to try to reverse this trend.
While I strongly support these types of effort, right now pro-education governors and the president are fighting a losing battle. Their new initiatives merely slow their retreat before a health cost juggernaut.
California isn’t much different from many other states. The college bound and their parents witness this declining state support in the form of ever-rising costs and student debt. Less recognized is the fall in academic rankings of the nation’s leading public universities, such as many of the formerly extolled California universities and my own alma mater, the University of Wisconsin–Madison.
State support of education hasn’t just declined at postsecondary schools. In recent years, legislators have assigned K–12 education smaller shares of state budgets as well. During the recession, teachers were laid off and not replaced in many states. Efforts to expand early childhood education have also stalled, although the president’s initiative may give it some temporary momentum.
Federal spending policies only reinforce the longer-term anti-education trend. An annual Urban Institute study on the children’s budget suggests future continual declines in total federal support for education as long as current policies and laws hold up.
Education spending will continue to decline as long as health costs keep rising rapidly and eating up so much of the additional government revenues that accompany economic growth. The figure below, prepared by National Governors Association (NGA) Executive Director Dan Crippen and presented by his deputy, Barry Anderson, at a recent National Academy of Social Insurance conference, tells much of the state story: health costs essentially squeeze out almost everything else.
Fiscal 2011 data based on enacted budgets; fiscal 2012 data based on governors’ proposed budgets
Source: National Association of State Budget Officers, as presented by Dan Crippen, National Governors Association
These rising health costs don’t just place a squeeze on government budgets; they also are one source of the paltry growth in median household cash income over recent decades.
Within states, health costs show up primarily in the Medicaid budget. As the NGA numbers demonstrate, recent federal health reform did little and is expected to do little to control these state costs, despite large, mainly federally financed subsidies for expanding the number of people eligible for benefits.
With populations aging, state and federal governments now also face demographic pressures to increase their health budgets. Large shares of the Medicaid budget go for long-term and similar support for the elderly and the disabled. This budgetary threat also extends to revenues as larger shares of the population retire, earn less, and pay fewer taxes.
The next time someone tells you that we should wait another ten years to control health costs because we’ll be so much smarter and less partisan then, remind him or her that this procrastinating implicitly advocates further zeroing out state and federal spending on education—and the children’s budget more generally. Presidents and governors will never succeed with their education initiatives until they stop the health cost juggernaut in its tracks.
In theory, a household may be eligible for a broad range of government supports. Some are universally available, such as earned income tax credits and SNAP (formerly called food stamps) for a household with children if earnings are low enough. (See a previous short on this subject.) Others are only available to some people. For instance, governments establish waiting lists for programs like rental housing subsidies and limit the number of years of participation in the traditional welfare program, now called Temporary Assistance for Needy Families, or TANF.
The figure below assumes a single parent with two children is receiving almost all these benefits, an extreme case. It includes the more universally available programs, like SNAP. It also assumes the availability of the new Exchange subsidy provided by health reform. Benefits add up to close to $27,000 when this household does not work and fall to about $8,000 as earnings increase to $40,000. Note that the graph does not take into account free child care support or income and Social Security taxes. When these are added to other benefit reductions, the household can sometimes even lose net income by earning more.
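A back-of-the-envelope check on those figures shows the implicit marginal “tax” from benefit reductions alone (a rough summary calculation, not the study’s actual program-by-program schedules):

```python
# Figures from the text: roughly $27,000 in benefits at zero earnings,
# falling to about $8,000 at $40,000 of earnings.
benefits_at_zero = 27_000
benefits_at_40k = 8_000
earnings_gain = 40_000

benefit_loss = benefits_at_zero - benefits_at_40k
implicit_rate = benefit_loss / earnings_gain

# Benefit phase-outs alone act like a 47.5% marginal tax rate,
# before income and Social Security taxes are layered on top.
print(f"{implicit_rate:.1%}")  # 47.5%
```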
For further detail see my testimony before the House Subcommittees on Human Resources and Select Revenue Measures on June 27, 2012, “Marginal Tax Rates, Work, and the Nation’s Real Tax System.”
Many government programs automatically grant eligibility to all families with children, depending only on their income. As their incomes increase, however, these families often, but not always, receive fewer benefits. Some reductions operate on a schedule: earn $1 more, get 30 cents less in benefits. Medicaid provides eligibility up to a given income level, then denies eligibility when one more dollar is earned (though usually with a delay). The dependent exemption is available only to those owing taxes, and it is removed by the alternative minimum tax only at high income levels.
How do these programs interact?
The figure below considers a single parent household with children and shows how these various benefits vary as the household’s earnings increase. Because every household with children, yours and mine included if we are raising children, is eligible for these programs when its income falls in the right ranges, all of us can be said to belong to this system of benefits and benefit reductions. For instance, as income increases from $10,000 to $40,000, our household would lose most earned income tax credits, SNAP (formerly known as food stamps), and much Medicaid, though under health reform other health subsidies would still be available. Note that in addition to these losses of benefits, direct tax rates from income and Social Security taxes would apply, though they are not shown here. For further detail see my testimony before the House Subcommittees on Human Resources and Select Revenue Measures on June 27, 2012, “Marginal Tax Rates, Work, and the Nation’s Real Tax System.” My next short describes the extreme welfare case.
Recent newspaper articles have highlighted autism studies that lean toward genetic causes on the one hand and environmental on the other. One notes correlations with the age of fathers and the genetic mutations that we all inherit but that increase with a father’s age. Another suggests that we have a weakened resistance to germs because we aren’t exposed to as many in our cleaner, less outdoor society. Most of us are also familiar with past studies that failed to find evidence for the popular thesis that immunizations given to young children increase the probability of autism.
These autism studies are mere examples of the many types of epidemiological research that try to investigate outbreaks of disease, assess exposure risks, or figure out why certain populations seem to be more or less immune to various health threats. The research often looks for both good and bad exceptions to averages. Malcolm Gladwell’s introduction to his popular book Outliers, for instance, points out the “Roseto Mystery”: the studies by Stewart Wolf and John Bruhn on why people living in Roseto, Pennsylvania, have relatively fewer heart attacks and live longer than those living elsewhere.
Progress? Yes. Yet someday these one-off studies will be likened to the late Middle Ages when it comes to medical science research. Not for their conclusions, but for the years and, in some cases, decades of ex post data gathering required before any conclusions are reached.
Imagine a different world, in which data on these populations had already been transferred through electronic health records to the Centers for Disease Control (CDC) or a similar agency. In the world of big data, one doesn’t always work from casual observation to hypothesis to painstaking data gathering—sometimes guessing at the right sample populations to begin following, perhaps for years and decades into the future. In this imagined world, much data on them and on many comparison populations would already have been gathered.
Research, of course, is always somewhat haphazard. You never know what you are going to find, and when you find it, you need to determine whether “it” is genuine or an anomaly. But with large amounts of data already available, the odds of finding “it” and proving “it” are magnified.
In this new world, research could also proceed from computer-generated detections of correlations to hypothesis and theory, rather than the other way around—in some ways reversing the traditional methodology of modern science from Descartes onward. Thus, correlations at times are found even when not originally hypothesized, and discoveries may abound. Although some relationships may simply reflect random chance—flip a coin enough times and heads will eventually pop up 20 times in a row—rechecking is easy by testing different subsets of big data sets.
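A small simulation illustrates both halves of that claim: screen enough random variables and some will look related by chance, yet rechecking against fresh data weeds most of them out. (The thresholds and sample sizes below are arbitrary choices of mine.)

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def correlation(xs, ys):
    """Pearson correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 50
outcome = [random.gauss(0, 1) for _ in range(n)]

flagged = confirmed = 0
for _ in range(1000):
    candidate = [random.gauss(0, 1) for _ in range(n)]
    if abs(correlation(candidate, outcome)) > 0.3:  # looks "significant"
        flagged += 1
        # Recheck on an independent sample -- a different "subset" of data.
        fresh_outcome = [random.gauss(0, 1) for _ in range(n)]
        fresh_candidate = [random.gauss(0, 1) for _ in range(n)]
        if abs(correlation(fresh_candidate, fresh_outcome)) > 0.3:
            confirmed += 1

# Dozens of purely random variables get flagged by chance;
# almost none survive the recheck on fresh data.
print(flagged, confirmed)
```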
With so many relationships to be examined, whether with traditional or new methodology, new understandings can proliferate, as well as quicker rejection of hypotheses that cannot be substantiated. For autism, for instance, we would know much more quickly about its prevalence in different geographic regions with different environmental exposures and about the effectiveness of various interventions, from diets to drugs to early educational efforts.
Similarly, we would uncover much earlier warning signals, whether of a sudden flu epidemic or an increase in the prevalence of Alzheimer’s or heart disease by region, sex, race, or other characteristic.
For several years I was privileged to work with a group of very fine doctors, researchers, lawyers, economists, and other health care experts on the National Committee for Vital and Health Statistics. Its primary interest then––and, to some extent, now––was to expand the use of electronic health records (EHRs).
Many associate electronic health records with better transmission of information from one hospital, doctor, or other health care provider to the next. After Katrina, for instance, we were all appalled at the inability of victims to have their medical records available to those treating them in neighboring jurisdictions.
Others recognize that EHRs make it easier to detect sources of individual health problems. Thanks to EHRs, most pharmacists now get computer-generated alerts about drugs that interact adversely with one another; doctors can plug symptoms into computers that spew out lists of possible causes, including some they might have neglected, forgotten, or never learned.
But, for many of us on the committee, we ultimately hoped to create a world in which much faster, more thorough, and more comprehensive public health research could be performed on the causes and possible cures for disease, malignancies, and chronic health conditions, outbreaks of new health problems, and local or regional stories of failure or success in places like Roseto. How many, when reading a story about a place like Roseto, realize that in today’s world we shouldn’t have to wait decades to accidentally discover such geographical variations?
In a talk I gave several years ago at the National Academies, I argued that we may achieve real progress only when consumers begin to demand these improvements. What if a subset of parents of autistic children demanded that their children’s health records be gathered together at the CDC or some other place? They would work with IT professionals, medical researchers, doctors, and teachers with special knowledge of autism to create common data fields. With enough participants, data provided by only a subset of cases would be sufficient for some research.
Add to these parents of autistic children the children of parents with Alzheimer’s or simply people like me who know the auto-immune problems that my children could have inherited from both sides of the family. What if we were to rank our doctors and their practices by how well they participate in such shared data gathering? What if some foundations helped organize these consumers?
In the end, organizing consumers so they can demand the possible may be more important than all the money in the world, which is what we seem to be spending on health care already without the progress we can and should be making.
We are on the cusp of great possibilities in health research, a scientific revolution of sorts. Big data, electronic health records, and government committees provide some of the wherewithal, but we’ve got to make the leap.
The Urban Institute recently released Kids’ Share 2012: Report on Federal Expenditures on Children through 2011, by Julia Isaacs, Katherine Toran, Heather Hahn, Karina Fortuny, and me. It looks comprehensively at trends in federal spending and tax expenditures on children over the past 50 years. This sixth annual report is well worth a look if you are at all interested in how children fare in the federal budget.
In 2011, federal outlays on children fell by $2 billion, dropping from $378 billion in 2010 to $376 billion in 2011. This is the first time spending on children has fallen since the early 1980s.
However, children’s share of the spending has gone up and down over the last fifty years. Federal budget outlays on children as a percent of the domestic budget have declined from 20 percent in 1960 to 15 percent in 2011. Spending on children has not kept pace with growth in government spending over the last fifty years.
In the future, spending on children is expected to further decline, driven by budget pressures, which are strongest on the very types of programs on which kids rely: domestic discretionary programs like education that, unlike entitlements, do not grow automatically but require congressional funding each year. From 2011 to 2022, federal outlays are projected to grow by almost $1 trillion, but children gain almost nothing from this growth. In comparison, the nonchild portions of Social Security, Medicare, and Medicaid are projected to claim 91 percent of the increase.
As a result, children’s spending is projected to fall sharply as a share of the economy, from 2.5 percent of GDP in 2011 to 1.9 percent in 2022, below pre-recession levels. In 2017, Washington will start spending more on interest payments than on children.
The projected decline in investment in children contrasts with increased investment in seniors. Their starting points also differ: in per-person terms in 2008, the federal government spent $3,822 on children and $26,355 on the elderly (in 2011 dollars). Take into account state and local spending, and a child on average still gets only about 45 percent as much as an elderly person.
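The per-person arithmetic in those figures works out as follows:

```python
# Per-person federal spending in 2008, in 2011 dollars (from the text).
per_child = 3_822
per_elderly = 26_355

# Federal spending alone: a child receives about a seventh as much.
print(f"{per_child / per_elderly:.1%}")  # 14.5%
# Adding state and local spending (chiefly schools) narrows the gap,
# but a child still gets only about 45 percent as much, per the text.
```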
The decrease in emphasis on children is part of a broader worrying trend that increasingly crimps investment, budgetary flexibility, and choices for the future.