February 06, 2014
A Prime-Aged Look at the Employment-to-Population Ratio
Trying to interpret changes in labor utilization measures such as the employment-to-population ratio is complicated by the fact that they do not refer to the same set of people over time. The age composition of the population is changing, and behavior can vary across and within age cohorts.
This issue is illustrated in a recent New York Fed study of the employment-to-population ratio by Samuel Kapon and Joseph Tracy. This ratio nosedived during the recent recession by about 4 percentage points and has barely budged since.
This measure of labor utilization is the clear laggard on any labor market recovery dashboard. But the authors show that it is not so clear that the employment-to-population ratio is really so far from where it should be, once you control for the fact that employment rates tend to be lower for younger and older people and that the age composition of the population has shifted over time. This idea is similar to the one used to estimate the trend labor force participation rate in this Chicago Fed study by Daniel Aaronson, Jonathan Davis, and Luojia Hu. The issue of controlling for dominant demographic trends is one of the reasons we at the Atlanta Fed decided not to feature either the overall employment-to-population ratio or the overall labor force participation rate in our Labor Market Spider Chart.
A simple, and admittedly crude, alternative to computing the demographically adjusted employment-to-population ratio trend is to look at a segment of the population that is on a relatively flat part of the employment (or participation) rate curve. A common standard for this is the so-called prime-aged population (people aged 25 to 54). These individuals are less likely to be making retirement decisions than older individuals and are less likely to be making schooling decisions than younger people. Of course, this approach doesn't control for within-cohort factors like educational differences.
So what do we find? The prime-aged employment-to-population ratio declined almost 5 percentage points between the end of 2007 and 2009 (versus 4 percentage points overall) and since then has recovered about 25 percent of that decline. Using the end of 2007 as a reference, the Kapon and Tracy trend estimate has declined about 1.7 percentage points, which implies that the overall employment-to-population ratio, simply by holding steady, has closed about 40 percent of its gap relative to trend.
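The retracement arithmetic above can be sketched in a few lines. The numbers are illustrative values taken from or implied by the text (the 76.0 percent current prime-age reading is inferred from the roughly 25 percent recovery cited, not pulled from official statistics):

```python
# Generic retracement calculation: share of a decline recovered so far.
# Illustrative sketch only, using figures cited in the post.

def retraced_share(pre, trough, current):
    """Fraction of the decline from `pre` to `trough` recovered by `current`."""
    return (current - trough) / (pre - trough)

# Prime-age employment-to-population ratio: roughly 79.7% at the end of
# 2007 and 74.8% at the end of 2009; a current reading near 76.0% implies
# about a quarter of the drop has been retraced.
print(round(retraced_share(79.7, 74.8, 76.0), 2))  # → 0.24

# Overall ratio versus trend: treat end-2007 as 4 points above the trough.
# The ratio then held flat while the estimated trend itself fell about
# 1.7 points, so the gap to trend closed by roughly 1.7 / 4.
print(round(retraced_share(4.0, 0.0, 1.7), 2))  # → 0.42
```

The same one-line calculation underlies the "percent recovered" figures quoted throughout these posts.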
Then what does the analysis say about labor utilization in the wake of the recession? Once demographic factors are controlled for, both aforementioned measures indicate that labor-resource utilization has improved relative to trend. In fact, as Kapon and Tracy note, the relative improvement would be even greater if you believed that employment was above trend before the recession.
By John Robertson, a vice president and senior economist in the Atlanta Fed's research department
January 31, 2014
A Brief Interview with Sergio Rebelo on the Euro-Area Economy
Last month, we at the Atlanta Fed had the great pleasure of hosting Sergio Rebelo for a couple of days. While he was here, we asked Sergio to share his thoughts on a wide range of current economic topics. Here is a snippet of a Q&A we had with him about the state of the euro-area economy:
Sergio, what would you say was the genesis of the problems the euro area has faced in recent years?
The contours of the euro area’s problems are fairly well known. The advent of the euro gave peripheral countries—Ireland, Spain, Portugal, and Greece—the ability to borrow at rates that were similar to Germany's. This convergence of borrowing costs was encouraged through regulation that allowed banks to treat all euro-area sovereign bonds as risk free.
The capital inflows into the peripheral countries were not, for the most part, directed to the tradable sector. Instead, they financed increases in private consumption, large housing booms in Ireland and Spain, and increases in government spending in Greece and Portugal. The credit-driven economic boom led to a rise in labor costs and a loss of competitiveness in the tradable sector.
Was there a connection between the financial crisis in the United States and the sovereign debt crisis in the euro area?
Simply put, after Lehman Brothers went bankrupt, we had a sudden stop of capital flows into the periphery, similar to that experienced in the past by many Latin American countries. The periphery boom quickly turned into a bust.
What do you see as the role for euro area monetary policy in that context?
It seems clear that more expansionary monetary policy would have been helpful. First, it would have reduced real labor costs in the peripheral countries. In those countries, the presence of high unemployment rates moderates nominal wage increases, so higher inflation would have reduced real wages. Second, inflation would have reduced the real value of the debts of governments, banks, households, and firms. There might have been some loss of credibility on the part of the ECB [European Central Bank], resulting in a small inflation premium on euro bonds for some time. But this potential cost would have been worth paying in return for the benefits.
And did this happen?
In my view, the ECB did not follow a sufficiently expansionary monetary policy. In fact, the euro-area inflation rate has been consistently below 2 percent, and the euro is relatively strong when compared to a purchasing-power-parity benchmark. The euro area turned to contractionary fiscal policy as a panacea. There are good theoretical reasons to believe that—when the interest rate remains constant, so that the central bank does not cushion the fall in government spending—the multiplier effect of government spending cuts can be very large. See, for example, Gauti Eggertsson and Michael Woodford, “The Zero Bound on Interest Rates and Optimal Monetary Policy,” and Lawrence Christiano, Martin Eichenbaum, and Sergio Rebelo, “When Is the Government Spending Multiplier Large?”
Theory aside, the results of the austerity policies implemented in the euro area are clear. All of the countries that underwent this treatment are now much less solvent than in the beginning of the adjustment programs managed by the European Commission, the International Monetary Fund, and the ECB.
Bank stress testing has become a cornerstone of macroprudential financial oversight. Do you think the stress tests helped stabilize the situation in the euro area during the height of the crisis in 2010 and 2011?
No. Quite the opposite. I think the euro-area problems were compounded by the weak stress tests conducted by the European Banking Authority in 2011. Almost no banks failed, and almost no capital was raised. Banks largely increased their capital-to-asset ratios by reducing assets, which resulted in a credit crunch that added to the woes of the peripheral countries.
But we’re past the worst now, right? Is the outlook for the euro-area economy improving?
After hitting the bottom, a very modest recovery is under way in Europe. But the risk that a Japanese-style malaise will afflict Europe is very real. One useful step on the horizon is the creation of a banking union. This measure could potentially alleviate the severe credit crunch afflicting the periphery countries.
Thanks, Sergio, for this pretty sobering assessment.
By John Robertson, a vice president and senior economist in the Atlanta Fed’s research department
Editor’s note: Sergio Rebelo is the Tokai Bank Distinguished Professor of International Finance at Northwestern University’s Kellogg School of Management. He is a fellow of the Econometric Society, the National Bureau of Economic Research, and the Centre for Economic Policy Research.
January 17, 2014
What Accounts for the Decrease in the Labor Force Participation Rate?
Despite the addition of only 74,000 jobs to the economy in December, the unemployment rate dropped significantly—from 7 percent to 6.7 percent. The decline came mostly from a decrease in the labor force.
Since the recession began, the labor force participation rate (LFPR) has dropped from 66 percent to 63 percent. Many people have left the labor force because they are discouraged about their job prospects (U.S. Bureau of Labor Statistics data indicate that a little under 1 million people fall into this category). But the primary drivers appear to be an increase in the number of people who are either retired, disabled/ill, or in school.
Certainly, the aging of the population accounts for much of the increase in the retired and disabled/ill categories. Still, there has been a lot of movement over the past few years in the reasons people cite, within age groups, for not participating in the labor force. Knowing why people have left (or delayed entering) the labor force can help us understand how much of the decline is likely to reverse once the economy picks back up and how much is permanent. (For more on this topic, see here, here, and here.)
The chart below shows the distribution of reasons in the fourth quarter of 2013. (Of the people not in the labor force, 1.6 percent indicate they want a job and give a reason for not being in the labor force. They are categorized here as "want a job" only.) Young people are not in the labor force mostly because they are in school. Individuals 25 to 50 years old who are not in the labor force are mostly taking care of their family or house. After age 50, disability or illness becomes the primary reason people do not want to work—until around age 60, when retirement begins to dominate.
How has this distribution changed over the past seven years? For simplicity, I've grouped people by age to show changes over time in the reasons people give for not being in the labor force. However, you can also see an interactive version of the same data without age buckets—and download the data—here.
Of the 12.6 million increase in individuals not in the labor force, about 2.3 million come from people ages 16 to 24, and of that subset, about 1.9 million can be attributed to an increase in school attendance (see the chart below). In particular, young people aged 19 to 24 are more likely to be in school now than before the recession. Among college-age people, the share not in the labor force because they are in school rose from 57 percent to 60 percent. Among people of high school age, the share not in the labor force because they are in school rose from 87 percent to 88 percent.
The number of middle-aged people not in the labor force rose by 1.8 million (or 11 percent), with four main factors driving the increase.* "Wants a Job" increased 546,000 (a 34 percent rise). The "In School" category increased 438,000 (a 38 percent rise). "Disability/Illness" rose 393,000 (an 8 percent rise), and 302,000 more people said they were retired (a 43 percent rise; see the chart below).
Among individuals aged 51 to 60, those not in the labor force increased by 1.6 million (or 16 percent). This increase came almost entirely from the number of people who are disabled or ill, which rose by 1.3 million (a 33 percent increase). Interestingly, the number of retired individuals actually fell by 305,000 between the fourth quarter of 2007 and the fourth quarter of 2010. Since then, the number of retired people within this age group has risen 183,000 but remains 122,000 lower than fourth-quarter 2007 levels. So it seems more people in this age group were delaying retirement instead of leaving early (see the chart below).
About 6.8 million of the 12.6 million increase in those not in the labor force came from the 61-and-over category. Of that increase, 5.3 million more people (a 17 percent increase) are retired, and 1 million more (a 34 percent increase) are not in the labor force because they are disabled or ill. The other categories were little changed (see the chart below).
In total, the number of people not in the labor force rose by 12.6 million (16 percent) from the fourth quarter of 2007 to the fourth quarter of 2013. About 5.5 million more people (a 16 percent increase) are retired, 2.9 million (a 23 percent increase) are disabled or ill, and 2.5 million (a 19 percent increase) are in school. An additional 161,000 are taking care of their family or house, and an additional 99,000 are not in the labor force for other reasons. The fraction who say they want a job has risen the most (32 percent) but has contributed only 11 percent to the total change. The chart below shows the overall contributions by reason to the changes in labor force participation for all age groups since the onset of the recession.
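The contributions by reason cited above can be reproduced with a few lines of arithmetic. This is a hypothetical sketch using only the rounded changes quoted in the text (thousands of people, fourth quarter 2007 to fourth quarter 2013), not the underlying BLS microdata:

```python
# Contribution of each reason to the total rise in the number of people
# not in the labor force, using the rounded changes cited in the post
# (thousands of people).
changes = {
    "Retired": 5500,
    "Disabled/Ill": 2900,
    "In School": 2500,
    "Want a Job": 1400,
    "Family/House": 161,
    "Other": 99,
}

total = sum(changes.values())  # about 12.6 million, as cited
for reason, delta in changes.items():
    print(f"{reason}: {100 * delta / total:.0f}% of the total increase")
```

Dividing the "Want a Job" change by the total reproduces the roughly 11 percent contribution noted above, even though that category grew fastest in percentage terms.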
What further changes can we anticipate? It's hard to say, as many moving parts are at play. Most people currently in school will be approaching the labor market upon graduation. But increased college and graduate school enrollment could augur a permanent shift in the portion of the population who are in school instead of the labor force. We can also expect continued downward pressure on the LFPR from retiring baby boomers as well as boomers who exit the labor force because of disability or illness.
Finally, the portion of people who want a job has increased the most since the recession began and is currently 1.4 million above its prerecession level. People in this category tend to have greater labor force attachment, making them more likely to shift into the labor force. In fact, the number of people in this category has already started to decrease and is down 709,000 from the fourth quarter of 2012.
My Atlanta Fed colleagues Julie Hotchkiss and Fernando Rios-Avila, in their 2013 paper "Identifying Factors behind the Decline in the U.S. Labor Force Participation Rate," looked at a range of LFPR projections for 2015–17 based on different labor market assumptions. Depending on the future strength of the U.S. labor market, the projections vary widely, ranging from a decline of 2.4 percentage points to an increase of 2 percentage points relative to the 2010–12 average of 64.1 percent. So far, more factors are pulling down the LFPR than pushing it up; the latest reading, for December 2013, is already 1.3 percentage points below the 2010–12 average. At that pace, the Hotchkiss and Rios-Avila lower-bound estimate will be reached before the end of 2014, unless the dynamics change as the economy further improves.
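The closing comparison can be checked with back-of-the-envelope arithmetic. A sketch using only the figures quoted above (the variable names are mine, for illustration):

```python
# Back-of-the-envelope check of the LFPR projection range discussed above.
base = 64.1            # 2010-12 average LFPR, percent
lower = base - 2.4     # lower-bound 2015-17 projection
upper = base + 2.0     # upper-bound projection
dec_2013 = base - 1.3  # December 2013 reading, 1.3 points below the average

print(f"Projection range: {lower:.1f}% to {upper:.1f}%")
print(f"December 2013 reading: {dec_2013:.1f}%")
# More than half of the lower-bound decline has already occurred:
print(f"Share of lower-bound decline already realized: {1.3 / 2.4:.0%}")
```

The last line shows why, at the current pace, the lower bound could be reached well before the end of the 2015–17 projection window.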
By Ellyn Terry, an economic policy analysis specialist in the research department of the Atlanta Fed
* I've chosen to break the "middle-age" grouping at age 50 instead of 54 because the probability of retiring has changed in different ways over the past few years for the 25- to 50-year-old group and the 51- to 60-year-old group. See the chart mentioned earlier for more detail.
January 14, 2014
A Football Field of Labor Market Progress
At its December meeting, summarized in the minutes published last week, the Federal Open Market Committee (FOMC) debated the context for tapering the quantitative easing (QE) program of asset purchases and for adjusting the FOMC's forward guidance on the federal funds rate. One of the issues discussed was postrecession progress in the labor market. For example, participants struggled with the reasons for the large drop in labor force participation in recent years:
Some participants cited research that found that demographic and other structural factors, particularly rising retirements by older workers, accounted for much of the recent decline in participation. However, several others continued to see important elements of cyclical weakness in the low labor force participation rate and cited other indicators of considerable slack in the labor market, including the still-high levels of long-duration unemployment and of workers employed part time for economic reasons and the still-depressed ratio of employment to population for workers ages 25 to 54. In addition, although a couple of participants had heard reports of labor shortages, particularly for workers with specialized skills, most measures of wages had not accelerated. A few participants noted the risk that the persistent weakness in labor force participation and low rates of productivity growth might indicate lasting structural economic damage from the financial crisis and ensuing recession.
In a speech on Monday, Atlanta Fed President Dennis Lockhart emphasized similar concerns. He posed the question of whether the improvement in the unemployment rate since the end of the recession, now having recovered about 65 percent of its 2007–09 increase, is overstating the actual progress in the utilization of the nation’s labor resources. President Lockhart observes:
But the unemployment rate is influenced by labor force participation, and there has been a sizable decline in the share of the population in the labor force since 2009. This explains how you could get a big drop in the unemployment rate with anemic job gains, as occurred in December.
The labor force participation rate has fallen from 65.8 percent of the population at the end of 2008 to 62.8 percent in December 2013. On this, President Lockhart notes:
Some of the decline in labor force participation since 2009 is due to the baby boomers retiring, but even among prime-age workers—those aged 25 to 54—the participation rate is down significantly [2.1 percentage points]. This suggests that other factors, such as low prospects of finding a job, are playing a role.
To examine this possibility, we can look at the number of marginally attached workers. These are people who say they are willing to work and have looked for work recently but are not currently looking.
The marginally attached are not counted in the official labor force statistic. During the recession, the number of marginally attached swelled (from around 1.4 million at the end of 2007 to 2.4 million at the end of 2009). Since the end of 2009, the marginally attached rate (as a share of the labor force including marginally attached) has retraced only 12 percent of the recessionary increase. From this, President Lockhart concludes:
It’s accurate to say the country has a large number of people in the so-called “shadow labor force.”
Because the sharp decline in labor force participation is not fully understood, and because the unemployment rate decline conflates declines in participation with employment gains, President Lockhart suggests it is useful to also look at the share of the prime-age population that is employed. Between the end of 2007 and the end of 2009, the employment-to-population rate for this group declined from 79.7 to 74.8 percent. Since 2009, employment gains for the core of the workforce have advanced only 27 percent toward the prerecession peak (for the entire population over age 16, the recovery is essentially zero). Variations on this theme can be seen here and here.
Usually, the employment-to-population rate and the unemployment rate move in lockstep (because labor force movements are very gradual). But that has not been the case during this recovery.
In addition to unemployment, President Lockhart highlights the issue of underemployment:
Many Americans are working fewer hours than they would prefer because their employers are offering them only part-time work. The share of workers who are involuntarily working part-time doubled during the recession and has moved only about 30 percent lower since the recovery began.
So, on the question of whether the unemployment rate decline has overstated actual progress in labor utilization, Lockhart says yes:
To sum up, these comparisons of employment data suggest that the labor market is not as healthy as the improved unemployment rate might suggest. The unemployment rate drop may overstate progress achieved.
The Atlanta Fed has been featuring the labor market spider chart tool on its website as a way to track relative progress in a number of labor market indicators since the end of the recession. For the purposes of President Lockhart’s speech, the relative improvement in various indicators of the rate of labor utilization was presented graphically in the form of yardage gains from the goal line of a football field. The changes can be seen here (the data are from the U.S. Bureau of Labor Statistics and Atlanta Fed calculations). The idea is that the labor utilization “team” was driven back to its own goal line from the end of 2007 through the end of 2009, and the graphic shows how many yards (percent) the team has recovered as of the January 10 labor report. (The use of a football field image is perhaps appropriate, given that the recent BCS championship game featured two teams from the Sixth District.)
President Lockhart also suggests a link between labor market slack and the weak pricing trends we have experienced in recent years:
It’s worth noting that wage and salary income growth remains weak. I hear very little from business contacts about upward wage pressures except in a few specialized job categories. Wage pressures usually accompany growing demand and rising inflation but, although demand appears to be growing, inflation is very soft.
In fact, looking at the recent disinflation apparent in virtually all consumer price statistics relative to the FOMC’s longer-run objective, President Lockhart acknowledges the risk of an inflation “safety”:
...I think inflation will stabilize and begin to move back in the direction of the FOMC’s 2 percent objective as the economy gathers momentum. So I’m interpreting the soft inflation numbers as a risk signal. Through the lens of prices, the economy could be weaker than we currently believe.
By John Robertson, a vice president and senior economist in the Atlanta Fed’s research department
January 08, 2014
Money as Communication: A New Educational Video by the Atlanta Fed
Roughly a year ago, the Federal Open Market Committee (FOMC) switched from date-based forward guidance on the federal funds rate path to guidance based on economic conditionality. The idea, as Chairman Bernanke put it in his post-FOMC press conference, is that "[b]y tying future monetary policy more explicitly to economic conditions, this formulation of our policy guidance should also make monetary policy more transparent and predictable to the public."
Now, on the one hand, you can't be any more clear than to say that the policy interest rate will remain near zero until such-and-such a date. But if you really want to know the "reaction function" that guides monetary policy decisions, date-based guidance isn't going to speak very clearly to that question. Instead, you would probably want to know the economic conditions that would warrant the FOMC's decision to adjust the policy rate.
Let me suggest that clear communication is one of the foundations of good monetary policy because it's one of the foundational characteristics of good money.
A textbook description of money is usually just a recitation of its functions—it acts as a store of value, a medium of exchange, and a unit of account. This definition of money is a rather hollow one (as Minneapolis Fed President Narayana Kocherlakota noted back in his academic days) because it tells us only what money does but doesn't speak to the core issue—what is the problem that money solves?
The "unit of account" function, in particular, gets little development in the textbooks and has generally not carried much weight in the academic literature on the theory of money. (There are a few exceptions, like this NBER working paper by Matthias Doepke and Martin Schneider.) But if people are going to communicate with one another about value, those communications are going to be most effective if done using some standardized metric—and that's where money comes in. As a "unit of account," our money is how we communicate about value. It can be a physical thing, like a particular commodity, or it can be an abstract concept, like the broad purchasing power of a medium of exchange.
But this isn't to imply that all things are equally up to the job of being a good unit of account. Many economists, beginning with Adam Smith, have been critical of commodity-based monetary systems in this regard. In Congressional testimony in 1922 about stabilizing the purchasing power of our money, famed economist Irving Fisher argued that while gold may have been chosen as our money because it was a good medium of exchange, it had proven to be a poor choice as a unit of account on which contracts could be negotiated. Indeed, he argued for a system where the value of money was fixed in terms of a statistical index of its broad purchasing power, a system certainly similar in spirit to the one the Federal Reserve pursues today:
Is it not absurd to have a dollar also a unit in weight, when it is not intended to measure weight, but is intended to measure purchasing power. It is used in commerce in buying and selling, by debtor and creditor for lending and repaying; and we propose that the repayment shall be just. What does that mean? It does not mean that you shall return a given weight of gold or a given weight of anything; it means that you shall return to the lender something that is a just equivalent. Value is involved in there, and value is statistically measured by an index number—average purchasing power.
In other words, it's essential that the unit of account conveys value so that the units expressed in trade, contracts, and financial accounts are both meaningful and durable. We recently produced a simple four-minute video on the subject. Give it a view and let us know what you think. We're big on getting our communications right.
By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department
December 27, 2013
Is the Labor Force Participation Rate about to Fall Again?
A few posts back my Atlanta Fed colleagues Tim Dunne and Ellie Terry offered up our latest contribution to the ongoing head-scratching over the rather spectacular decline in U.S. labor force participation (LFP) since the onset of the Great Recession in December 2007. “Rather spectacular” in this case means a fall in the participation rate from 66 percent (of the working-age population either working or actively seeking work) to the 63 percent level reported for November. In people terms, that 3 percentage point decline represents a reduction of roughly 7.4 million participants in the U.S. labor market.
Like many other analysts, Dunne and Terry find that the drop in labor force participation appears to come from a combination of demographic factors—mainly the aging of the population—and other causes not specifically identified but generally interpreted to be associated with the weak economy in one way or another.
Two developing stories suggest the LFP may not be leaving the spotlight just yet. The first is this one, from USA Today:
Some 1.3 million Americans are set to lose their unemployment benefits Saturday...
Federal emergency benefits will end when funds run out for a program created during the recession to supplement the benefits that states provide. The cutoff will initially affect 1.3 million people, but 1.9 million more will lose benefits by mid-2014 when their 26 weeks of state paychecks run out, according to the National Employment Law Project.
What will those 1.3 million Americans do when their benefits run dry? According to a recent study by Princeton University’s Henry Farber and the San Francisco Fed’s Robert Valletta—also presented at a conference hosted here at the Atlanta Fed in October—on balance, the affected individuals are likely to leave the labor force:
We examined the impact of the unprecedented extensions of UI [unemployment insurance] benefits in the United States over the past few years on unemployment dynamics and duration and compared their effects with the extension of UI benefits in the milder recession of the early 2000s. We found small but statistically significant reductions in unemployment exits and small increases in unemployment durations arising from both sets of UI extensions. The magnitude of these overall effects is similar across the two episodes...
We find that the effect on exit from unemployment occurs primarily through a reduction in labor force exits rather than through exit to employment (job finding). This is important because it implies that extended benefits do not delay the time to re-employment substantially and so do not have first-order efficiency effects. The major effect of extended benefits is redistributive, providing income to job losers who would have exited the labor force otherwise (consistent with Card et al. 2007). [link mine]
In other words, if a significant decline in unemployment benefits comes to pass, we may well see another bump downward in the labor force participation rate. Although a decline in LFP associated with the expiration of extended UI benefits would fall in Dunne and Terry’s nondemographic category, the Farber and Valletta results suggest that we should interpret any such decline as structural. And structural in this case means not directly amenable to correction by policies aimed at stimulating spending.
The other important piece of recent news, however, is this one, which you probably heard about:
According to the Bureau of Economic Analysis, real gross domestic product—output produced in the United States—actually grew at a rate of 4.1% in the third quarter, up from BEA’s previous estimate of a 3.6% growth rate. The final results are also a gain over the second quarter’s 2.5% GDP growth.
Furthermore, as noted at Calculated Risk, the good news doesn’t stop there:
A little Christmas cheer...
Macroeconomic Advisers...[raised] its estimate for fourth-quarter growth. It now forecasts gross domestic product to expand at an annualized rate of 2.6% in the final three months of the year, up three-tenths of a percentage point from an earlier estimate.
And Goldman Sachs has increased their Q4 GDP tracking to 2.4% annualized growth.
That all adds up to pretty decent growth in the second half of the year. If it persists, and the long-awaited acceleration in the economic expansion finally arrives, better labor market conditions should follow. And if the six-year fall in LFP has in large measure been driven by weak economic conditions, we should at least see a pause in participation declines as economic activity picks up. Actually, we should probably see an outright increase.
The next several quarters, then, may well provide some clarity on the persistent question of whether the large recent exodus of Americans from the labor force has been the result of a lackluster economy, and on whether efforts to stem that exodus rest on a correct diagnosis of the underlying cause.
By Dave Altig, executive vice president and research director at the Atlanta Fed
December 23, 2013
Goodwill to Man
By pure coincidence, two interviews with Pennsylvania State University professor Neil Wallace have been published in recent weeks. One is in the December issue of the Federal Reserve Bank of Minneapolis’ excellent Region magazine. The other, conducted by Chicago Fed economist Ed Nosal and yours truly, is slated for the journal Macroeconomic Dynamics and is now available as a Federal Reserve Bank of Chicago working paper.
If you have any interest at all in the history of monetary theory over the past 40 years or so, I highly recommend to you these conversations. As Ed and I note of Professor Wallace in our introductory comments, very few people have such a coherent view of their own intellectual history, and fewer still have lived that history in such a remarkably consequential period for their chosen field.
Perhaps my favorite part of our interview was the following, where Professor Wallace reveals how he thinks about teaching economics, and macroeconomics specifically (link added):
If we were to construct an economics curriculum, independent of where we’ve come from, then what would it look like? The first physics I ever saw was in high school... I can vaguely remember something about frictionless inclined planes, and stuff like that. So that is what a first physics course is; it is Newtonian mechanics. So what do we have in economics that is the analogue of Newtonian mechanics? I would say it is the Arrow-Debreu general competitive model. So that might be a starting point. At the undergraduate level, do we ever actually teach that model?
[Interviewers] That means that you would not talk about money in your first course.
That is right. Suppose we taught the Arrow-Debreu model. Then at the end we’d have to say that this model has certain shortcomings. First of all, the equilibrium concept is a little hokey. It’s not a game, which is to say there are no outcomes associated with other than equilibrium choices. And second, where do the prices come from? You’d want to point out that the prices in the Arrow-Debreu model are not the prices you see in the supermarket because there’s no one in the model writing down the prices. That might take you to strategic models of trade. You would also want to point out that there are a lot of serious things in the world that we think we see that aren’t in the model: unemployment, money, and [an interesting notion of] firms aren’t in the Arrow-Debreu model. What else? Investing in innovation, which is critical to growth, isn’t in that model. Neither is asymmetric information. The curriculum, after this grounding in the analogue of Newtonian mechanics, which is the Arrow-Debreu model, would go into these other things. It would talk about departures from that theory to deal with such things; and it would describe unsolved problems.
So that’s a vision of a curriculum. Where would macro be? One way to think about macro is in terms of substantive issues. From that point of view, most of us would say macro is about business cycles and growth. Viewed in terms of the curriculum I outlined, business cycles and growth would be among the areas that are not in the Arrow-Debreu model. You can talk about attempts to shove them in the model, and why they fall short, and what else you can do.
Of the many things that I have learned from Professor Wallace, this one comes back to me again and again: Talk about how to get the things in the model that are essential to dealing with the unsolved problems, honestly assess why they fall short, and explore what else you can do. To me, this is not only a message of good science. It is one of intellectual generosity, the currency of good citizenship.
I was recently asked whether I align with “freshwater” or “saltwater” economics (roughly, I guess, whether I think of myself as an Arrow-Debreu type or a New Keynesian type). There are many similar questions that come up. Are you a policy “hawk” or a policy “dove”? Do you believe in old monetarism (willing to write papers with reduced-form models of money demand) or new monetarism (requiring, for example, some explicit statement about the frictions, or deviations from Arrow-Debreu, that give rise to money’s existence)?
What I appreciate about the Wallace formulation is that it asks us to avoid thinking in these terms. There are problems to solve. The models that we bring to those problems are not true or false. They are all false, and we—in the academic world and in the policy world—are on a common journey to figure out what we are missing and what else we can do.
It is deeply misguided to treat models as if they are immutable truths. All good economists appreciate this intellectually. And yet an awful lot of energy is wasted, especially in the blogosphere, on casting aspersions on those who are perceived to be seeking answers within other theoretical tribes.
Some problems are well-suited to Newtonian mechanics, some are not. Some amendments to Arrow-Debreu are useful; some are not. And what is well-suited or useful in some circumstances may well be ill-suited or even harmful in others. Perhaps if we all acknowledge that none of us knows which is which 100 percent of the time, we can make just a little more progress on all those unsolved problems in the coming year. At a minimum, we would air our disagreements with a lot more civility.
By Dave Altig, executive vice president and research director at the Atlanta Fed
December 19, 2013
Labor Force Participation Rates Revisited
In an earlier macroblog post, our colleague Julie Hotchkiss examined the decline in labor force participation from the onset of the Great Recession into early 2012, concluding that cyclical factors likely accounted for most of the drop. In this post, we examine how labor force participation has changed since the start of 2012 (and admittedly, we’re much less ambitious in our analysis than Julie). Motivating our analysis, in part, is the observation that much of the recent decline in the labor force participation rate (LFPR) is related to rising retirements (see the November 19 Research Rap by Shigeru Fujita). This is not surprising, as the percentage of individuals aged 65 and older in the population has been increasing sharply over the last half decade. That said, our approach indicates that the LFPR of prime-age workers (ages 25–54) continues to fall, and this is an important source of the overall decline in LFPR in the recent data. Such declines in LFPR in these age categories should be less related to retirement decisions, keeping on the table the possibility that a weak overall labor market remains a key drag on labor force participation.
A straightforward decomposition illustrates that the decline in LFPR among prime-age workers is a major contributor to the overall decline in LFPR. To see this, we separate the change in LFPR into three components: one that measures the change due to shifts in the LFPR within age groups—the within effect; one that measures changes due to population shifts across age groups—the between effect; and one that allows for correlation across the two effects—a covariance term. It works out that the covariance term is always very close to zero, so we will omit discussion of that term here. The analysis breaks the data down into five age groups: 16–24, 25–34, 35–44, 45–54, and 55+.
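As a concrete sketch of the decomposition (with invented shares and rates, not the actual CPS figures), the within effect holds population shares at their initial values, the between effect holds participation rates at their initial values, and the covariance term picks up the interaction. The three terms sum exactly to the overall change:

```python
def decompose_lfpr(shares0, rates0, shares1, rates1):
    """Shift-share decomposition of the change in the overall LFPR.

    shares*: population shares by age group (summing to 1)
    rates*:  labor force participation rates by age group
    Returns (within, between, covariance), which sum exactly to
    the overall change in the LFPR between the two periods.
    """
    within = sum(s0 * (r1 - r0)
                 for s0, r0, r1 in zip(shares0, rates0, rates1))
    between = sum(r0 * (s1 - s0)
                  for s0, s1, r0 in zip(shares0, shares1, rates0))
    cov = sum((s1 - s0) * (r1 - r0)
              for s0, s1, r0, r1 in zip(shares0, shares1, rates0, rates1))
    return within, between, cov

# Illustrative (invented) shares and rates for the five age groups:
# 16-24, 25-34, 35-44, 45-54, 55+
shares0 = [0.160, 0.170, 0.170, 0.190, 0.310]
rates0  = [0.545, 0.820, 0.825, 0.800, 0.402]
shares1 = [0.155, 0.170, 0.165, 0.185, 0.325]
rates1  = [0.550, 0.815, 0.820, 0.795, 0.400]

w, b, c = decompose_lfpr(shares0, rates0, shares1, rates1)
total = (sum(s * r for s, r in zip(shares1, rates1))
         - sum(s * r for s, r in zip(shares0, rates0)))
assert abs((w + b + c) - total) < 1e-12  # the identity is exact
```

Because the identity is exact, the near-zero covariance term is an empirical feature of the data, not a property of the formula.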
The chart presents the decomposition from Q1 2012 to Q3 2013. Over this period, the overall LFPR declined by half a percentage point, from 63.8 percent to 63.3 percent. The blue areas represent the change due to within-age-group effects, and the green areas represent the change due to between-age-group effects. The sum of the bars is equal to the overall change in labor force participation.
Three key results emerge. First, increases in labor force participation for the youngest age group boosted overall labor force participation by 0.075 percentage points. Second, the growing population share of the 55+ age group reduced the overall LFPR by 0.21 percentage points, accounting for roughly 40 percent of the overall decline. Third, labor force participation for prime-age workers continued to fall. The combined within effect for the prime-age groups (25–34, 35–44, and 45–54) reduced the participation rate by 0.28 percentage points, or a little over half of the overall decline in labor force participation. Additional declines in labor force participation were associated with the reduction in population shares of prime-age workers.
From an accounting standpoint, the analysis shows that the fall in the LFPR for prime-age workers is a main contributing factor to the recent decline in labor force participation. Indeed, the LFPR of prime-age workers fell from 81.6 percent to 81.0 percent from Q1 2012 to Q3 2013, with similar declines for both men and women. Given that prime-age workers make up more than half of the population, it is not surprising that the drop in the LFPR for these age groups accounts for a substantial fraction of the overall decline.
To put this in perspective, we present the same decomposition from Q1 2010 to Q4 2011, where the decline in the LFPR is 0.8 percentage point. While the magnitude of the overall change is different, the decomposition results are quite similar. The decline in participation rates for prime-age workers accounts for a little over 60 percent of the overall decline, with a substantial drag from the rise in the share of older workers (accounting for a third of the drop). In short, the changes in participation due to within and between effects over the first two years look quite similar to those of the second two years of the labor market recovery.
A corollary to this analysis is that these sources of decline in labor force participation have allowed the unemployment rate to decline more sharply than expected, given the moderate employment growth observed. We will not take a stand on whether these are “wrong” or “right” reasons for unemployment rate declines. Rather, we note that the patterns observed early in the recovery are still in place (more or less) in the recent data.
By Timothy Dunne, a research economist and policy adviser,
and Ellie Terry, an economic policy analysis specialist, both in the research department of the Atlanta Fed
December 04, 2013
Is (Risk) Sharing Always a Virtue?
The financial system cannot be made completely safe because it exists to allocate funds to inherently risky projects in the real economy. Thus, an important question for policymakers is how best to structure the financial system to absorb these losses while minimizing the risk that financial sector failures will impair the real economy.
Standard theories would predict that one good way of reducing financial sector risk is diversification. For example, the financial system could be structured to facilitate the development of large banks, a point often made by advocates for big banks such as Steve Bartlett. Another, not mutually exclusive, way of enhancing diversification is to create a system that shares risks across banks. An example is the Dodd-Frank Act mandate requiring formerly over-the-counter derivatives transactions to be centrally cleared.
However, do these conclusions based on individual bank stability necessarily imply that risk sharing will make the financial system safer? Is it even relevant to the principal risks facing the financial system? Some of the papers presented at the recent Atlanta Fed conference, "Indices of Riskiness: Management and Regulatory Implications," broadly addressed these questions and others. Other papers discuss the impact of bank distress on local economies, methods of predicting bank failure, and various aspects of incentive compensation paid to bankers (which I discuss in a recent Notes from the Vault).
The stability implications of greater risk sharing across banks are explored in "Systemic Risk and Stability in Financial Networks" by Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. They develop a theoretical model of risk sharing in networks of banks. The most relevant comparison they draw is between what they call a "complete financial network" (maximum possible diversification) and a "weakly connected" network in which there is substantial risk sharing between pairs of banks but very little risk sharing outside the individual pairs. Consistent with the standard view of diversification, the complete networks experience few, if any, failures when individual banks are subject to small shocks, but some pairs of banks do fail in the weakly connected networks. However, at some point the losses become so large that the complete network undergoes a phase transition, spreading the losses in a way that causes the failure of more banks than would have occurred with less risk sharing.
Extrapolating from this paper, one could imagine that risk sharing could induce a false sense of security that would ultimately make a financial system substantially less stable. At first a more interconnected system shrugs off smaller shocks with seemingly no adverse impact. This leads bankers and policymakers to believe that the system can handle even more risk because it has become more stable. However, at some point the increased risk taking leads to losses sufficiently large to trigger a phase transition, and the system proves to be even less stable than it was with weaker interconnections.
While interconnections between financial firms are a theoretically important determinant of contagion, how important are these connections in practice? "Financial Firm Bankruptcy and Contagion," by Jean Helwege and Gaiyan Zhang, analyzes the spillovers from distressed and failing financial firms from 1980 to 2010. Looking at the financial firms that failed, they find that counterparty risk exposure (the interconnections) tends to be small, with no single exposure above $2 billion and the average a mere $53.4 million. They note that these small exposures are consistent with regulations that limit banks' exposure to any single counterparty. They then look at information contagion, in which the disclosure of distress at one financial firm may signal adverse information about the quality of a rival's assets. They find that the effect of these signals is comparable to that found for direct credit exposure.
Helwege and Zhang's results suggest that we should be at least as concerned about separate banks' exposure to an adverse shock that hits all of their assets as we should be about losses that are shared through bank networks. One possible common shock is the likely increase in the level and slope of the term structure as the Federal Reserve begins tapering its asset purchases and starts a process ultimately leading to the normalization of short-term interest rate setting. Although historical data cannot directly address banks' current exposure to such shocks, such data can provide evidence on banks' past exposure. William B. English, Skander J. Van den Heuvel, and Egon Zakrajšek presented evidence on this exposure in the paper "Interest Rate Risk and Bank Equity Valuations." They find a significant decrease in bank stock prices in response to an unexpected increase in the level or slope of the term structure. The response to slope increases (likely the primary effect of tapering) is somewhat attenuated at banks with large maturity gaps. One explanation for this finding is that these banks may partially recover their current losses with gains they will accrue when booking new assets (funded by shorter-term liabilities).
Overall, the papers presented in this part of the conference suggest that more risk sharing among financial institutions is not necessarily always better. Although it may provide the appearance of increased stability in response to small shocks, it may leave the system less robust to larger shocks. The papers also suggest that shared exposures to a common risk are likely to present at least as important a threat to financial stability as interconnections among financial firms, especially as the term structure and the overall economy respond to the eventual return to normal monetary policy. Along these lines, I recently offered some thoughts on how to reduce the risk of large widespread losses due to exposures to a common (credit) risk factor.
By Larry Wall, director of the Atlanta Fed's Center for Financial Innovation and Stability
Note: The conference "Indices of Riskiness: Management and Regulatory Implications" was organized by Glenn Harrison (Georgia State University's Center for the Economic Analysis of Risk), Jean-Charles Rochet (University of Zurich), Markus Sticker, Dirk Tasche (Bank of England, Prudential Regulation Authority), and Larry Wall (the Atlanta Fed's Center for Financial Innovation and Stability).
November 20, 2013
The Shadow Knows (the Fed Funds Rate)
The fed funds rate has been at the zero lower bound (ZLB) since the end of 2008. To provide a further boost to the economy, the Federal Open Market Committee (FOMC) has embarked on unconventional forms of monetary policy (a mix of forward guidance and large-scale asset purchases). This situation has created a bit of an issue for economic forecasters, who use models that attempt to summarize historical patterns and relationships.
The fed funds rate, which usually varies with economic conditions, has now been stuck at near zero for 20 quarters, damping its historical correlation with economic variables like real gross domestic product (GDP), the unemployment rate, and inflation. As a result, forecasts that stem from these models may not be useful or meaningful even after policy has normalized.
A related issue for forecasters of the ZLB period is how to characterize unconventional monetary policy in a meaningful way inside their models. Attempts to summarize current policy have led some forecasters to create a "virtual" fed funds rate, as originally proposed by Chung et al. and incorporated by us in this macroblog post. This approach uses a conversion factor to translate changes in the Fed's balance sheet into fed funds rate equivalents. However, it admits no role for forward guidance, which is one of the primary tools the FOMC is currently using.
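As a hypothetical illustration of that conversion (the factor below is invented for demonstration, not the Chung et al. estimate), a virtual funds rate subtracts a rate-equivalent of balance sheet expansion from the effective rate:

```python
# Hypothetical sketch of a "virtual" fed funds rate. The conversion
# factor is an assumption for illustration only.
CONVERSION = 0.00125  # assumed funds-rate equivalent (pp) per $1 billion

def virtual_fed_funds(effective_rate, asset_purchases_bn):
    """Effective rate minus the rate-equivalent of asset purchases."""
    return effective_rate - CONVERSION * asset_purchases_bn

# Under this assumed factor, a 0.15 percent effective rate combined with
# $600 billion of purchases maps to 0.15 - 0.75 = -0.60 percent.
```

Note that this mapping, whatever the factor, captures only the balance sheet; forward guidance has no representation in it, which is the gap the shadow rate is meant to fill.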
So what's a forecaster to do? Thankfully, Jim Hamilton over at Econbrowser has pointed to a potential patch. However, this solution carries with it a nefarious-sounding moniker—the shadow rate—which calls to mind a treacherous journey deep within the hinterlands of financial economics, fraught with pitfalls and danger.
The shadow rate can be negative at the ZLB; it is estimated using Treasury forward rates out to a 10-year horizon. Fortunately we don't need to take a jaunt into the hinterlands, because the paper's authors, Cynthia Wu and Dora Xia, have made their shadow rate publicly available. In fact, they write that all researchers have to do is "...update their favorite [statistical model] using the shadow rate for the ZLB period."
That's just what we did. We took five of our favorite models (Bayesian vector autoregressions, or BVARs) and spliced in the shadow rate starting in 1Q 2009. The shadow rate is currently hovering around minus 2 percent, suggesting a more accommodative environment than what the effective fed funds rate (stuck around 15 basis points) can deliver. Given the extra policy accommodation, we'd expect to see a bit more growth and a lower unemployment rate when using the shadow rate.
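The splicing step itself is mechanical. A minimal sketch (with invented rate values, since we are not reproducing the published Wu-Xia series) looks like this:

```python
def splice_policy_rate(quarters, fed_funds, shadow, zlb_start="2009Q1"):
    """Use the effective funds rate before zlb_start and the shadow rate
    from zlb_start onward. "YYYYQn" strings compare correctly as text."""
    return [sh if q >= zlb_start else ff
            for q, ff, sh in zip(quarters, fed_funds, shadow)]

quarters  = ["2008Q3", "2008Q4", "2009Q1", "2009Q2"]
fed_funds = [1.94, 0.51, 0.18, 0.18]    # illustrative effective rates
shadow    = [1.94, 0.51, -0.30, -0.75]  # illustrative shadow-rate values

spliced = splice_policy_rate(quarters, fed_funds, shadow)
# spliced == [1.94, 0.51, -0.30, -0.75]
```

The spliced series then replaces the effective funds rate as the policy variable in each BVAR, with no other change to the models.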
Before showing the average forecasts that come out of our models, we want to point out a few things. First, these are merely statistical forecasts and not the forecast that our boss brings with him to FOMC meetings. Second, there are alternative shadow rates out there. In fact, St. Louis Fed President James Bullard mentioned another one about a year ago based on work by Leo Krippner. At the time, that shadow rate was around minus 5 percent, well below Wu and Xia's shadow rate (which was around minus 1.2 percent at the end of last year). Considering the disagreement between the two rates, we might want to take these forecasts with a grain of salt.
Caveats aside, we get a somewhat stronger path for real GDP growth and a lower unemployment rate path, consistent with what we'd expect additional stimulus to do. However, our core personal consumption expenditures inflation forecast still seems to suffer from the dreaded price puzzle. (We Googled it for you.)
Perhaps more important, the fed funds projections that emerge from this model appear to be much more believable. Rather than calling for an immediate liftoff, as the standard approach does, the average forecast of the shadow rate doesn't turn positive until the second half of 2015. This is similar to the most recent Wall Street Journal poll of economic forecasters, and the September New York Fed survey of primary dealers. The median respondent to that survey expects the first fed funds increase to occur in the third quarter of 2015. The shadow rate forecast has the added benefit of not being at odds with the current threshold-based guidance discussed in today's release of the minutes from the FOMC's October meeting.
Moreover, today's FOMC minutes stated, "modifications to the forward guidance for the federal funds rate could be implemented in the future, either to improve clarity or to add to policy accommodation, perhaps in conjunction with a reduction in the pace of asset purchases as part of a rebalancing of the Committee's tools." In this event, the shadow rate might be a useful scorecard for measuring the total effect of these policy actions.
It seems that if you want to summarize the stance of policy right now, just maybe...the shadow knows.
By Pat Higgins, senior economist, and
Brent Meyer, research economist, both of the Atlanta Fed's research department