December 23, 2013
Goodwill to Man
By pure coincidence, two interviews with Pennsylvania State University professor Neil Wallace have been published in recent weeks. One is in the December issue of the Federal Reserve Bank of Minneapolis’ excellent Region magazine. The other, conducted by Chicago Fed economist Ed Nosal and yours truly, is slated for the journal Macroeconomic Dynamics and is now available as a Federal Reserve Bank of Chicago working paper.
If you have any interest at all in the history of monetary theory over the past 40 years or so, I highly recommend to you these conversations. As Ed and I note of Professor Wallace in our introductory comments, very few people have such a coherent view of their own intellectual history, and fewer still have lived that history in such a remarkably consequential period for their chosen field.
Perhaps my favorite part of our interview was the following, where Professor Wallace reveals how he thinks about teaching economics, and macroeconomics specifically (link added):
If we were to construct an economics curriculum, independent of where we’ve come from, then what would it look like? The first physics I ever saw was in high school... I can vaguely remember something about frictionless inclined planes, and stuff like that. So that is what a first physics course is; it is Newtonian mechanics. So what do we have in economics that is the analogue of Newtonian mechanics? I would say it is the Arrow-Debreu general competitive model. So that might be a starting point. At the undergraduate level, do we ever actually teach that model?
[Interviewers] That means that you would not talk about money in your first course.
That is right. Suppose we taught the Arrow-Debreu model. Then at the end we’d have to say that this model has certain shortcomings. First of all, the equilibrium concept is a little hokey. It’s not a game, which is to say there are no outcomes associated with other than equilibrium choices. And second, where do the prices come from? You’d want to point out that the prices in the Arrow-Debreu model are not the prices you see in the supermarket because there’s no one in the model writing down the prices. That might take you to strategic models of trade. You would also want to point out that there are a lot of serious things in the world that we think we see that aren’t in the model: unemployment, money, and [an interesting notion of] firms aren’t in the Arrow-Debreu model. What else? Investing in innovation, which is critical to growth, isn’t in that model. Neither is asymmetric information. The curriculum, after this grounding in the analogue of Newtonian mechanics, which is the Arrow-Debreu model, would go into these other things. It would talk about departures from that theory to deal with such things; and it would describe unsolved problems.
So that’s a vision of a curriculum. Where would macro be? One way to think about macro is in terms of substantive issues. From that point of view, most of us would say macro is about business cycles and growth. Viewed in terms of the curriculum I outlined, business cycles and growth would be among the areas that are not in the Arrow-Debreu model. You can talk about attempts to shove them in the model, and why they fall short, and what else you can do.
Of the many things that I have learned from Professor Wallace, this one comes back to me again and again: Talk about how to get the things in the model that are essential to dealing with the unsolved problems, honestly assess why they fall short, and explore what else you can do. To me, this is not only a message of good science. It is one of intellectual generosity, the currency of good citizenship.
I was recently asked whether I align with “freshwater” or “saltwater” economics (roughly, I guess, whether I think of myself as an Arrow-Debreu type or a New Keynesian type). There are many similar questions that come up. Are you a policy “hawk” or a policy “dove”? Do you believe in old monetarism (willing to write papers with reduced-form models of money demand) or new monetarism (requiring, for example, some explicit statement about the frictions, or deviations from Arrow-Debreu, that give rise to money’s existence)?
What I appreciate about the Wallace formulation is that it asks us to avoid thinking in these terms. There are problems to solve. The models that we bring to those problems are not true or false. They are all false, and we—in the academic world and in the policy world—are on a common journey to figure out what we are missing and what else we can do.
It is deeply misguided to treat models as if they are immutable truths. All good economists appreciate this intellectually. And yet there is an awful lot of energy wasted, especially in the blogosphere, on casting aspersions at those who are perceived to be seeking answers within other theoretical tribes.
Some problems are well-suited to Newtonian mechanics, some are not. Some amendments to Arrow-Debreu are useful; some are not. And what is well-suited or useful in some circumstances may well be ill-suited or even harmful in others. Perhaps if we all acknowledge that none of us knows which is which 100 percent of the time, we can make just a little more progress on all those unsolved problems in the coming year. At a minimum, we would air our disagreements with a lot more civility.
By Dave Altig, executive vice president and research director at the Atlanta Fed
December 19, 2013
Labor Force Participation Rates Revisited
In an earlier macroblog post, our colleague Julie Hotchkiss examined the decline in labor force participation from the onset of the Great Recession into early 2012, concluding that cyclical factors likely accounted for most of the drop. In this post, we examine how labor force participation has changed since the start of 2012 (and admittedly, we’re much less ambitious in our analysis than Julie). Motivating our analysis, in part, is the observation that much of the recent decline in the labor force participation rate (LFPR) is related to rising retirements (see the November 19 Research Rap by Shigeru Fujita). This is not surprising, as the percentage of individuals aged 65 and older in the population has been increasing sharply over the last half decade. That said, our approach indicates that the LFPR of prime-age workers (ages 25–54) continues to fall, and this is an important source of the overall decline in LFPR in the recent data. Such declines in LFPR in these age categories should be less related to retirement decisions, keeping on the table the possibility that a weak overall labor market remains a key drag on labor force participation.
A straightforward decomposition illustrates that the decline in LFPR among prime-age workers is a major contributor to the overall decline in LFPR. To see this, we separate the change in LFPR into three components: one that measures the change due to shifts in the LFPR within age groups—the within effect; one that measures changes due to population shifts across age groups—the between effect; and one that allows for correlation across the two effects—a covariance term. It turns out that the covariance term is always very close to zero, so we omit it from the discussion here. The analysis breaks the data down into five age groups: 16–24, 25–34, 35–44, 45–54, and 55+.
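The decomposition can be sketched in a few lines of code. The snippet below is purely illustrative: the age-group shares and participation rates are made-up placeholder numbers, not the actual data behind the charts in this post.

```python
# Shift-share decomposition of a change in the overall labor force
# participation rate (LFPR) into within-group, between-group, and
# covariance terms. All shares and rates are hypothetical placeholders.

groups = ["16-24", "25-34", "35-44", "45-54", "55+"]

share0 = [0.16, 0.17, 0.17, 0.19, 0.31]  # period-0 population shares
rate0 = [0.55, 0.82, 0.83, 0.80, 0.40]   # period-0 group LFPRs
share1 = [0.15, 0.17, 0.16, 0.18, 0.34]  # period-1 population shares
rate1 = [0.56, 0.81, 0.82, 0.79, 0.40]   # period-1 group LFPRs

# Within effect: changes in group LFPRs, holding shares at period 0
within = sum(s0 * (r1 - r0) for s0, r0, r1 in zip(share0, rate0, rate1))

# Between effect: changes in shares, holding group LFPRs at period 0
between = sum(r0 * (s1 - s0) for r0, s0, s1 in zip(rate0, share0, share1))

# Covariance term: interaction of the two changes
cov = sum((s1 - s0) * (r1 - r0)
          for s0, s1, r0, r1 in zip(share0, share1, rate0, rate1))

# The three pieces sum exactly to the overall change in the LFPR
total = (sum(s * r for s, r in zip(share1, rate1))
         - sum(s * r for s, r in zip(share0, rate0)))
```

The identity behind the code is just an expansion of the change in the share-weighted average: the within and between terms use period-0 weights, and the covariance term picks up the cross-product of the two changes.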
The chart presents the decomposition from Q1 2012 to Q3 2013. Over this period, the overall LFPR declined by half a percentage point, from 63.8 percent to 63.3 percent. The blue areas represent the change due to within-age-group effects, and the green areas represent the change due to between-age-group effects. The sum of the bars is equal to the overall change in labor force participation.
Three key results emerge. First, increases in labor force participation for the youngest age group boosted overall labor force participation by 0.075 percentage points. Second, the growing population share of the 55+ age group reduced the overall LFPR over the period by 0.21 percentage points, accounting for roughly 40 percent of the overall decline. Third, labor force participation for prime-age workers continued to fall. The combined within effect for the prime-age groups (25–34, 35–44, and 45–54) reduced the participation rate by 0.28 percentage points—a little over half of the overall decline in labor force participation. Additional declines in labor force participation were associated with the reduction in population shares of prime-age workers.
From an accounting standpoint, the analysis shows that the fall in the LFPR for prime-age workers is a main contributing factor to the recent decline in labor force participation. Indeed, the LFPR of prime-age workers fell from 81.6 percent to 81.0 percent from Q1 2012 to Q3 2013, with similar declines for both men and women. Given that prime-age workers make up more than half of the population, it is not surprising that the drop in the LFPR for these age groups accounts for a substantial fraction of the overall decline.
To put this in perspective, we present the same decomposition from Q1 2010 to Q4 2011, a period over which the LFPR declined by 0.8 percentage points. While the magnitude of the overall change is different, the decomposition results are quite similar. The decline in participation rates for prime-age workers accounts for a little over 60 percent of the overall decline, with a substantial drag from the rise in the share of older workers (accounting for a third of the drop). In short, the changes in participation due to within and between effects over the first two years of the labor market recovery look quite similar to those of the second two years.
A corollary to this analysis is that these sources of decline in labor force participation have allowed the unemployment rate to decline more sharply than expected, given the moderate employment growth observed. We will not take a stand on whether these are “wrong” or “right” reasons for unemployment rate declines. Rather, we note that the patterns observed early in the recovery are still in place (more or less) in the recent data.
By Timothy Dunne, a research economist and policy adviser,
and Ellie Terry, an economic policy analysis specialist, both in the research department of the Atlanta Fed
December 04, 2013
Is (Risk) Sharing Always a Virtue?
The financial system cannot be made completely safe because it exists to allocate funds to inherently risky projects in the real economy. Thus, an important question for policymakers is how best to structure the financial system to absorb these losses while minimizing the risk that financial sector failures will impair the real economy.
Standard theories would predict that one good way of reducing financial sector risk is diversification. For example, the financial system could be structured to facilitate the development of large banks, a point often made by advocates for big banks such as Steve Bartlett. Another, not mutually exclusive, way of enhancing diversification is to create a system that shares risks across banks. An example is the Dodd-Frank Act mandate requiring formerly over-the-counter derivatives transactions to be centrally cleared.
However, do these conclusions based on individual bank stability necessarily imply that risk sharing will make the financial system safer? Is it even relevant to the principal risks facing the financial system? Some of the papers presented at the recent Atlanta Fed conference, "Indices of Riskiness: Management and Regulatory Implications," broadly addressed these questions and others. Other papers discuss the impact of bank distress on local economies, methods of predicting bank failure, and various aspects of incentive compensation paid to bankers (which I discuss in a recent Notes from the Vault).
The stability implications of greater risk sharing across banks are explored in "Systemic Risk and Stability in Financial Networks" by Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. They develop a theoretical model of risk sharing in networks of banks. The most relevant comparison they draw is between what they call a "complete financial network" (maximum possible diversification) and a "weakly connected" network in which there is substantial risk sharing between pairs of banks but very little risk sharing outside the individual pairs. Consistent with the standard view of diversification, the complete networks experience few, if any, failures when individual banks are subject to small shocks, but some pairs of banks do fail in the weakly connected networks. However, at some point the losses become so large that the complete network undergoes a phase transition, spreading the losses in a way that causes the failure of more banks than would have occurred with less risk sharing.
Extrapolating from this paper, one could imagine that risk sharing could induce a false sense of security that would ultimately make a financial system substantially less stable. At first a more interconnected system shrugs off smaller shocks with seemingly no adverse impact. This leads bankers and policymakers to believe that the system can handle even more risk because it has become more stable. However, at some point the increased risk taking leads to losses sufficiently large to trigger a phase transition, and the system proves to be even less stable than it was with weaker interconnections.
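A deliberately stylized toy model can make the phase-transition intuition concrete. To be clear, this sketch is not the Acemoglu-Ozdaglar-Tahbaz-Salehi model itself, and all numbers are hypothetical: each of the banks can absorb a fixed loss, and a shock to one bank is either shared equally by all banks (the complete network) or shared only within one pair (the weakly connected network).

```python
# Toy sketch of the diversification trade-off — a stylized illustration
# of the phase-transition intuition, NOT the actual network model in the
# paper. All parameter values are hypothetical.

N_BANKS = 10
BUFFER = 1.0  # loss each bank can absorb before failing

def failures_complete(shock):
    """Complete network: a shock to one bank is shared equally by all banks."""
    per_bank = shock / N_BANKS
    return N_BANKS if per_bank > BUFFER else 0

def failures_paired(shock):
    """Weakly connected network: the shock is shared only within one pair."""
    per_bank = shock / 2
    return 2 if per_bank > BUFFER else 0

# A small shock: full risk sharing absorbs it; pairwise sharing does not.
small = 4.0
# A large shock: full risk sharing propagates it to every bank,
# while pairwise sharing still loses only the affected pair.
large = 50.0
```

In this toy, the complete network strictly dominates for small shocks and is strictly worse for large ones, which is the qualitative pattern the paper's richer model delivers.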
While interconnections between financial firms are a theoretically important determinant of contagion, how important are these connections in practice? "Financial Firm Bankruptcy and Contagion," by Jean Helwege and Gaiyan Zhang, analyzes the spillovers from distressed and failing financial firms from 1980 to 2010. Looking at the financial firms that failed, they find that counterparty risk exposure (the interconnections) tends to be small, with no single exposure above $2 billion and the average a mere $53.4 million. They note that these small exposures are consistent with regulations that limit banks' exposure to any single counterparty. They then look at information contagion, in which the disclosure of distress at one financial firm may signal adverse information about the quality of a rival's assets. They find that the effect of these signals is comparable to that found for direct credit exposure.
Helwege and Zhang's results suggest that we should be at least as concerned about separate banks' exposure to an adverse shock that hits all of their assets as we should be about losses that are shared through bank networks. One possible common shock is the likely increase in the level and slope of the term structure as the Federal Reserve begins tapering its asset purchases and starts a process ultimately leading to the normalization of short-term interest rate setting. Although historical data cannot directly address banks' current exposure to such shocks, such data can provide evidence on banks' past exposure. William B. English, Skander J. Van den Heuvel, and Egon Zakrajšek presented evidence on this exposure in the paper "Interest Rate Risk and Bank Equity Valuations." They find a significant decrease in bank stock prices in response to an unexpected increase in the level or slope of the term structure. The response to slope increases (likely the primary effect of tapering) is somewhat attenuated at banks with large maturity gaps. One explanation for this finding is that these banks may partially recover their current losses with gains they will accrue when booking new assets (funded by shorter-term liabilities).
Overall, the papers presented in this part of the conference suggest that more risk sharing among financial institutions is not necessarily always better. While it may provide the appearance of increased stability in response to small shocks, it may leave the system less robust to larger shocks. The papers also suggest that shared exposures to a common risk are likely to present at least as important a threat to financial stability as interconnections among financial firms, especially as the term structure and the overall economy respond to the eventual return to normal monetary policy. Along these lines, I recently offered some thoughts on how to reduce the risk of large widespread losses due to exposures to a common (credit) risk factor.
By Larry Wall, director of the Atlanta Fed's Center for Financial Innovation and Stability
Note: The conference "Indices of Riskiness: Management and Regulatory Implications" was organized by Glenn Harrison (Georgia State University's Center for the Economic Analysis of Risk), Jean-Charles Rochet (University of Zurich), Markus Sticker, Dirk Tasche (Bank of England, Prudential Regulation Authority), and Larry Wall (the Atlanta Fed's Center for Financial Innovation and Stability).
November 20, 2013
The Shadow Knows (the Fed Funds Rate)
The fed funds rate has been at the zero lower bound (ZLB) since the end of 2008. To provide a further boost to the economy, the Federal Open Market Committee (FOMC) has embarked on unconventional forms of monetary policy (a mix of forward guidance and large-scale asset purchases). This situation has created a bit of an issue for economic forecasters, who use models that attempt to summarize historical patterns and relationships.
The fed funds rate, which usually varies with economic conditions, has now been stuck at near zero for 20 quarters, damping its historical correlation with economic variables like real gross domestic product (GDP), the unemployment rate, and inflation. As a result, forecasts that stem from these models may not be useful or meaningful even after policy has normalized.
A related issue for forecasters during the ZLB period is how to characterize unconventional monetary policy in a meaningful way inside their models. Attempts to summarize current policy have led some forecasters to create a "virtual" fed funds rate, as originally proposed by Chung et al. and incorporated by us in this macroblog post. This approach uses a conversion factor to translate changes in the Fed's balance sheet into fed funds rate equivalents. However, it admits no role for forward guidance, which is one of the primary tools the FOMC is currently using.
So what's a forecaster to do? Thankfully, Jim Hamilton over at Econbrowser has pointed to a potential patch. However, this solution carries with it a nefarious-sounding moniker—the shadow rate—which calls to mind a treacherous journey deep within the hinterlands of financial economics, fraught with pitfalls and danger.
The shadow rate can be negative at the ZLB; it is estimated using Treasury forward rates out to a 10-year horizon. Fortunately we don't need to take a jaunt into the hinterlands, because the paper's authors, Cynthia Wu and Dora Xia, have made their shadow rate publicly available. In fact, they write that all researchers have to do is "...update their favorite [statistical model] using the shadow rate for the ZLB period."
That's just what we did. We took five of our favorite models (Bayesian vector autoregressions, or BVARs) and spliced in the shadow rate starting in Q1 2009. The shadow rate is currently hovering around minus 2 percent, suggesting a more accommodative environment than what the effective fed funds rate (stuck around 15 basis points) can deliver. Given the extra policy accommodation, we'd expect to see a bit more growth and a lower unemployment rate when using the shadow rate.
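Mechanically, the splice itself is simple: before estimation, the policy-rate series handed to the model switches from the effective fed funds rate to the shadow rate at the start of the ZLB period. The sketch below illustrates the idea with made-up placeholder values, not actual data, and stands in for whatever data-handling code a forecaster would really use.

```python
# Sketch of splicing a shadow rate into a policy-rate series at the ZLB.
# All rate values below are hypothetical placeholders, not actual data.

effective_ffr = {("2008", "Q3"): 1.94, ("2008", "Q4"): 0.51,
                 ("2009", "Q1"): 0.18, ("2009", "Q2"): 0.18}
shadow_rate = {("2009", "Q1"): -0.5, ("2009", "Q2"): -0.8}

ZLB_START = ("2009", "Q1")

def policy_rate(year, quarter):
    """Return the series the model sees: the shadow rate during the ZLB
    period, the effective fed funds rate before it."""
    if (year, quarter) >= ZLB_START:
        return shadow_rate[(year, quarter)]
    return effective_ffr[(year, quarter)]
```

The spliced series then replaces the fed funds rate wherever it enters the model, so the estimation machinery itself is untouched.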
Before showing the average forecasts that come out of our models, we want to point out a few things. First, these are merely statistical forecasts and not the forecast that our boss brings with him to FOMC meetings. Second, there are alternative shadow rates out there. In fact, St. Louis Fed President James Bullard mentioned another one about a year ago based on work by Leo Krippner. At the time, that shadow rate was around minus 5 percent, far below Wu and Xia's shadow rate (which was around minus 1.2 percent at the end of last year). Considering the disagreement between the two rates, we might want to take these forecasts with a grain of salt.
Caveats aside, we get a somewhat stronger path for real GDP growth and a lower unemployment rate path, consistent with what we'd expect additional stimulus to do. However, our core personal consumption expenditures inflation forecast still seems to suffer from the dreaded price puzzle. (We Googled it for you.)
Perhaps more important, the fed funds projections that emerge from this model appear to be much more believable. Rather than calling for an immediate liftoff, as the standard approach does, the average forecast of the shadow rate doesn't turn positive until the second half of 2015. This is similar to the most recent Wall Street Journal poll of economic forecasters, and the September New York Fed survey of primary dealers. The median respondent to that survey expects the first fed funds increase to occur in the third quarter of 2015. The shadow rate forecast has the added benefit of not being at odds with the current threshold-based guidance discussed in today's release of the minutes from the FOMC's October meeting.
Moreover, today's FOMC minutes stated, "modifications to the forward guidance for the federal funds rate could be implemented in the future, either to improve clarity or to add to policy accommodation, perhaps in conjunction with a reduction in the pace of asset purchases as part of a rebalancing of the Committee's tools." In this event, the shadow rate might be a useful scorecard for measuring the total effect of these policy actions.
It seems that if you want to summarize the stance of policy right now, just maybe...the shadow knows.
By Pat Higgins, senior economist, and
Brent Meyer, research economist, both of the Atlanta Fed's research department
November 15, 2013
Is Credit to Small Businesses Flowing Faster? Evidence from the Atlanta Fed Small Business Survey
The spigot of credit to small businesses appears to be turning faster. As of June 2013, outstanding amounts of small loans on the balance sheets of banks were 4 percent higher than their September 2012 levels, according to the Federal Deposit Insurance Corporation. While they are still 12 percent off 2007 levels, the recent increase is encouraging.
The turnaround in small loan portfolios is not the only sign of improved credit flows to small businesses. The Fed’s October 2013 senior loan officer survey indicates that credit terms to small firms have gradually eased since the second quarter of 2010. Approval ratings of banks and alternative lenders, as measured by Biz2Credit’s lending index, have also risen steadily over the past two years.
In addition to these positive signs, the Atlanta Fed’s third-quarter 2013 Small Business Survey has revealed signs of improvement among small business borrowers in the Southeast. The survey asked recent borrowers about their requests for credit and how successful they were at each place they applied. We also asked, “Over ALL your applications for credit, to what extent were your total financing needs met?” This measure of overall financing satisfaction showed some signs of improvement in the third quarter.
Chart 1 compares the overall financing satisfaction of small business borrowers in the first and third quarters of 2013. The portion of firms that received the full amount requested rose from 28 percent in the first quarter to 42 percent in the third quarter. Meanwhile, the portion that received none of the credit requested declined from 31 percent of the sample in the first quarter to 22 percent in the third quarter.
Further, financing satisfaction rose across a variety of dimensions. Chart 2 shows how average financing satisfaction changed for young firms and mature firms, across industries and by recent sales performance. In all cases, there were increases in the average amount of financing received from the first to the third quarter of 2013.
This broad-based increase in overall financing satisfaction is encouraging. Greater financial health of the applicant pool helped fuel the improvement in borrowing conditions. In the October survey, 52 percent of businesses reported that sales increased while 34 percent reported decreases. Sales have improved significantly from a year ago, when about as many firms reported sales increases as reported decreases. Measures of hiring and capital improvements over the year have also improved for the average firm in the survey (see chart 3).
Lending standards have been improving and small businesses have been slowly gaining momentum, but many obstacles remain. Open-ended questions in our survey revealed that small businesses are still concerned about a number of factors, including the general political and economic uncertainty, the impact of the Affordable Care Act, the higher collateral and personal guarantees required to obtain financing, and regulatory requirements that restrict lending. So while conditions on the ground seem to be improving for small businesses, there still appear to be headwinds that may be holding back a greater pace of improvement.
By Ellie Terry, an economic policy analysis specialist in the Atlanta Fed’s research department
November 14, 2013
Atlanta Fed's Jobs Calculator Drills Down to the States
In March 2012, the Federal Reserve Bank of Atlanta launched its Jobs Calculator, an application that illustrates the relationship between the unemployment rate, growth in payroll employment, the labor force participation rate, and a few other variables to boot. Most notably, it tells us how many jobs need to be created to achieve a specific unemployment rate within a given period of time. This tool has turned out to be a useful one for anchoring discussions about national employment growth and unemployment among policy makers and the media.
However, the national employment situation masks significant differences in state labor markets. For example, at the trough of the business cycle (June 2009), the national unemployment rate was 9.5 percent, but it ranged from 4.2 percent in North Dakota to 15.2 percent in Michigan. State policy makers, in managing the dynamics of their own employment situation, need to know the data on a state level.
We are pleased to announce that the Atlanta Fed recently unveiled the state-level Jobs Calculator. The same tool that has been used for national discussions is now available for state-level analyses (see the figure below).
Not only does this state tab allow a quick overview of the historical employment growth in each state (see, for example, Alabama's historical employment growth in the figure below), but it also has the same functionality as the national Jobs Calculator. (Because of the recent partial government shutdown, the data are updated only through August; state-level employment data for October will be available November 22.)
Like the national Jobs Calculator, the state-level version allows the user to input a target unemployment rate, choose the number of months desired to hit the target rate, and find out how many new jobs are required per month to get there. But the calculator is flexible enough to allow other interesting experiments as well.
Consider the case of Florida. During the recession, Florida experienced a significant decline in its population growth. It has gone from a high of about 0.2 percent growth per month (roughly 2.4 percent per year) to its current 0.115 percent growth per month (about 1.38 percent per year; see the figure below). Suppose policy makers in Florida want to know how a return to prerecession population growth might affect the number of jobs needed to maintain its current unemployment rate over the next 12 months. (Note that as of August, the unemployment rate in Florida was 7 percent.)
The calculator's default settings always answer the question, “How many jobs per month does it take to maintain today's unemployment rate over the next 12 months?” To answer our hypothetical policy makers' question, all they would have to do is enter a prerecession monthly population growth rate of 0.2 percent into Florida's state Jobs Calculator, leaving everything else the same. Given the current data in hand, we would discover that Florida would need to generate about 6,000 more jobs per month at the higher population growth than at the current—and lower—population growth to stabilize the unemployment rate at 7 percent.
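The arithmetic underlying this kind of experiment can be sketched in a few lines. To be clear, this is a back-of-the-envelope simplification, not the Atlanta Fed's actual implementation, and every input below is a hypothetical round number rather than Florida's real data.

```python
# Back-of-the-envelope sketch of the Jobs Calculator logic — an
# illustrative simplification, not the actual tool. All inputs are
# hypothetical round numbers.

def jobs_per_month(pop0, lfpr, u_now, u_target, pop_growth_m, months):
    """Monthly job gains needed to move the unemployment rate from u_now
    to u_target over `months` months, holding the labor force
    participation rate (lfpr) fixed."""
    emp_now = pop0 * lfpr * (1 - u_now)              # employment today
    pop_end = pop0 * (1 + pop_growth_m) ** months    # population at horizon
    emp_needed = pop_end * lfpr * (1 - u_target)     # employment at horizon
    return (emp_needed - emp_now) / months

# Holding a 7 percent unemployment rate for 12 months under two
# population growth assumptions (hypothetical inputs):
low = jobs_per_month(16_000_000, 0.60, 0.07, 0.07, 0.00115, 12)
high = jobs_per_month(16_000_000, 0.60, 0.07, 0.07, 0.002, 12)
# Faster population growth requires more jobs per month (high > low),
# which is the comparison described in the text.
```

The point of the sketch is only the comparative static: with a fixed participation rate and a fixed target unemployment rate, faster population growth mechanically raises the number of jobs needed each month to stand still.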
The data behind the state-level Jobs Calculator come from the establishment survey conducted by the U.S. Bureau of Labor Statistics (the same data used for the national Jobs Calculator), combined with the Local Area Unemployment Statistics (LAUS) programs run by each state. The LAUS contain the regional and state employment statistics that are consistent with data from the Census Bureau's Current Population Survey. State-level population estimates are provided by the U.S. Census Bureau (and are described in more detail here). You'll note that the LAUS data, especially for very small states, look more erratic than national or larger states' numbers—the unfortunate consequence of small sample sizes.
LAUS data are generally issued about the third Friday of each month following the reference month, which means that the state-level Jobs Calculator statistics will be updated about two weeks after the national Jobs Calculator. The schedule of release dates is available from the U.S. Bureau of Labor Statistics.
By Julie Hotchkiss, a research economist and policy adviser in the Atlanta Fed's research department
November 12, 2013
The End of Asset Purchases: Is That the Big Question?
Last Friday, Atlanta Fed President Dennis Lockhart delivered a speech at the University of Mississippi, the bottom line of which was reported by the Wall Street Journal's Michael Derby:
Federal Reserve Bank of Atlanta President Dennis Lockhart said Friday that central bank policy must remain very easy for some time to come, although he cautioned the exact mix of tools employed by the central bank will change over time...
"Monetary policy overall should remain very accommodative for quite some time," Mr. Lockhart said... "The mix of tools we use to provide ongoing monetary stimulus may change, but any changes will not represent a fundamental shift of policy"...
That's a pretty accurate summary, but Derby follows up with commentary that feels somewhat less accurate:
The big question about Fed policy is what the central bank does with its $85 billion-per-month bond-buying program. It had widely been expected to start slowing the pace of purchases starting in September, but when it didn't do that, expectations went into flux. Ahead of the jobs data Friday, many forecasters had gravitated to the view bond buying would be trimmed some time next spring. Now, a number of forecasters said the risk of the Fed slowing its asset buying sooner has risen.
Now, the views that I express here are not necessarily those of the Federal Reserve Bank of Atlanta. But in this case, I think I can fairly claim that what President Lockhart was saying was that the big question is not "what the central bank does with its $85 billion-per-month bond-buying program." The following part of President Lockhart's speech—reiterated today in a speech in Montgomery, Alabama—is worth emphasizing:
The FOMC [Federal Open Market Committee] is currently using two tools to maintain the desired degree of monetary accommodation—the policy interest rate and bond purchases. Importantly, the FOMC has stated that it intends to keep the short-term policy rate low at least until the unemployment rate falls below 6 1/2 percent. This "forward guidance" is meant to convey a sense of how long short-term interest rates will stay near current levels.
There is some confusion about how the Fed's forward guidance and asset purchase program relate to each other. I will give you my view.
In the toolkit the FOMC has at its disposal, there is a sense in which asset purchases and low policy rates are complementary. Asset purchases and forward guidance on interest rates are complements in the sense that they are both designed to put downward pressure on longer-term interest rates....
But there is also a sense in which these tools are substitutes. By substitutes I mean that guidance pointing to a sustained low policy rate and asset purchases are discrete tools that can be deployed independently or in varying combinations. They can be thought of as a particular policy tool mix chosen to fit the circumstances at this particular phase of the recovery.
In other words, there is an important difference between changing the amount of monetary stimulus and changing the tools deployed to provide that stimulus. When the only tool in play is the federal funds rate, equating adjustments in the Fed's policy rate with changes in the stance of monetary policy is, while not completely straightforward, relatively simple. With multiple tools in use, however, gauging the stance of monetary policy requires that the settings of all policy instruments be considered.
Suppose that the FOMC does scale back or end its asset purchases. Can that possibly be consistent with maintaining a constant degree of monetary stimulus? Sure, and one obvious option is to use adjustments to the forward guidance portion of the FOMC's current policy to provide additional stimulus as asset purchases are scaled back. There are pros and cons to that approach, many of which surfaced in the discussion of this paper, by the Federal Reserve Board's Bill English, David Lopez-Salido, and Bob Tetlow, which circulated last week. (See, for example, here, here, and here.)
In any event, a decision to replace asset purchases with some other form of stimulus—be it extending forward guidance or another alternative—would necessarily raise the question: Why bother? One answer might arise from the cost and efficacy considerations that the FOMC has identified as part of the calculus for whether to continue with asset purchases.
Here again, the fact of multiple tools is germane. With the option of different policy mixes, altering the asset purchase program on grounds of cost or efficacy need not mean that the costs of the program are large or the purchases themselves lack effect. It need only mean that the costs might be larger, or the purchases less effective, than delivering the same amount of stimulus with some alternative set of tools. I give the last word to President Lockhart:
Going forward, it may be appropriate to adjust the policy tool mix. That will depend on circumstances and the economic diagnosis of the moment.
By Dave Altig, executive vice president and research director at the Atlanta Fed
October 18, 2013
Why Was the Housing-Price Collapse So Painful? (And Why Is It Still?)
Foresight about the disaster to come was not the primary reason this year’s Nobel Prize in economics went to Robert Shiller (jointly with Eugene Fama and Lars Hansen). But Professor Shiller’s early claim that a housing-price bubble was full on, and his prediction that trouble was a-comin’, is arguably the primary source of his claim to fame in the public sphere.
Several years down the road, the causes and effects of the housing-price run-up, collapse, and ensuing financial crisis are still under the microscope. Consider, for example, this opinion by Dean Baker, co-director of the Center for Economic and Policy Research:
...the downturn is not primarily a “financial crisis.” The story of the downturn is a simple story of a collapsed housing bubble. The $8 trillion housing bubble was driving demand in the U.S. economy in the last decade until it collapsed in 2007. When the bubble burst we lost more than 4 percentage points of GDP worth of demand due to a plunge in residential construction. We lost roughly the same amount of demand due to a falloff in consumption associated with the disappearance of $8 trillion in housing wealth.
The collapse of the bubble created a hole in annual demand equal to 8 percent of GDP, which would be $1.3 trillion in today’s economy. The central problem facing the U.S., the euro zone, and the U.K. was finding ways to fill this hole.
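Baker's back-of-envelope arithmetic can be checked directly. The sketch below uses the figures from the quoted passage plus a rough 2013 U.S. GDP level of about $16.25 trillion, which is my assumption, not a number Baker supplies:

```python
# Reproducing Baker's demand-hole arithmetic with illustrative round numbers.
# The $8 trillion wealth loss and the two 4%-of-GDP demand losses come from
# the quoted passage; the 2013 GDP level is an assumption.

bubble_wealth_loss = 8.0   # trillions of dollars of housing wealth erased
construction_drop = 0.04   # residential construction demand lost, share of GDP
consumption_drop = 0.04    # consumption falloff tied to the wealth loss, share of GDP
gdp_2013 = 16.25           # rough 2013 U.S. GDP in trillions (assumed)

hole_share = construction_drop + consumption_drop   # 8% of GDP
hole_dollars = hole_share * gdp_2013                # roughly $1.3 trillion

# Implied marginal propensity to consume out of housing wealth:
# a 4%-of-GDP consumption drop generated by an $8 trillion wealth loss.
implied_mpc = (consumption_drop * gdp_2013) / bubble_wealth_loss

print(f"Demand hole: {hole_share:.0%} of GDP, about ${hole_dollars:.1f} trillion")
print(f"Implied MPC out of housing wealth: about {implied_mpc:.2f} per dollar")
```

The 8-cents-per-dollar propensity implied by these numbers is worth keeping in mind when reading the wealth-effect estimates discussed below.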
In part, Baker’s post relates to an ongoing pundit catfight, which Baker himself concedes is fairly uninteresting. As he says, “What matters is the underlying issues of economic policy.” Agreed, and in that light I am skeptical about dismissing the centrality of the financial crisis to the story of the downturn and, perhaps more important, to the tepid recovery that has followed.
Interpreting what Baker has in mind is important, so let me start there. I have not scoured Baker’s writings for pithy hyperlinks, but I assume that his statement cited above does not deny that the immediate post-Lehman period is best characterized as a period of panic leading to severe stress in financial markets. What I read is his assertion that the basic problem—perhaps outside the crisis period in late 2008—is a rather plain-vanilla drop in wealth that has dramatically suppressed consumer demand, and with it economic growth: an assertion that the decline in wealth is what led us into the recession, what accounts for the depth and duration of the recession, and what’s responsible for the shallow recovery since.
With respect to the pace of recovery, evidence supports the proposition that financial crises without housing busts are not so unique—or if they are, the data tend to associate financial-related downturns with stronger-than-average recoveries. Mike Bordo and Joe Haubrich, respectively from Rutgers University and the Federal Reserve Bank of Cleveland, argue that the historical record of U.S. recessions leads us to view housing and the pace of residential investment as the key to whether tepid recoveries will follow sharp recessions:
Our analysis of the data shows that steep expansions tend to follow deep contractions, though this depends heavily on when the recovery is measured. In contrast to much conventional wisdom, the stylized fact that deep contractions breed strong recoveries is particularly true when there is a financial crisis. In fact, on average, it is cycles without a financial crisis that show the weakest relation between contraction depth and recovery strength. For many configurations, the evidence for a robust bounce-back is stronger for cycles with financial crises than those without...
Our results also suggest that a sizeable fraction of the shortfall of the present recovery from the average experience of recoveries after deep recessions is due to the collapse of residential investment.
From here, however, it gets trickier to reach conclusions about why changes in housing values are so important.
Simply put, why should there be a “wealth effect” at all? If the price of my house falls and I suffer a capital loss, I do in fact feel less wealthy. But all potential buyers of my house just gained the opportunity to obtain my house at a lower price. For them, the implied wealth gain is the same as my loss. If buyers and sellers essentially behave the same way, why should there be a large impact on consumption? *
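The offsetting-gains logic can be made concrete with a toy two-household example (the price figures are purely illustrative):

```python
# Toy illustration: a house-price decline redistributes rather than destroys
# purchasing power when buyers and sellers are otherwise symmetric.

house_price_before = 300_000
house_price_after = 240_000
price_change = house_price_after - house_price_before   # -60,000

owner_wealth_change = price_change    # the owner's capital loss
buyer_wealth_change = -price_change   # the prospective buyer needs $60,000 less

aggregate_change = owner_wealth_change + buyer_wealth_change
print(f"Owner: {owner_wealth_change:+,}  "
      f"Buyer: {buyer_wealth_change:+,}  Net: {aggregate_change:+,}")
```

With gains and losses washing out in the aggregate, something beyond the pure price change has to be doing the work, which is where the credit-market story picks up.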
I think this notion quickly leads you to the thought there is something fundamentally special about housing assets and that this special role relates to credit markets and finance. This angle is clearly articulated in these passages from a Bloomberg piece earlier in the year, one of a spate of articles in the spring about why rapidly recovering house prices were apparently not driving the recovery into a higher gear:
The wealth effect from rising house prices may not be as effective as it once was in spurring the U.S. economy...
The wealth effect “is much smaller,” said Amir Sufi, professor of finance at the University of Chicago Booth School of Business. Sufi, who participated in last year’s central-bank conference at Jackson Hole, Wyoming, reckons that each dollar increase in housing wealth may yield as little as an extra cent in spending. That compares with a 3-to-5-cent estimate by economists prior to the recession.
Many homeowners are finding they can’t refinance their mortgages because banks have tightened credit conditions so much they’re not eligible for new loans. Most who can refinance are opting not to withdraw equity after the first nationwide decline in house prices since the Great Depression reminded them home values can fall as well as rise...
Others are finding it difficult to refinance because credit has become a lot harder to come by. And that situation could worsen as banks respond to stepped-up government oversight.
“Credit is going to get tighter before it gets easier,” said David Stevens, president and chief executive officer of the Washington-based Mortgage Bankers Association...
“Households that have been through foreclosure or have underwater mortgages or are otherwise credit-constrained are less able than other households to take advantage” of low interest rates, Fed Governor Sarah Bloom Raskin said in an April 18 speech in New York.
(I should note that Sufi et al. previously delved into the relationship between household balance sheets and the economic downturn here.)
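The gap between Sufi's post-recession estimate and the earlier consensus is large in dollar terms. A quick sketch, applying each marginal propensity to consume (MPC) to the $8 trillion wealth decline cited earlier (the MPC figures come from the quoted passage; the calculation is mine):

```python
# Spending impact of an $8 trillion housing-wealth decline under different
# assumed marginal propensities to consume out of housing wealth.

wealth_change = -8.0e12   # dollars

for label, mpc in [("Sufi (post-recession)", 0.01),
                   ("Pre-recession, low", 0.03),
                   ("Pre-recession, high", 0.05)]:
    spending = mpc * wealth_change
    print(f"{label:22s} MPC={mpc:.2f} -> "
          f"spending change ${spending / 1e9:+,.0f} billion")
```

An $80 billion drag versus a $240–400 billion drag is the difference between a headwind and a hole, which is why the size of the wealth effect matters so much for the diagnosis.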
A more systematic take comes from the Federal Reserve Board’s Matteo Iacoviello:
Empirically, housing wealth and consumption tend to move together: this could happen because some third factor moves both variables, or because there is a more direct effect going from one variable to the other. Studies based on time-series data, on panel data and on more detailed, recent micro data suggest that a considerable portion of the effect of housing wealth on consumption reflects the influence of changes in housing wealth on borrowing against such wealth.
That sounds like a financial problem to me and, in the spirit of Baker’s plea that it is the policy that matters, this distinction is more than semantic. The policy implications of an economic shock that alters the capacity to engage in borrowing and lending are not necessarily the same as those that result from a straightforward decline in wealth.
Having said that, it is not so clear how the policy implications are different. One possibility is that diminished access to credit markets also weakens policy-transmission mechanisms, calling for even more aggressive demand-oriented “pump-priming” policies of the sort Dean Baker advocates. But it is also possible that we have entered a period of deep structural repair that only time (and not merely government stimulus) can (or should) engineer: deleveraging and balance sheet repair, sectoral resource reallocation, new consumption habits, new business models driven by both market and regulatory imperatives, you name it.
In my view, it’s not yet clear which policy approach is closest to optimal. But I am fairly well convinced that good judgment will require us to think of the past decade as the financial event it was, and in many ways still is.
*Update: A colleague pointed out that my example describing housing price changes and wealth effects may be simplified to the point of being misleading. Implicitly, I am in fact assuming that the flow of housing services derived from housing assets is fixed, a condition that obviously would not hold in general. See section 3 of the Iacoviello paper cited above for a theoretical description of why, to a first approximation, we would not expect there to be a large consumption effect from changes in housing values.
By Dave Altig, executive vice president and research director at the Atlanta Fed
October 09, 2013
Delving into Labor Markets
Though never far from the headlines, the Federal Reserve's dual mandate comes front and center again with the announcement today of President Obama's nomination of Fed Vice Chair Janet Yellen as the next chair of the Board of Governors. Inevitably, analysis will turn to discussions of who is a hawk and who is a dove, who cares relatively more about inflation, and who cares relatively more about growth and employment.
That's unfortunate, because such characterizations really do miss the point. The debate among different policymakers is not about whether person A is more concerned about jobs and unemployment than person B, but about legitimate and longstanding conversations about what accounts for the performance of labor markets and what role monetary policy might have in the event that performance is judged to be subpar.
As it happens, the Atlanta Fed's most recent contribution to this discussion came last week in the form of the annual employment conference sponsored by the Bank's Center for Human Capital Studies. Organized, as in past years, by Richard Rogerson (Princeton University), Robert Shimer (University of Chicago), and Melinda Pitts (Federal Reserve Bank of Atlanta), the conference explored the causes of the continued weak labor market recovery in the United States. The existing literature has suggested a number of possibilities: wage rigidities, mismatch between workers' skills and the skills required by new jobs, extended unemployment insurance benefits and other government policy changes, and firms' reorganizing and asking workers to do more. The papers sought to analyze and document the importance of these factors for the slow recovery.
One notable policy change in the recent recession was the unprecedented expansion of unemployment insurance (UI) benefits to as long as 99 weeks for a very large fraction of UI-eligible workers. Did this increase play an important role in high levels of unemployment? Two papers from the conference addressed this question from different perspectives. "Do Extended Unemployment Benefits Lengthen Unemployment Spells? Evidence from Recent Cycles in the U.S. Labor Market," by Henry S. Farber and Robert G. Valletta, assessed the extent to which extended UI benefits result in higher unemployment because workers choose to remain unemployed longer. They find a statistically significant effect of longer UI durations on the duration of unemployment spells, but they conclude that the overall contribution to the unemployment rate was less than half a percentage point. Because the aggregate unemployment rate rose by more than 5 percentage points, this effect accounts for less than 10 percent of the overall increase.
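The decomposition behind that "less than 10 percent" figure is simple enough to verify (both inputs are taken from the summary above):

```python
# Farber and Valletta, as summarized above: extended UI added at most half a
# percentage point to an unemployment-rate rise of more than five points.

ui_effect_pp = 0.5     # upper bound on the UI contribution, percentage points
total_rise_pp = 5.0    # approximate rise in the unemployment rate

share = ui_effect_pp / total_rise_pp
print(f"UI extensions account for at most {share:.0%} of the rise in unemployment")
```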
"Unemployment Benefits and Unemployment in the Great Recession: The Role of Macro Effects," by Marcus Hagedorn, Fatih Karahan, Iourii Manovskii, and Kurt Mitman, offered a different perspective. The authors look at the evolution of unemployment rates in counties that are adjacent but lie in different states. They use the fact that the timing of extended benefits occurs at different times across states to identify the effect of extended UI durations on country-level unemployment. They find that the effects are sufficiently large that the increase in UI duration can account for virtually all of the increase in unemployment.
While seemingly at odds, the results of these two studies are consistent. The first paper shows that the job-finding rate for workers with relatively longer benefits did not decrease that much compared with the rate for workers with shorter-duration benefits, holding the overall unemployment rate constant. The second paper argues that the job-finding rate decreases for everyone when benefits are extended. The authors find that when some workers have access to longer-duration UI benefits, being unemployed is not as painful for them, which puts upward pressure on wages. To the extent that firms cannot target their job openings toward workers without access to UI, firms may be less likely to create jobs, making it harder for all workers to get job offers. The impact on uninsured workers may be as large as the impact on insured workers, and so the microeconomic estimates in Farber and Valletta will not necessarily uncover UI's total impact on the unemployment rate.
The possible role of wage rigidities has figured prominently in many accounts of the large increase in unemployment during the recent recession. Two papers considered the importance of this explanation. "Wage Adjustment in the Great Recession," by Michael Elsby, Donggyun Shin and Gary Solon, used microdata from the U.S. Census Bureau's Current Population Survey to examine the extent to which wages are sticky. The paper finds that there has been less response in average real wages during the recent recession than in previous recessions, perhaps suggesting that real wage rigidity contributed to the large increase in unemployment. However, they also show that wages at the individual level are really quite flexible. Specifically, relatively few individuals have zero nominal wage growth from one year to the next, and many people experience decreases in nominal wage rates.
A key issue in the theoretical literature is the extent to which wage stickiness affects new hires versus existing workers. In "How Sticky Wages in Existing Jobs Can Affect Hiring," authors Mark Bils, Yongsung Chang and Sun-Bin Kim show that even if wages for new hires are completely flexible, sticky wages in existing jobs may nonetheless have large effects on unemployment fluctuations when one allows for an "effort decision" for existing workers. This decision means that in response to negative shocks, firms require existing workers to expend more effort given that their wage is fixed, decreasing the need to hire new workers. The authors show that this effect is quantitatively significant and can come close to resolving the unemployment volatility puzzle, which relates to the large fluctuations in unemployment relative to productivity.
An empirical regularity that has appeared in the last few years is an outward shift in the Beveridge curve, which relates the unemployment rate to the level of vacancies. One interpretation of this upward shift is that the matching of unemployed workers and vacancies has worsened. Yet there is a lot of variety in the job-search effort by workers with different characteristics, such as the length of unemployment, whether they are on temporary layoff, and so on. In "Measuring Matching Efficiency with Heterogeneous Jobseekers," Robert Hall and Sam Schulhofer-Wohl devise a method for incorporating this heterogeneity into the analysis and show that there has indeed been a decrease in the matching rate for workers during the last few years. It will be important for future research to determine how much this decrease reflects a decline in search intensity or whether the lower job-finding rates represent a decrease for a given level of search intensity.
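The logic of "backing out" matching efficiency can be illustrated with a standard aggregate Cobb–Douglas matching function. This is a textbook simplification, not the heterogeneous-jobseeker method Hall and Schulhofer-Wohl actually develop, and every number below is made up for illustration:

```python
# Back out matching efficiency A from a Cobb-Douglas matching function,
#   hires = A * unemployed^alpha * vacancies^(1 - alpha).
# An outward Beveridge-curve shift shows up as a fall in A: the same
# unemployment and vacancies produce fewer hires.

ALPHA = 0.5   # assumed elasticity of matches with respect to unemployment

def matching_efficiency(hires, unemployed, vacancies, alpha=ALPHA):
    """Solve hires = A * u^alpha * v^(1-alpha) for A."""
    return hires / (unemployed ** alpha * vacancies ** (1 - alpha))

# Hypothetical pre- and post-recession monthly figures (millions of workers).
pre = matching_efficiency(hires=4.5, unemployed=7.0, vacancies=4.5)
post = matching_efficiency(hires=4.0, unemployed=11.0, vacancies=3.5)

print(f"Matching efficiency before: {pre:.2f}, after: {post:.2f} "
      f"({post / pre - 1:+.0%} change)")
```

In this made-up example, higher unemployment and fewer vacancies deliver only slightly fewer hires than before, yet implied efficiency still falls, which is the sense in which the curve has shifted out.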
Related to the two issues of nominal rigidities and mismatch, in the paper "Labor Mobility within Currency Unions," Emmanuel Farhi and Ivan Werning study the role of labor mobility in diminishing the effects associated with nominal rigidities. For example, some researchers have suggested that a key difference between the apparent success of the United States relative to the euro zone is that U.S. labor is more mobile. Farhi and Werning argue that one should not assume that mobility necessarily reduces the effects of nominal rigidities. In particular, they conclude that mobility eases the effects of nominal rigidities only if goods markets are well integrated.
Two papers focused on the nature of worker mobility across firms in the recent recession. In "Worker Flows over the Business Cycle: The Role of Firm Quality," Lisa Kahn and Erika McEntarfer examine recent changes in flows of workers between firms that offer jobs of differing quality. They find that lower-quality firms decreased both hiring and separations by large and equal amounts, whereas high-quality firms have much smaller declines in both hiring and separations. The net result is that the fraction of workers in lower-quality jobs tends to increase during recessions.
In closely related work, "Did the Job Ladder Fail after the Great Recession?" by Giuseppe Moscarini and Fabien Postel-Vinay uses data from the U.S. Bureau of Labor Statistics' Job Openings and Labor Turnover Survey (JOLTS) to study the hiring and separation patterns across firms of different sizes. They determine that the pattern of firm growth across size classes was different during this recession than in previous recessions. In particular, they find that following the Lehman Brothers collapse, smaller firms actually fared worse than larger firms, perhaps because financing constraints had more severe consequences for smaller firms.
As the provisions in the Affordable Care Act (ACA) take effect in the coming months, there may be large effects not only on the market for health care but also on the labor market. In particular, the ACA will implicitly introduce taxes and subsidies that will differ across firms and workers of different types. In "Effects of the Affordable Care Act on the Amount and Composition of Labor Market Activity," Trevor Gallen and Casey Mulligan develop a framework to think about how these provisions will influence labor market outcomes across different sectors and worker types, and they use a calibrated version of the model to quantify the effects. The authors predict that the ACA will substantially reduce the return to market work for low-skilled individuals and that a large number of individuals who currently receive health insurance through their employers will end up purchasing insurance through the exchanges established as part of the ACA.
The conference also featured a presentation by Ed Lazear, "The New Normal? Productivity and Employment during the Recession and Recovery." The talk highlighted three themes from Lazear's recent research. First, productivity did not decline in the recent recession—as it typically had done in previous recessions—perhaps reflecting that workers expend more effort during periods of high unemployment since they fear unemployment more in a weak labor market. Second, the unemployment rate is a less useful indicator of the overall state of the labor market during the current recovery (in recent years the decline in the unemployment rate has not been accompanied by an increase in the employment-to-population ratio, since labor force participation has declined). The third theme is that the deterioration in labor market outcomes during the recent recession should be interpreted as cyclical rather than structural and, hence, a labor market recovery is likely once GDP growth is stronger.
We certainly wouldn't claim that the conference put to rest any of the relevant questions that will confront the Federal Open Market Committee and its new chair going forward. But we do believe that continuing to support the dissemination of the type of research presented at this conference gives us a fighting chance.
By Richard Rogerson of Princeton University and Robert Shimer of the University of Chicago, both advisers to the Atlanta Fed's Center for Human Capital Studies, and Melinda Pitts, director of the Atlanta Fed's Center for Human Capital Studies
October 04, 2013
Certain about Uncertainty
The Baker-Bloom-Davis index of Economic Policy Uncertainty hit 162 in September, up from 102 in August and the highest level seen since December 2012. With all this uncertainty, we can be certain that the events surrounding the government shutdown are having an impact.
This notion of increased uncertainty is captured nicely in our most recent poll of small businesses in the Southeast (past results available here), which went live on September 30, the day before the government shutdown. Although the survey is still out in the field, some early results show:
- Most firms are expressing more uncertainty (see the chart),
- For a significant portion of firms, uncertainty today is having a greater impact than six months ago, and
- The government is heavily featured as a source of the uncertainty.
Of course, what we really care about is whether higher uncertainty is affecting economic activity. When asked, 45 percent of our respondents indicate that uncertainty is in fact having a greater impact on their business than six months ago, up from 37 percent in the first-quarter 2013 survey (relative to fall 2012). Further, fewer firms so far have indicated that uncertainty is having less of an impact. In the current survey, 9 percent of firms have reported less of an effect, compared with 16 percent at the close of last April's survey.
And what are the sources of uncertainty, as seen by our panel of businesses? Eighty percent of participants have responded to our open-ended question about the primary source(s) of uncertainty. The following "word cloud" summarizes their views:
We will get more responses to the survey over the next week or so, and these may show a different picture. But we're pretty certain of one thing—the duration of the current fiscal impasse in Washington will make a difference.
By John Robertson, vice president and senior economist, and
Ellyn Terry, economic policy analysis specialist, both in the research department of the Atlanta Fed