The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.


August 15, 2016

Payroll Employment Growth: Strong Enough?

The U.S. Bureau of Labor Statistics' estimate of nonfarm payroll employment is the most closely watched indicator of overall employment growth in the U.S. economy. By this measure, employment increased by 255,000 in July, well above the three-month average of 190,000. Yet despite this outsized gain, the unemployment rate barely budged. What gives?

Well, for a start, there is no formal connection between the payroll employment data and the unemployment rate data. The employment data used to construct the unemployment rate come from the Current Population Survey (CPS), a survey of households, whereas the payroll employment data come from a separate survey of establishments. However, it is possible to relate changes in the unemployment rate to three factors: the gap between the CPS and payroll measures of employment, changes in the labor force participation (LFP) rate, and the growth of payroll employment relative to the population.
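The accounting behind this decomposition can be sketched numerically. Writing the unemployment rate as one minus employment per capita divided by the LFP rate, the monthly change splits exactly into the three contributions by changing one factor at a time. The numbers below are illustrative stand-ins rather than the published BLS data, and the method in the linked note may define or order the terms differently; this is a minimal sketch of the idea:

```python
from math import isclose

def u_rate(pay_ratio, gap_ratio, lfpr):
    """Unemployment rate implied by payroll employment per capita
    (pay_ratio = payroll/population), the CPS-payroll gap per capita
    (gap_ratio = (CPS employment - payroll)/population), and the
    labor force participation rate."""
    return 1.0 - (pay_ratio + gap_ratio) / lfpr

def decompose(m0, m1):
    """Split the change in the unemployment rate between two months
    into three contributions by changing one factor at a time."""
    x0, g0, r0 = m0
    x1, g1, r1 = m1
    payroll = u_rate(x1, g0, r0) - u_rate(x0, g0, r0)  # payroll vs. population
    gap     = u_rate(x1, g1, r0) - u_rate(x1, g0, r0)  # CPS-payroll gap
    lfp     = u_rate(x1, g1, r1) - u_rate(x1, g1, r0)  # participation rate
    return payroll, gap, lfp

# Stylized mid-2016-scale numbers (illustrative, not the published data):
# payroll 144.4M, CPS employment 151.5M, population 253.6M, LFP rate 62.8%.
m0 = (144.4 / 253.6, (151.5 - 144.4) / 253.6, 0.628)
# Next month: payroll +255k, CPS employment +100k, population +200k, LFP flat.
m1 = (144.655 / 253.8, (151.6 - 144.655) / 253.8, 0.628)

payroll, gap, lfp = decompose(m0, m1)
total = u_rate(*m1) - u_rate(*m0)
assert isclose(payroll + gap + lfp, total)  # contributions sum to the total change
```

Because the three terms telescope, the contributions sum exactly to the total change, though a different ordering of the factors would apportion it slightly differently.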

The following chart shows the contribution of each of these three factors to the monthly change in the unemployment rate during the last year.

Chart: Contributions to the 1-month change in the unemployment rate

A note about the chart: The CPS employment and population measures have been smoothed to account for annual population control adjustments. The smoothed employment data are available here. The method used to compute the contributions is available here.

The black line is the monthly change in the unemployment rate (unrounded). Each green segment of a bar is the change in the unemployment rate coming from the gap between population growth and payroll employment growth. Because payroll employment has generally been growing faster than the population, it has helped make the unemployment rate lower than it otherwise would have been.

But as the chart makes clear, the other two factors can also exert a significant influence on the direction of the unemployment rate. The labor force participation rate contribution (the red segments of the bars) and the contribution from the gap between the CPS and payroll employment measures (blue segments) can vary a lot from month to month, and these factors can swamp the payroll employment growth contribution.

So any assumption that strong payroll employment gains in any particular month will automatically lead to a decline in the unemployment rate could, in fact, be wrong. But over longer periods, the mapping is a bit clearer because it is effectively smoothing the month-to-month variation in the three factors. For example, the following chart shows the contribution of the three factors to 12-month changes in the unemployment rate from July 2012 to July 2013, from July 2013 to July 2014, and so on.

Chart: Contributions to the 12-month change in the unemployment rate

Gains in payroll employment relative to the population have helped pull the unemployment rate lower. Moreover, prior to the most recent 12 months, declines in the LFP rate put further downward pressure on the unemployment rate. Offsetting this pressure to varying degrees has been the fact that the CPS measure of employment has tended to increase more slowly than the payroll measure, making the decline in the unemployment rate smaller than it would have been otherwise. During the last 12 months, the change in the LFP rate has been positive on balance, which is why the unemployment rate has declined considerably less than the relative strength of payroll employment growth would imply.

Going forward, another strong payroll employment reading for August is certainly no guarantee of a corresponding decline in the unemployment rate. But as shown by my colleagues David Altig and Patrick Higgins in an earlier macroblog post, under a reasonable range of assumptions for the trend path of population growth, the LFP rate, and the gap between the CPS and payroll survey measures of employment, payroll growth averaging above 150,000 a month should be enough to cause the unemployment rate to continue declining.

August 15, 2016 in Employment, Labor Markets, Unemployment | Permalink



August 11, 2016

Forecasting Loan Losses for Stress Tests

Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.

The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.

The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that in the severely adverse scenario the unemployment rate increases by between 3 and 5 percentage points, reaching a level of at least 10 percent. That statement observes that during the last half century, the United States has seen four severe recessions with that large an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three of them.
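The scenario rule just described can be sketched as a simple function. The 4-percentage-point increase used here is an assumed midpoint of the 3-to-5-point range; it is a stylized reading of the policy statement, not the Fed's actual scenario-generation procedure:

```python
def severely_adverse_peak(current_rate, increase=4.0):
    """Peak unemployment rate in a stylized 'severely adverse' scenario:
    the rate rises by `increase` percentage points (the policy statement
    describes a 3-to-5 point range) but reaches at least 10 percent."""
    return max(current_rate + increase, 10.0)

assert severely_adverse_peak(4.9) == 10.0   # low starting rate: the 10% floor binds
assert severely_adverse_peak(7.0) == 11.0   # high starting rate: the increase binds
```

The floor matters precisely when the economy enters the scenario in good shape, which is what keeps the test severe late in an expansion.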

To forecast the losses from such a severe recession, banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss on a portfolio of loans as a function of the variables in the scenario. Banks often have a very large number of loans with which to estimate these models, especially in their consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might at best have data only for the relatively mild recession of 2001 and the severe recession of 2007–09.
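A stylized version of such a loss model can be sketched as follows: the probability of default is a logistic function of the scenario unemployment rate, and expected loss is PD times loss-given-default times exposure. The functional form and every coefficient here are illustrative assumptions, not an actual bank's model:

```python
from math import exp

def default_prob(unemployment_rate, intercept=-5.0, slope=0.35):
    """Stylized probability of default as a logistic function of the
    scenario unemployment rate. Coefficients are illustrative; in
    practice they would be estimated from the bank's loan-level data."""
    z = intercept + slope * unemployment_rate
    return 1.0 / (1.0 + exp(-z))

def expected_loss(exposure, unemployment_rate, lgd=0.45):
    """Expected loss = PD x loss-given-default x exposure at default."""
    return default_prob(unemployment_rate) * lgd * exposure

# A harsher scenario implies larger projected losses:
assert expected_loss(1e6, 10.0) > expected_loss(1e6, 5.0)
```

The data problem discussed above shows up in the coefficients: with only one or two recessions in the sample, the slope linking unemployment to defaults is estimated from very few stressed observations.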

Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.

Given the potential for models to underestimate losses when there are so few downturns to calibrate to, the stress-testing process allows humans to make judgmental changes (or overlays) to model estimates when those estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.

My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty around projected losses because of limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.

One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.
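As a minimal illustration of the Bayesian idea (not Jensen's actual methodology), a conjugate Beta-Binomial update shows how a prior informed by foreign experience can be combined with a small domestic sample. The prior parameters below are hypothetical:

```python
def beta_binomial_update(prior_a, prior_b, defaults, n_loans):
    """Update a Beta(a, b) prior on the default rate with observed
    defaults out of n_loans (Beta-Binomial conjugacy)."""
    return prior_a + defaults, prior_b + n_loans - defaults

# Hypothetical prior informed by foreign (e.g., Swedish) stress episodes,
# centered near a 5% default rate with modest weight (a + b = 40).
a, b = 2.0, 38.0
a, b = beta_binomial_update(a, b, defaults=12, n_loans=100)
posterior_mean = a / (a + b)   # pulled toward the 12% observed rate
assert 0.05 < posterior_mean < 0.12
```

The posterior mean lands between the prior and the observed default rate, and the full posterior distribution quantifies the remaining uncertainty rather than reporting a single point estimate.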

Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III, which was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one in which the historical losses in a loan portfolio supply the parameters of a loss distribution. While Basel III faces the same problem of limited loan loss data (so it almost surely underestimates some risks), those errors are likely to differ somewhat from those produced by the stress tests. Hence, using both measures should somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.
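As a rough sketch of this second methodology, the Vasicek single-risk-factor model underlying Basel's internal-ratings-based formulas maps a portfolio's default probability into a capital charge at a high confidence level. The fixed asset correlation and the 99.9th percentile used below are simplifications; the actual Basel III rules make the correlation a function of PD and asset class:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal CDF and inverse CDF

def basel_irb_capital(pd, lgd, rho=0.15, q=0.999):
    """Unexpected-loss capital charge per unit of exposure under the
    Vasicek single-risk-factor model: the stressed PD at quantile q,
    less the expected loss already provisioned for, times LGD."""
    stressed_pd = N.cdf((N.inv_cdf(pd) + rho**0.5 * N.inv_cdf(q))
                        / (1.0 - rho)**0.5)
    return lgd * (stressed_pd - pd)

k = basel_irb_capital(pd=0.01, lgd=0.45)
assert 0.0 < k < 0.45                      # bounded above by LGD
assert basel_irb_capital(0.05, 0.45) > k   # riskier loans require more capital
```

Because the historical PD and LGD inputs come from the same short loss histories, this formula inherits the limited-data problem, but its errors need not line up with those of a scenario-based stress test, which is the diversification argument made above.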

Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.

Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.

August 11, 2016 in Banking, Financial System, Regulation | Permalink


When looking at these short duration data sets on losses, you have to go back to the fundamentals of the situation.

Housing had a nearly 50-year series of only regional losses. The statistical analysts (which was everybody except the bond bankers) therefore assumed losses were highly unlikely.

Meanwhile, the fundamentals were stagnant wages, stagnant lifetime incomes, an increasing share of income going to housing and education, and the loss of retirement security. The fundamental analyst could see the losses were already there with certainty. The fundamental analyst, though, could not say when they would show up.

And when they did, all the risks correlated.

You guys should promote separating risk business from deposit taking, full stop.

Posted by: john | August 12, 2016 at 09:05 AM

