The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.

September 30, 2016

A Quick Pay Check: Wage Growth of Full-Time and Part-Time Workers

In the last macroblog post we introduced the new version of the nominal Wage Growth Tracker, which allows a look back as far as 1983. We have also produced various cuts of these data, comparable to the ones on the Wage Growth Tracker web page, to look at the wage dynamics of various types of workers. One of these cuts compares the median wage growth of people working full-time and part-time jobs. As we have highlighted previously, the median wage growth of part-time workers slowed by significantly more than that of full-time workers in the wake of the Great Recession. The extended time series allows us to look back farther to see whether this phenomenon was truly unique.

The following chart shows the extended full-time/part-time median wage growth time series at an annual frequency.


The chart shows that the median wage increase for part-time workers is generally lower than for full-time workers, with the average gap about 1 percentage point. The reason for the presence of a gap is a bit puzzling. Could it be that part-time workers have lower average productivity growth than full-time workers? It is true that a part-time worker in our data set is more likely to lack a college degree than a full-time worker, and the median wage level for part-time workers is lower than for full-time workers. But interestingly, a reasonably systematic wage growth gap still exists after controlling for differences in the education and age of workers. So even highly educated, prime-age part-time workers tend to have lower median wage growth than their full-time counterparts. If it's a productivity story, its subtext is not easily captured by observed differences in education and experience.

Changes in economic conditions might also be playing a role. The wage growth gap exceeded 2 percentage points in the early 1980s and again between 2011 and 2013, both periods of considerable excess slack in the labor market, as we recently discussed here. In fact, in each of 2011, 2012, and 2013, half of the part-time workers in our dataset experienced no increase in their rate of pay at all.

To explore this possibility further, it's useful to separate part-time workers into those who work part-time because of economic conditions (for example, because of slack work conditions at their employer or their inability to find full-time work) from those who work part-time for noneconomic reasons (for example, because they have family responsibilities or because they are also in school). The following chart shows the median wage growth for full-time, voluntary part-time, and involuntary part-time workers.


Admittedly, there are not that many observations on involuntary part-time workers in our data set. But it does appear that their median wage growth has tended to slow by more after economic downturns than that of those working part-time for a noneconomic reason—at least prior to the Great Recession. After the last recession, however, the wage growth gap was just about as large for both types of part-time workers. In that sense, the impact of the last recession on the median wage growth of regular part-time workers was quite unusual.

Since 2013, median wage growth for part-time workers has been rising, which is good news for those workers and consistent with the labor market becoming tighter. With the unemployment rate reasonably low, employers might have to worry a bit more about retaining and attracting part-time staff than they did a few years ago.


September 27, 2016

Back to the '80s, Courtesy of the Wage Growth Tracker

Things have been a wee bit quiet in macroblog land the last few weeks, chiefly because our time has been devoted to two exciting new projects. The first is a refresh of our labor force dynamics website, which will feature a nifty tool for looking at the main reasons behind changes in labor force participation for different age groups. More on that later.

The other project has been adding more history to our Wage Growth Tracker. The tracker's current time series starts in 1997. The chart below shows an extended version of the tracker that starts in 1983.


Recall that the Wage Growth Tracker depicts the median of the distribution of 12-month changes of matched nominal hourly earnings. In the extended time series, you'll notice two gaps, which resulted from the U.S. Census Bureau scrambling the identifiers in its Current Population Survey. For those two periods, you'll have to use your imagination and make some inferences.
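The tracker's core calculation is simple enough to sketch in a few lines. This is only an illustration of the matched-median idea, not our production code, and the worker identifiers and wages below are made up:

```python
import statistics

def wage_growth_tracker(wages_now, wages_year_ago):
    """Median 12-month wage growth among individuals observed in both periods.

    Each argument maps a (hypothetical) person identifier to an hourly wage.
    Only workers matched across the two periods enter the calculation.
    """
    growth = [
        100.0 * (wages_now[pid] / wages_year_ago[pid] - 1.0)
        for pid in wages_year_ago
        if pid in wages_now
    ]
    return statistics.median(growth)

# Three matched workers; a fourth appears only a year ago and is dropped
year_ago = {"a": 10.00, "b": 20.00, "c": 15.00, "d": 12.00}
now      = {"a": 10.40, "b": 20.40, "c": 15.60}
print(wage_growth_tracker(now, year_ago))  # median of roughly [4.0, 2.0, 4.0]
```

The matching requirement is exactly why the gaps mentioned above appear: when the Census Bureau scrambles its identifiers, individuals can no longer be linked across years.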

As we have emphasized previously, the Wage Growth Tracker is not a direct measure of the typical change in overall wage costs because it only looks at (more or less) continuously employed workers. But it should reflect the amount of excess slack in the labor market. This point is illustrated in the following chart, which compares the Wage Growth Tracker with the unemployment gap computed from the Congressional Budget Office's (CBO) estimate of the long-run natural rate of unemployment.


As the chart shows, our measure of nominal wage growth has historically tracked the cyclical movement in the unemployment rate gap estimate fairly well, at least since the mid-1980s. We think this feature is potentially important, because the true unemployment rate gap is very hard to know in real time and hence is subject to potentially large revision. For example, in real time, the unemployment rate was estimated to have fallen below the natural rate in the fourth quarter of 1994, but it is now thought to have not breached the natural rate until the first quarter of 1997—more than two years later. The Wage Growth Tracker is not subject to revision (although it is subject to a small amount of sampling uncertainty) and hence could be useful in evaluating the reliability of the unemployment rate gap estimate in real time.

This is also important from a monetary policy perspective if we are worried about the risk of the economy overheating. For example, President Rosengren of the Boston Fed described why he dissented at the most recent Federal Open Market Committee meeting in favor of a quarter-point increase in the target range for the federal funds rate. His dissent, he said, arose partly from his concern that the economy may overheat and drive unemployment below a level he believes is sustainable.

Currently, the CBO estimate of the unemployment rate gap looks like it is plateauing at close to zero. The fact that the Wage Growth Tracker for the third quarter slowed a bit is consistent with that. But it's only one quarter of data, and so we'll closely monitor the Wage Growth Tracker in the coming months to see what it suggests about the actual unemployment rate gap. We'll discuss what observations we make here.


September 08, 2016

Introducing the Atlanta Fed's Taylor Rule Utility

Simplicity isn't always a virtue, but when it comes to complex decision-making processes—for example, a central bank setting a policy rate—having simple benchmarks is often helpful. As students and observers of monetary policy well know, the common currency in the central banking world is the so-called "Taylor rule."

The Taylor rule is an equation introduced by John Taylor in a seminal 1993 paper that prescribes a value for the federal funds rate—the interest rate targeted by the Federal Open Market Committee (FOMC)—based on readings of inflation and the output gap. The output gap measures the percentage point difference between real gross domestic product (GDP) and an estimate of its trend or potential.

Since 1993, academics and policymakers have introduced and used many alternative versions of the rule. The alternative forms of the rule can supply policy prescriptions that differ significantly from Taylor's original rule, as the following chart illustrates.

Effective federal funds rate and prescriptions from alternative versions of the Taylor rule

The green line shows the policy prescription from a rule identical to the one in Taylor's paper, apart from some minor changes in the inflation and output gap measures. The red line uses an alternative, commonly used rule, derived from a 1999 paper by John Taylor, that gives the output gap twice the weight used in the Taylor (1993) rule. The red line also replaces the 2 percent equilibrium real rate used in Taylor's 1993 paper with an estimate of the natural real interest rate, called r*, from a paper by Thomas Laubach, the Federal Reserve Board's director of monetary affairs, and John Williams, president of the San Francisco Fed. Federal Reserve Chair Janet Yellen also considered this alternative estimate of r* in a 2015 speech.
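In symbols, the two plotted rules differ only in the output-gap weight and the real-rate term. Here is a minimal sketch (the function and parameter names are ours, and the r* value in the second call is purely illustrative, not the Laubach-Williams estimate):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, inflation_target=2.0,
                gap_weight=0.5):
    """Prescribed federal funds rate, in percent.

    Defaults reproduce Taylor (1993): a 2 percent equilibrium real rate,
    a 2 percent inflation target, and a 0.5 weight on the output gap.
    """
    return (r_star + inflation
            + 0.5 * (inflation - inflation_target)
            + gap_weight * output_gap)

# Taylor (1993): inflation at target, closed output gap -> 4 percent
print(taylor_rule(2.0, 0.0))  # 4.0

# Taylor (1999)-style variant: double gap weight, an (illustrative) lower r*
print(taylor_rule(2.0, -1.0, r_star=0.0, gap_weight=1.0))  # 1.0
```

With a negative output gap, the doubled gap weight and lower r* each pull the alternative rule's prescription well below the original's, which is exactly the wedge visible in the chart.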

Both rules use real-time data. As early as 2012, the Taylor (1993) rule prescribed liftoff, meaning a federal funds rate materially above the 0 to 0.25 percent target range the FOMC maintained from December 2008 to December 2015. The alternative rule did not prescribe a positive fed funds rate at any point between the end of the 2007–09 recession and this quarter. The third-quarter prescriptions incorporate nowcasts constructed as described here. Neither the nowcasts nor the Taylor rule prescriptions themselves necessarily reflect the outlook or views of the Federal Reserve Bank of Atlanta or its president.

Additional variables that get plugged into this simple policy rule can influence the rate prescription. To help you sort through the most common variations, we at the Atlanta Fed have created a Taylor Rule Utility. Our Taylor Rule Utility gives you a number of choices for the inflation measure, inflation target, the natural real interest rate, and the resource gap. Besides the Congressional Budget Office–based output gap, alternative resource gap choices include those based on a U-6 labor underutilization gap and the ZPOP ratio. The latter ratio, which Atlanta Fed President Dennis Lockhart mentioned in a November 2015 speech while addressing the Taylor rule, gauges underemployment by measuring the share of the civilian population working their desired number of hours.

Many of the indicator choices use real-time data. The utility also allows you to set your own weight for the resource gap and to choose whether the prescription puts any weight on the previous quarter's federal funds rate. The default choices of the Taylor Rule Utility coincide with the Taylor (1993) rule shown in the above chart. Other organizations have their own versions of a Taylor rule utility (one of the nicer ones is available on the Cleveland Fed's Simple Monetary Policy Rules web page), and you can find more information about how our utility compares with the Cleveland Fed's on our Frequently Asked Questions page.

Although the Taylor rule and its alternative versions are only simple benchmarks, they can be useful tools for evaluating the importance of particular indicators. For example, we see that the difference in the prescriptions of the two rules plotted above has narrowed in recent years as slack has diminished. Even if the output gap were completely closed, however, the current prescriptions of the rules would differ by nearly 2 percentage points because of the use of different measures of r*. We hope you find the Taylor Rule Utility a useful tool to provide insight into issues like these. We plan on adding further enhancements to the utility in the near future and welcome any comments or suggestions for improvements.

September 8, 2016 in Banking, Federal Reserve and Monetary Policy, Monetary Policy

August 15, 2016

Payroll Employment Growth: Strong Enough?

The U.S. Bureau of Labor Statistics' estimate of nonfarm payroll employment is the most closely watched indicator of overall employment growth in the U.S. economy. By this measure, employment increased by 255,000 in July, well above the three-month average of 190,000. Yet despite this outsized gain, the unemployment rate barely budged. What gives?

Well, for a start, there is no formal connection between the payroll employment data and the unemployment rate data. The employment data used to construct the unemployment rate come from the Current Population Survey (CPS), while the payroll employment data come from a separate survey of employers. However, it is possible to relate changes in the unemployment rate to the gap between the CPS and payroll measures of employment, as well as to changes in the labor force participation (LFP) rate and the growth of payroll employment relative to the population.

The following chart shows the contribution of each of these three factors to the monthly change in the unemployment rate during the last year.

Contributions to the 1-month change in the unemployment rate

A note about the chart: The CPS employment and population measures have been smoothed to account for annual population control adjustments. The smoothed employment data are available here. The method used to compute the contributions is available here.

The black line is the monthly change in the unemployment rate (unrounded). Each green segment of a bar is the change in the unemployment rate coming from the gap between population growth and payroll employment growth. Because payroll employment has generally been growing faster than the population, it has helped make the unemployment rate lower than it otherwise would have been.

But as the chart makes clear, the other two factors can also exert a significant influence on the direction of the unemployment rate. The labor force participation rate contribution (the red segments of the bars) and the contribution from the gap between the CPS and payroll employment measures (blue segments) can vary a lot from month to month, and these factors can swamp the payroll employment growth contribution.
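One way to formalize the three-way split is with a log-difference approximation to the identity u = 1 − E/(LFP × POP), where E is CPS employment and POP is the civilian population. The sketch below is a stylized version with invented numbers; the exact method linked in the chart note may differ:

```python
def unemployment_change_contributions(u0, payroll_g, pop_g, lfp_g, cps_gap_g):
    """Approximate contributions to the monthly change in the unemployment rate.

    From u = 1 - E_cps/(LFP * POP):
      du ~ -(1 - u0) * [(payroll_g - pop_g) + cps_gap_g - lfp_g],
    where the growth rates are log changes and cps_gap_g is CPS employment
    growth minus payroll employment growth.
    """
    scale = 1.0 - u0
    return {
        "payroll_vs_pop": -scale * (payroll_g - pop_g),  # faster payroll growth lowers u
        "cps_payroll_gap": -scale * cps_gap_g,           # CPS lagging payroll raises u
        "lfp": scale * lfp_g,                            # rising participation raises u
    }

# Invented month: payroll +0.20%, population +0.10%, LFP +0.16%, CPS emp. +0.13%
g_payroll, g_pop, g_lfp, g_cps = 0.0020, 0.0010, 0.0016, 0.0013
contrib = unemployment_change_contributions(0.05, g_payroll, g_pop, g_lfp,
                                            g_cps - g_payroll)
print({k: round(100 * v, 3) for k, v in contrib.items()})  # percentage points
```

In this made-up month, solid payroll growth pushes the unemployment rate down, but a rising LFP rate and a CPS measure lagging the payroll measure more than offset it, so the rate rises on net.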

So any assumption that strong payroll employment gains in any particular month will automatically lead to a decline in the unemployment rate could, in fact, be wrong. But over longer periods, the mapping is a bit clearer because it is effectively smoothing the month-to-month variation in the three factors. For example, the following chart shows the contribution of the three factors to 12-month changes in the unemployment rate from July 2012 to July 2013, from July 2013 to July 2014, and so on.

Contributions to the 12-month change in the unemployment rate

Gains in payroll employment relative to the population have helped pull the unemployment rate lower. Moreover, prior to the most recent 12 months, declines in the LFP rate put further downward pressure on the unemployment rate. Offsetting this pressure to varying degrees has been the fact that the CPS measure of employment has tended to increase more slowly than the payroll measure, making the decline in the unemployment rate smaller than it would have been otherwise. During the last 12 months, the change in the LFP rate turned positive on balance, meaning that the magnitude of the unemployment rate decline has been considerably less than implied by the relative strength of payroll employment growth.

Going forward, another strong payroll employment reading for August is certainly no guarantee of a corresponding decline in the unemployment rate. But as shown by my colleagues David Altig and Patrick Higgins in an earlier macroblog post, under a reasonable range of assumptions for the trend path of population growth, the LFP rate, and the gap between the CPS and payroll survey measures of employment, payroll growth averaging above 150,000 a month should be enough to cause the unemployment rate to continue declining.

August 15, 2016 in Employment, Labor Markets, Unemployment

August 11, 2016

Forecasting Loan Losses for Stress Tests

Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.

The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.

The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that the severely adverse scenario will have an unemployment increase of between 3 and 5 percentage points or a level of 10 percent overall. That statement observes that during the last half century, the United States has seen four severe recessions with that large of an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three of them.

To forecast the losses from such a severe recession, the banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss associated with a portfolio of loans as a function of the variables in the scenario. In estimating these models, banks often have a very large number of loans with which to estimate losses in their various portfolios, especially the consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might have at best data for only the relatively mild recession of 2001–02 and the severe recession of 2007–09.

Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.

With the potential for models to underestimate losses given there are so few downturns to calibrate to, the stress testing process allows humans to make judgmental changes (or overlays) to model estimates when the model estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.

My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty around projected losses because of limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.

One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.
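To give a flavor of how such updating works, here is a minimal conjugate (beta-binomial) sketch. It is our illustration with made-up numbers, not Jensen's actual model; in his spirit, the prior could encode experience from comparable foreign episodes:

```python
def update_loss_rate(prior_a, prior_b, defaults, loans):
    """Bayes' theorem with a Beta(prior_a, prior_b) prior on a portfolio's
    default rate, updated with `defaults` observed among `loans`.
    Returns the posterior parameters and posterior mean."""
    a = prior_a + defaults
    b = prior_b + loans - defaults
    return a, b, a / (a + b)

# Prior centered on a 5 percent default rate, Beta(5, 95), informed (say)
# by foreign stress experience; then observe 12 defaults among 100 loans.
a, b, mean = update_loss_rate(5, 95, 12, 100)
print(round(mean, 3))  # posterior mean sits between the 5% prior and 12% data
```

The full posterior distribution, not just its mean, is what lets banks and supervisors quantify the uncertainty around a loss forecast rather than report a single point estimate.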

Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III that was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one where the historical losses in a loan portfolio provide the parameters to a loss distribution. While Basel III faces the same problem of limited loan loss data—so it almost surely underestimates some risks—those errors are likely to be somewhat different from those produced by the stress tests. Hence, the use of both measures is likely to somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.

Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.

Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.

August 11, 2016 in Banking, Financial System, Regulation

July 29, 2016

Men at Work: Are We Seeing a Turnaround in Male Labor Force Participation?

A lot has been written about the long-run decline in the labor force participation (LFP) rate among prime-age men (usually defined as men between 25 and 54 years of age). For example, see here, here, here, and here for some perspectives.

On a not seasonally adjusted basis, the Bureau of Labor Statistics estimates that the LFP rate among prime-age males is down from 90.9 percent in the second quarter of 2007 to 88.6 percent in the second quarter of 2016—a decline of 2.3 percentage points, or around 1.4 million potential workers.

Many explanations reflecting preexisting structural trends have been posited for this decline. But how much of the decline also reflects cyclical effects and, in particular, cyclical effects that take a while to play out? We don't really know for sure. But one potentially useful approach is to look at the Census Bureau's Current Population Survey and the reasons people give for not wanting a job. These reasons include enrollment in an educational program (especially prevalent among young individuals), family or household responsibilities (especially among prime-age women), retirement (especially among older individuals), and poor health or disability (widespread). In addition, there are people of all ages who say they want a job but are not counted as unemployed. For example, they aren't currently available to work or haven't looked for work recently because they are discouraged about their job prospects.

To get some idea of the relative importance of these factors, the following chart shows how much each nonparticipation reason accounted for the total change in the LFP rate among prime-age males between 2012 and 2014 and between 2014 and 2016. The black bars show each period's total change in the LFP rate. The green bars are changes that helped push participation higher than it otherwise would have been, and the orange bars are changes that helped hold participation lower than it otherwise would have been.


A note on the chart: To construct the contributions derived from changes in nonparticipation rates, I held constant the age-specific population shares in the base period (2012 and 2014, respectively) in order to separate the effect of changes in nonparticipation from shifts in the age distribution.
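In stylized form, that fixed-share calculation looks like the sketch below; the age groups, reasons, and rates are all invented for illustration:

```python
def reason_contributions(base_shares, nonpart_start, nonpart_end):
    """Contribution of each nonparticipation reason to the change in the LFP
    rate, holding base-period age shares fixed.

    nonpart_*[age][reason] is the share of that age group not participating
    for that reason; a rise in nonparticipation lowers participation, hence
    the minus sign.
    """
    reasons = next(iter(nonpart_start.values())).keys()
    return {
        r: -sum(base_shares[a] * (nonpart_end[a][r] - nonpart_start[a][r])
                for a in base_shares)
        for r in reasons
    }

# Two invented age groups, two reasons
shares = {"25-39": 0.5, "40-54": 0.5}
start = {"25-39": {"disability": 0.040, "school": 0.020},
         "40-54": {"disability": 0.060, "school": 0.005}}
end   = {"25-39": {"disability": 0.036, "school": 0.022},
         "40-54": {"disability": 0.056, "school": 0.005}}
print(reason_contributions(shares, start, end))
```

In this toy example, falling disability-related nonparticipation adds to participation while rising school enrollment subtracts from it, which is the kind of offsetting pattern the green and orange bars capture.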

Notice that the decline in the prime-age male LFP rate between 2012 and 2014 has essentially fully reversed itself over the last two years (from a decline of 0.53 percentage points to an increase of 0.55 percentage points). The positive "want a job" contribution in both periods clearly reflects a cyclical recovery in labor market conditions. But the most striking change between 2012–14 and 2014–16 is the complete reversal of the large drag attributable to poor health and disability. Other things equal, if nonparticipation resulting from poor health and disability had stayed at its 2012 level, prime-age male participation in 2014 would have declined only 0.10 percentage points. If nonparticipation due to poor health and disability had stayed at its 2014 level, prime-age male participation in 2016 would have increased only 0.14 percentage points.

The incidence of self-reported nonparticipation among prime-age men because of poor health or disability has been declining recently. According to the Current Population Survey data, this reason represented 5.4 percent of the prime-age male population in the second quarter of 2016. Although this is still 0.7 percentage points higher than in 2007, it is 0.3 percentage points lower than in 2014. Some of this turnaround could be the result of changes in the composition of the prime-age population. But not much. Around 90 percent of the LFP rate change because of poor health and disability is due to age-specific nonparticipation rather than shifts in the age distribution, suggesting that some of the turnaround in the incidence of people saying they are "too sick" to work is a cyclical response to strengthening labor market conditions. We've yet to see how much longer this turnaround could continue, but it's an encouraging development.

For those interested in exploring the contributions to the changes in the LFP rate by gender and age over different time periods, we're currently developing an interactive tool for the Atlanta Fed's website—stay tuned!

July 29, 2016 in Employment, Labor Markets

July 18, 2016

What’s Moving the Market’s Views on the Path of Short-Term Rates?

As today's previous macroblog post highlighted, it seems that the United Kingdom's vote to leave the European Union—commonly known as the Brexit—got the attention of business decision makers and made their business outlook more uncertain.

How might this uncertainty be weighing on financial market assessments of the future path for Fed policy? Several recent articles have opined, often citing the CME Group's popular FedWatch tool, that the Brexit vote increased the probability that the Federal Open Market Committee (FOMC) might reverse course and lower its target for the fed funds rate. For instance, the Wall Street Journal reported on June 28 that fed funds futures contracts implied a 15 percent probability that rates would increase 25 basis points and an 8 percent probability of a 25 basis point decrease by December's meeting. Prior to the Brexit vote, the probabilities of a 25 basis point increase and decrease by December's meeting were roughly 50 percent and 0 percent, respectively.

One limitation of using fed funds futures to assess market participant views is that this method is restricted to calculating the probability of a rate change by a fixed number of basis points. But what if we want to consider a broader set of possibilities for FOMC rate decisions? We could look at options on fed funds futures contracts to infer these probabilities. However, since the financial crisis their availability has been quite limited. Instead, we use options on Eurodollar futures contracts.

Eurodollars are deposits denominated in U.S. dollars but held in foreign banks or in the foreign branches of U.S. banks. The rate on these deposits is the (U.S. dollar) London Interbank Offered Rate (LIBOR). Because Eurodollar deposits are regulated similarly to fed funds and can be used to meet reserve requirements, financial institutions often view Eurodollars as close substitutes for fed funds. Although a number of factors can drive a wedge between otherwise identical fed funds and Eurodollar transactions, arbitrage and competitive forces tend to keep these differences relatively small.

However, using options on Eurodollar futures is not without its own challenges. Three-month Eurodollar futures can be thought of as the sum of an average three-month expected overnight rate (the item of specific interest) plus a term premium. Each possible target range for fed funds is associated with its own average expected overnight rate, and there may be some slippage between these two. Additionally, although we can use swaps market data to estimate the expected term premium, uncertainty around this expectation can blur the picture somewhat and make it difficult to identify specific target ranges, especially as we look farther out into the future.

Despite these challenges, we feel that options on Eurodollar futures can provide a complementary and more detailed view on market expectations than is provided by fed funds futures data alone.

Our approach is to use the Eurodollar futures option data to construct an entire probability distribution of the market's assessment of future LIBOR rates. The details of our approach can be found here. Importantly, our approach does not assume that the distribution will have a typical bell shape. Using a flexible approach allows multiple peaks with different heights that can change dynamically in response to market news.
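The workhorse behind exercises like this is the Breeden-Litzenberger result: the risk-neutral density is, up to discounting, the second derivative of the call price with respect to the strike. A bare-bones finite-difference sketch follows; it is our illustration only, with no claim to match the estimation method linked above:

```python
def implied_density(strikes, call_prices):
    """Risk-neutral density via the butterfly-spread (second-difference)
    approximation to d^2C/dK^2. Assumes an evenly spaced strike grid and
    ignores discounting; endpoints of the grid are dropped."""
    h = strikes[1] - strikes[0]
    return [(call_prices[i - 1] - 2 * call_prices[i] + call_prices[i + 1]) / h**2
            for i in range(1, len(call_prices) - 1)]

# Sanity check on a toy case: if the underlying rate were uniform on [0, 1],
# then C(K) = (1 - K)^2 / 2 and the recovered density should be flat at 1.
strikes = [0.1 * k for k in range(11)]
calls = [(1 - K) ** 2 / 2 for K in strikes]
print(implied_density(strikes, calls))
```

In practice, densities recovered this way from Eurodollar options need not be bell-shaped at all, which is what lets the multiple peaks described below emerge from the data.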

The results of this approach are illustrated in the following two charts for contracts expiring in September (left-hand chart) and December (right-hand chart) of this year for the day before and the day after Brexit. With these distributions in hand, we can calculate the implied probabilities of a rate change consistent with what you would get if you simply used fed funds futures. However, we think that specific features of the distributions help provide a richer story about how the market is processing incoming information.

Prior to the Brexit vote (depicted by the green curve), market participants were largely split in their assessment on a rate increase through September's FOMC meeting, as indicated by the two similarly sized modes, or peaks, of the distribution. Post-Brexit (depicted by the blue curve), most weight was given to no change, but with a non-negligible probability of a rate cut (the mode on the left between 0 and 25 basis points). For December's FOMC meeting, market participants shifted their views away from the likelihood of one additional increase in the fed funds target toward the possibility that the FOMC leaves rates where they are currently.

The market turmoil immediately following the vote subsided somewhat over the subsequent days. The next two charts indicate that by July 7, market participants seem to have backed away from the assessment that a rate cut may occur this year, evidenced by the disappearance of the mode between 0 and 25 basis points (shown by the green curve). And following the release of the June jobs report from the U.S. Bureau of Labor Statistics on July 8, market participants increased their assessment of the likelihood of a rate hike by year end, though not by much (see the blue curve). However, the labor report was, by itself, not enough to shift the market view that the fed funds target is unlikely to change over the near future.

One other feature of our approach is that comparing the heights of the modes across contracts allows us to assess the market's relative certainty of particular outcomes. For instance, though the market continues to put the highest weight on "no move" for both September and December, we can see that the market is much less certain regarding what will happen by December relative to September.

The greater range of possible rates for December suggests that there is still considerable market uncertainty about the path of rates six months out and farther. And, as we saw with the labor report release, incoming data can move these distributions around as market participants assess the impact on future FOMC deliberations.

July 18, 2016 in Europe, Interest Rates, Monetary Policy | Permalink | Comments (1)

Lockhart Casts a Line into the Murky Waters of Uncertainty

Is uncertainty weighing down business investment? This recent article makes the case.

Uncertainty as an obstacle to business decision making and perhaps even a "propagation mechanism" for business cycles is an idea that has been generating a lot of support in economic research in recent years. Our friend Nick Bloom has a nice summary of that work here.

Last week, the boss here at the Atlanta Fed gave the trout in the Snake River a break and made some observations on the economy to the Rocky Mountain Economic Summit, casting a line in the direction of economic uncertainties. Among his remarks, he noted that:

The minutes of the June FOMC [Federal Open Market Committee] meeting clearly pointed to uncertainty about employment momentum and the outcome of the vote in Britain as factors in the Committee's decision to keep policy unchanged. I supported that decision and gave weight to those two uncertainties in my thinking.

At the same time, I viewed both the implications of the June jobs report and the outcome of the Brexit vote as uncertainties with some resolution over a short time horizon. We've seen, now, that the vote outcome may be followed by a long tail of uncertainty of quite a different character.

But he followed that with something of a caution…

If uncertainty is a real causative factor in economic slowdowns, it needs to be better understood. Policymaking would be aided by better measurement tools. For example, it would help me as a policymaker if we had a firmer grip on the various channels through which uncertainty affects decision-making of economic actors.

I have been thinking about the different kinds of uncertainty we face. Often we policymakers grapple with uncertainty associated with discrete events. The passage of the event to a great extent resolves the uncertainty. The outcome of the Brexit referendum would be known by June 24. The interpretation of the May employment report would come clear, or clearer, with the arrival of the June employment report on July 8. I would contrast these examples of short-term, self-resolving uncertainty with long-term, persistent, chronic uncertainty such as that brought on by the Brexit referendum outcome.

As President Lockhart indicated in his speech, the Federal Reserve Bank of Atlanta conducts business surveys that attempt to measure the uncertainties that businesses face. From July 4 through July 8, we had a survey in the field with a question on how the Brexit referendum was influencing business decisions.

We asked firms to indicate how the outcome of the Brexit vote affected their sales growth outlook. Respondents could select a range of sentiments from "much more certain" to "much more uncertain."

Responses came from 244 firms representing a broad range of sectors and firm sizes, with roughly one-third indicating their sales growth outlook was "somewhat" or "much" more uncertain as a result of the vote (see the chart). Those noting heightened uncertainty were not concentrated in any one sector or firm-size category but represented a rather diverse group.

Chart: Which of the following best describes how the outcome of the recent referendum in Great Britain (so-called Brexit) has affected your sales growth outlook?

As President Lockhart noted in his speech, "[w]e had a spirited internal discussion of whether one-third is a big number or not-so-big." Ultimately, we decided that uncovering how these firms planned to act in light of their elevated uncertainty was the important focus.

In an open-ended, follow-up question, we then asked those whose sales growth outlook was more uncertain how their plans might change. We found that the most prevalent changes in planning were a reduction in capital spending and hiring. Many firms mentioned these two topics in tandem, as this rather succinct quote illustrates: "Slower hiring and lower capital spending." Our survey data, then, provide some support for the idea that uncertainties associated with Brexit were, in fact, weighing on firm investment and labor decisions.

Elevated measures of financial market and economic policy uncertainty immediately after the Brexit vote have abated somewhat over subsequent days. Once the "waters clear," as our boss would say, perhaps this will be the case for firms as well.

July 18, 2016 in Business Inflation Expectations, Economic conditions, Economic Growth and Development, Europe | Permalink | Comments (0)

July 15, 2016

How Will Employers Respond to New Overtime Regulations?

As of December 1, 2016, employers will face expanded coverage of overtime regulations. Most hourly workers are already, and will continue to be, eligible to receive overtime pay for work over 40 hours a week. However, under the new rules, most salaried workers making less than $47,476 ($22.83 per hour for a full-time, full-year worker) will be eligible for overtime pay. Currently, the salary threshold for overtime eligibility is $23,660, or $11.38 per hour.
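The per-hour figures quoted above follow from dividing annual salary by the hours of a full-time, full-year schedule, as this quick check illustrates:

```python
# Hourly equivalents of the overtime salary thresholds:
# annual salary divided by 2,080 hours (40 hours x 52 weeks).
FULL_TIME_HOURS = 40 * 52  # 2,080 hours per year

new_threshold = 47_476 / FULL_TIME_HOURS  # 22.825, reported as $22.83
old_threshold = 23_660 / FULL_TIME_HOURS  # 11.375, reported as $11.38
print(new_threshold, old_threshold)
```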

The Labor Department estimates that the new rule would currently apply to about 4.2 million salaried workers who earn above the old threshold but below the new one. But how many workers are actually affected by the new rule and what happens to the overall demand for labor will depend a lot on how employers respond.

At this stage, it's not clear just how employers will respond. But based on our conversations with local businesses, employers seem to be considering several options for workers whom the new rule would cover. These include:

  • Keeping their salary the same but monitoring and paying for the overtime hours worked.
  • Increasing their salary to just above the threshold to avoid paying overtime.
  • Splitting the hours worked for the job across more people, possibly by hiring additional staff to work the overtime hours.
  • Converting salaried employees to hourly and reducing their base hourly rate so that their total pay will remain the same as under their current salary.
  • Curtailing certain business activities, such as networking and training activities, that might occur outside of the standard eight-hour day.
  • Reducing staff levels elsewhere in the business and/or cutting other employee expenses to offset the increased cost of overtime.

The first two responses outlined above will result in additional employee costs. But trying to avoid these higher costs could itself prove to be expensive. For example, hiring additional workers to cover the overtime comes with fixed staffing costs—including state and federal unemployment insurance tax on new workers, and perhaps benefits—not to mention any hiring and training costs involved in recruiting new workers. In addition, there is a risk that splitting a job between multiple workers or hiring less experienced workers to cover the extra hours will reduce productivity. Also, staff morale could suffer as a result of any actions perceived as infringing on employee rights or status.
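The arithmetic behind the fourth option above (converting salaried employees to hourly at a reduced base rate) can be made concrete. This is an illustrative sketch with made-up figures, not a description of any particular employer's plan: the base rate is chosen so that 40 straight-time hours plus time-and-a-half overtime reproduces the current weekly salary.

```python
# Base hourly rate that leaves total weekly pay unchanged once overtime
# must be paid at 1.5x for hours beyond 40. Salary and hours are illustrative.

def equivalent_base_rate(annual_salary, weekly_hours):
    """Hourly base rate that keeps weekly pay equal to the current salary."""
    weekly_pay = annual_salary / 52
    paid_hours = 40 + 1.5 * (weekly_hours - 40)  # overtime hours count at time-and-a-half
    return weekly_pay / paid_hours

# A salaried employee at $40,000 who typically works 50 hours a week:
rate = equivalent_base_rate(40_000, 50)
straight_time = 40_000 / 52 / 50  # what the same hours imply with no overtime premium
print(f"base rate: ${rate:.2f}/hr vs. ${straight_time:.2f}/hr straight-time")
```

The gap between the two rates is one reason such a conversion could strain morale: the nominal hourly rate falls even though total pay is designed to stay the same.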

These new rules have many dimensions and potential implications (see, for example, here, here, and here for some discussion), and getting a handle on the effects is complicated by the fact that employer responses will likely differ across industries and possibly even across jobs within a firm. Hopefully, a somewhat clearer picture of the ramifications of the new overtime rule will begin to emerge as the time of implementation gets nearer.

July 15, 2016 | Permalink | Comments (0)

How Good Is The Employment Trend? Decide for Yourself

The post-announcement commentary on last Friday's June employment report strikes us as about right: Not as spectacular as the 287,000 number the Bureau of Labor Statistics (BLS) reported for the month, but much better than the worst of our fears.

From the Wall Street Journal's wrap-up of economist reaction, here's Joseph Brusuelas:

The 147,000 three-month average is a fair representation of what an economy at full employment looks like late in the U.S. business cycle. We anticipate that as the business cycle enters the final innings of the cyclical expansion that monthly job growth will slow towards 100,000, which represents the number necessary to stabilize the unemployment rate, which climbed to 4.9% in June due to an increase of 417,000 individuals that entered the workforce.

The consensus opinion is that observers should focus less on the monthly number and more on the three-month average, a vantage point we certainly endorse. We also think the reference point of the "number necessary to stabilize the unemployment rate" is the right way to decide whether a number like 147,000 net job gains is strong or not so strong.

The 100,000 unemployment-stabilizing job-gains statistic seems reasonable to us, but the average and median estimate from an April Wall Street Journal survey pegged the same statistic at 145,000. The three-month average job gain is comfortably above the former estimate but not the latter.

Where you stand on the number of job gains required to stabilize the unemployment rate is determined by your assumptions about the pace of civilian population growth (ages 16 and above), the labor force participation rate (LFPR), and the relationship between the payroll employment numbers and the comparable household survey statistic (from whence the unemployment rate is derived). Of course, you can always go to the Atlanta Fed's very own Jobs Calculator and input your assumptions yourself. But if you are like us, you may be more inclined to think in terms of a range of plausible numbers.

Here's our take on what some reasonable bounds on these assumptions might look like.

With respect to population growth, we assume a baseline growth rate equal to the same 1.0 percent annual rate that it has grown over the past year—after accounting for the artificially large population increase of 461,000 in January resulting from the BLS incorporating updated population estimates from the U.S. Census Bureau—with high and low growth alternatives of plus and minus one-tenth of a percentage point.

Second, our baseline for the LFPR is a decline of 0.226 percentage points per year, essentially the impact that we would attribute to age- and sex-related demographic changes over the past two years. Our low-side alternative assumption is a larger decline of 0.386 percentage points per year in the LFPR, which adds in the average decline in the participation rate since February 2008 not due to demographic changes. Our high-side assumption is that the LFPR remains at its current level.

Finally, we note that the ratio of employment measured by the BLS payroll survey to employment measured by the household survey has been drifting up for several years. We have chosen a baseline assumption equal to the trend in this ratio since August 2005, and a high-side assumption chooses the steeper trajectory realized since February 2008. Since both August 2005 and February 2008, the unemployment rate has been unchanged, on balance.

The three scenarios for each assumption, in all combinations, yield 27 different implications for the number of payroll jobs required to maintain the unemployment rate at its current level (see the table).
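The mechanics are the same as the Jobs Calculator's: project the population, participation rate, and payroll-to-household-employment ratio forward a year, hold the unemployment rate fixed, and back out the required monthly payroll gain. The sketch below uses approximate mid-2016 starting levels and, since the post does not report the ratio trends numerically, our own illustrative guesses for the ratio assumptions; it reproduces the flavor of the 27-scenario table rather than its exact figures.

```python
# Back-of-the-envelope jobs-to-stabilize-unemployment calculation.
# Starting levels are approximate mid-2016 values; ratio trends are illustrative.

POP = 253.4e6   # civilian noninstitutional population, ages 16+, approx.
LFPR = 0.627    # labor force participation rate, approx.
U = 0.049       # unemployment rate held fixed
RATIO = 0.954   # payroll employment / household employment, approx.

def jobs_to_stabilize(pop_growth, lfpr_change, ratio_change):
    """Monthly payroll gain that keeps the unemployment rate at U over one year."""
    payroll_now = POP * LFPR * (1 - U) * RATIO
    payroll_next = (POP * (1 + pop_growth)) * (LFPR + lfpr_change) \
        * (1 - U) * (RATIO + ratio_change)
    return (payroll_next - payroll_now) / 12

pop_scenarios = [0.009, 0.010, 0.011]       # 1.0% baseline, +/- 0.1 pp
lfpr_scenarios = [-0.00386, -0.00226, 0.0]  # low, baseline, high (pp per year)
ratio_scenarios = [0.0, 0.0005, 0.0015]     # illustrative annual drift assumptions

results = sorted(
    jobs_to_stabilize(g, d, r)
    for g in pop_scenarios
    for d in lfpr_scenarios
    for r in ratio_scenarios
)
baseline = jobs_to_stabilize(0.010, -0.00226, 0.0005)
print(f"range: {results[0] / 1e3:.0f}k to {results[-1] / 1e3:.0f}k jobs per month")
print(f"baseline: {baseline / 1e3:.0f}k jobs per month")
```

Swapping in your own starting levels and trend assumptions shifts the whole range, which is exactly the point of the exercise: the "right" breakeven number depends on inputs that reasonable people can disagree about.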

These calculations generate a range of about 40,000 jobs per month to about 140,000 jobs per month. Our baseline assumptions suggest the unemployment rate would stabilize at payroll gains of about 80,000 per month, making the roughly 150,000 monthly average seen during the past quarter of a year look pretty good.

But we're not here to convince you of that today. You've got the numbers above. As we said at the outset, you can decide for yourself.

July 15, 2016 | Permalink | Comments (0)
