May 03, 2013
Building a Better Jobs Calculator: Choose Your Own Payroll/Household Employment Ratio
To provide even greater flexibility, the Federal Reserve Bank of Atlanta's Jobs Calculator has been enhanced to allow the user to adjust another statistic used in the calculations: the ratio of payroll to household employment, the component that links the target unemployment rate to the required payroll employment growth.
The fact that estimates of payroll and household employment numbers reported by the U.S. Bureau of Labor Statistics (BLS) differ each month received a lot of attention in the fall of 2012 when, in October, the BLS reported a whopping 0.3 percentage point drop in the unemployment rate, accompanied by a rather tepid growth of 114,000 jobs in payroll employment. The culprit in that apparent incongruity is that the Household Survey (from which we get the unemployment rate) reported a gain of 873,000 jobs. That particular employment report (and its divergent statistics) received extra attention since it was the last employment report before the November 2012 election.
As Atlanta Fed Research Director Dave Altig pointed out at the time (in this blog post) and as others discussed (here and here), the two most important measures of labor market conditions come from two different surveys—the Establishment Survey, which produces the payroll employment number from the Current Employment Statistics (CES) program, and the Household Survey, which produces the unemployment rate from the Current Population Survey (CPS). Both surveys claim to estimate the number of jobs in the economy. However, the employment numbers they produce are different for several reasons, detailed in one of the Jobs Calculator's FAQs.
The good news is that even though there may be a wide discrepancy in the change in employment reported by the two surveys in any particular month (as we saw in October 2012), any one-month divergence does not persist. In other words, the two employment series closely track each other.
This is good news for the Jobs Calculator, since a conversion needs to be made between the CPS employment implied by the target unemployment rate entered into the Jobs Calculator and the average monthly change in payroll employment (CES) needed to achieve the target unemployment rate. Since the two series closely track each other, wide deviations in month-to-month reported growth numbers will not severely affect the ability of the Jobs Calculator to make longer-term projections (within the limits of the other assumptions of the calculator). (In fact, unanticipated changes in the labor force participation rate are potentially much more problematic for longer-term projections than any variation in the conversion rate between CPS and CES employment numbers.)
The Jobs Calculator uses the average ratio of CES/CPS employment over the previous 12 months as the default conversion factor and now allows the user to see what happens if that ratio were to be different.
The following example illustrates how innocuous that conversion factor is.
Suppose the goal is to attain a 6.5 percent unemployment rate in two years. Entering 6.5 in the unemployment rate target box and 24 in the months box (the number of months you want to take to get there) yields 164,917 as the average monthly change in payroll employment needed to achieve that goal (holding everything else constant).
Next, go down to the new line showing, "Average monthly CES/CPS employment ratio." You'll see the current default value for the ratio is 0.940. Click on the chart box on the far right of that line. You'll see that since 1980, that ratio has ranged from a low of just under 0.900 in about 1984 to a high of 0.969 just before 2000. Close the box.
Now, enter the low ratio number of 0.900 in the employment ratio box. At that low ratio, only 157,899 payroll jobs are needed each month to achieve your 6.5 percent unemployment rate in two years.
Next, enter the high ratio number of 0.969 in the employment ratio box. At that high ratio, 170,005 payroll jobs are needed each month to achieve your goal.
In other words, even at the highest and lowest ratio values seen since 1980, the estimate of monthly payroll jobs needed to reach a 6.5 percent unemployment rate in two years moves by only about 5,000–7,000 jobs from the default.
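If you want to see the mechanics behind that sensitivity check, here is a minimal Python sketch. As the numbers above imply, the ratio is applied to the monthly change in household employment (164,917 × 0.900/0.940 ≈ 157,900, for instance). The function name, starting values, and labor force growth rate below are our own illustrative assumptions, so the output approximates, rather than reproduces, the calculator's figures.

```python
def required_monthly_payroll_gain(target_urate, months, labor_force,
                                  cps_employment, ces_cps_ratio,
                                  lf_growth=0.00085):
    """Average monthly CES payroll gain implied by an unemployment target.

    The target pins down household (CPS) employment; the CES/CPS ratio
    converts the implied monthly CPS gain into payroll (CES) jobs.
    The default lf_growth is our assumption (roughly 1 percent a year).
    """
    lf_future = labor_force * (1 + lf_growth) ** months
    cps_target = (1 - target_urate) * lf_future   # CPS employment at target
    monthly_cps_gain = (cps_target - cps_employment) / months
    return ces_cps_ratio * monthly_cps_gain

# Round numbers in the spirit of spring-2013 data (illustrative only):
LF, CPS_EMP = 155_000_000, 143_300_000
for ratio in (0.900, 0.940, 0.969):
    gain = required_monthly_payroll_gain(0.065, 24, LF, CPS_EMP, ratio)
    print(f"CES/CPS ratio {ratio:.3f}: {gain:,.0f} payroll jobs per month")
```

Even with these rough inputs, sweeping the ratio across its post-1980 range moves the answer by only about 13,000 jobs per month, which echoes the calculator's own modest spread.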
The bottom line? When it comes to factors that can derail a longer-term projection of the number of jobs needed to attain a specific unemployment rate in a given period of time, the degree to which household employment estimates deviate from payroll employment estimates is just not that important, nor are monthly discrepancies in these series’ reported growth, since those discrepancies wash out of the longer-term trends. And there you have the reason we added this flexibility to the Jobs Calculator: to allow users to see this for themselves.
By Julie Hotchkiss, research economist and policy adviser in the research department of the Atlanta Fed
April 22, 2013
Too Big to Fail: Not Easily Resolved
As Fed Chairman Ben Bernanke has indicated, too-big-to-fail (TBTF) remains a major issue that is not solved, but “there’s a lot of work in train.” In particular, he pointed to efforts to institute Basel III capital standards and the orderly liquidation authority in Dodd-Frank. The capital standards seek to lower the probability of insolvency in times of financial stress, while the liquidation authority attempts to create a credible mechanism to wind down large institutions if necessary. The Atlanta Fed’s flagship Financial Markets Conference (FMC) recently addressed various issues related to both of these regulatory efforts.
The Basel capital standards are a series of international agreements on capital requirements reached by the Basel Committee on Banking Supervision. These requirements are referred to as “risk-weighted” because they tie the required amount of bank capital to an estimate of the overall riskiness of each bank’s portfolio. Put simply, riskier banks need to hold more capital under this system.
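To make the risk-weighting idea concrete, here is a stylized calculation of risk-weighted assets and a capital requirement. The exposures and weight buckets below are illustrative, roughly in the spirit of Basel I's handful of categories, and are not quoted from any accord's actual schedule.

```python
# Stylized risk-weighted capital calculation (illustrative numbers only)
portfolio = [
    # (exposure in $ millions, risk weight)
    (500, 0.00),   # e.g., government securities
    (300, 0.20),   # e.g., claims on other banks
    (400, 0.50),   # e.g., residential mortgages
    (800, 1.00),   # e.g., corporate loans
]

risk_weighted_assets = sum(exposure * weight for exposure, weight in portfolio)
required_capital = 0.08 * risk_weighted_assets  # 8 percent minimum ratio

print(f"Total exposure:       ${sum(e for e, _ in portfolio):,.0f}M")
print(f"Risk-weighted assets: ${risk_weighted_assets:,.0f}M")
print(f"Required capital:     ${required_capital:,.0f}M")
```

Shifting the same total exposure toward lower-weight buckets shrinks the requirement, which is exactly the portfolio "optimization" incentive discussed later in this post.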
The first iteration of the Basel requirements, known as Basel I, required only 30 pages of regulation. But over time, banks adjusted their portfolios in response to the relatively simple risk measures in Basel I, and these measures became insufficient to characterize bank risk. The Basel Committee then shifted to a more complex system called Basel II, which allows the most sophisticated banks to estimate their own internal risk models subject to supervisory approval and use these models to calculate their required capital. After the financial crisis, supervisors concluded that Basel II did not require enough capital for certain types of transactions and agreed that a revised version called Basel III should be implemented.
At the FMC, Andrew Haldane from the Bank of England gave a fascinating recap of the Basel capital standards as a part of a broader discussion on the merits of complex regulation. His calculations show that the Basel accords have become vastly more complex, with the number of risk weights applied to bank positions increasing from only five in the Basel I standards to more than 200,000 in the current Basel III standards.
Haldane argued that this increase in complexity and reliance on banks’ internal risk models has unfortunately not resulted in a fair or credible system of capital regulation. He pointed to supervisory studies revealing wide disparities across banks in their estimated capital requirements for a hypothetical common portfolio. Further, Haldane pointed to a survey of investors by Barclays Capital in 2012 showing, not surprisingly, that investors do not put a great deal of trust in the Basel weightings.
So is the problem merely that the Basel accords have taken the wrong technical approach to risk measurement? The conclusion of an FMC panel on risk measurement is: not necessarily. The real problem is that estimating a bank’s losses in unlikely but not implausible circumstances is at least as much an art as it is a science.
Til Schuermann of Oliver Wyman gave several answers to the question “Why is risk management so hard?” including the fact that we (fortunately) don’t observe enough bad events to be able to make good estimates of how big the losses could become. As a result, he said, much of what we think we know from observations in good times is wrong when big problems hit: we estimate the wrong model parameters, use the wrong statistical distributions, and don’t take account of deteriorating relationships and negative feedback loops.
David Rowe of David M. Rowe Risk Advisory gave an example of why crisis times are different. He argued that the large financial firms can absorb some of the volatility in asset prices and trading volumes in normal times, making the financial system appear more stable. However, during crises, the large movements in asset prices can swamp even these large players. Without their shock absorption, all of the volatility passes through to the rest of the financial system.
The problems with risk measurement and management, however, go beyond the technical and statistical problems. The continued existence of TBTF means that the people and institutions that are best placed to measure risk—banks and their investors—have far less incentive to get it right than they should. Indeed, with TBTF, risk-based capital requirements can be little more than costly constraints to be avoided to the maximum extent possible, such as by “optimizing” model estimates and portfolios to reduce measured risk under Basel II and III. However, if a credible resolution mechanism existed and failure was a realistic threat, then following the intent of bank regulations would become more consistent with the banks’ self-interest, less costly, and sometimes even nonbinding.
Progress on creating such a mechanism under Dodd-Frank has been steady, if slow. Arthur Murton of the Federal Deposit Insurance Corporation (FDIC) presented, as a part of a TBTF panel, a comprehensive update on the FDIC’s planning process for making the agency’s new Orderly Liquidation Authority functional. The FDIC’s plan for resolving systemically important nonbank financial firms (including the parent holding company of large banks) is to write off the parent company’s equity holders and then use its senior and subordinated debt to absorb any remaining losses and recapitalize the parent. The solvent operating subsidiaries of the failed firm would continue in normal operation.
Importantly, though, the FDIC may exercise its new power only if both the Treasury and Federal Reserve agree that putting a firm that is in default or in danger of default into judicial bankruptcy would have seriously adverse effects on U.S. financial stability. And this raises a key question: why isn’t bankruptcy a reasonable option for these firms?
Keynote speaker John Taylor and TBTF session panelist Kenneth Scott—both Stanford professors—argued that in fact bankruptcy is a reasonable option, or could be, with some changes. They maintain that creditors could better predict the outcome of judicial bankruptcy than FDIC-administered resolution. And predictability of outcomes is key for any mechanism that seeks to resolve financial firms with as little damage as possible to the broader financial system.
Unfortunately, some of the discussion during the TBTF panel also made it apparent that Chairman Bernanke is right: TBTF has not been solved. The TBTF panel discussed several major unresolved obstacles, including the complications of resolving globally active financial firms with substantial operations outside the United States (and hence outside the control of both the FDIC and the U.S. bankruptcy courts) and the problem of dealing with many failing systemically important financial institutions at the same time, as is likely to occur in a crisis period. (A further commentary on these two obstacles is available in an earlier edition of the Atlanta Fed’s Notes from the Vault.)
Thus, the Atlanta Fed’s recent FMC highlighted both the importance of ending TBTF and the difficulty of doing so. The Federal Reserve continues to work with the FDIC to address the remaining problems. But until TBTF is a “solved” problem, what to do about these financial firms should and will remain a front-burner issue in policy circles.
By Paula Tkac, vice president and senior economist, and
Larry Wall, director of the Center for Financial Innovation and Stability, both in the Atlanta Fed’s research department
April 16, 2013
Improvement in the Outlook? The BIE Panel Thinks So
Earlier this month, Dennis Lockhart, the Atlanta Fed’s top guy, gave his assessment of the economy and monetary policy to the Kiwanis Club of Birmingham, Alabama. Here’s the essential takeaway:
There are encouraging developments in the economy, to be sure, but the evidence of sustainable momentum that will deliver “substantial improvement in the outlook for the labor market” is not yet conclusive. ... How will I, as one policymaker, determine that the balance has shifted and the time for a policy change has come? Well, one key consideration is the array of risks to the economic outlook and my degree of confidence in the outlook.
To help the boss assess the risks to the outlook, we reached out to our Business Inflation Expectations (BIE) panel to get a sense of how they view the outlook for their businesses and, notably, how they assess the risks to that outlook. Specifically, we asked:
Projecting ahead, to the best of your ability, please assign a percent likelihood to the following changes to UNIT SALES LEVELS over the next 12 months.
The table below summarizes the answers and compares them to the responses we got to this statement last November.
First, the business outlook of our panel has improved decidedly since last November. On average, the panel expects unit sales growth of 1.8 percent over the next 12 months. OK, not a spectacular number, but, to our eyes at least, much improved from the 1.2 percent the group was expecting when we queried them five months ago.
And what about the assessment of risks that President Lockhart indicated was also a key consideration? Here again, the sentiment in our panel appears to have shifted favorably. Last November, our panel put the likelihood that their year-ahead unit sales growth would be 1 percent or less at 50 percent. The group now puts the chances of a downshift in business activity at 37 percent. Meanwhile, the upside potential for their sales has grown. Last November, the panel put the chances of a “significant” improvement in unit sales at about 20 percent; this month, the group thinks the likelihood is 30 percent.
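Mechanically, survey answers like these are just probability distributions over growth buckets, and summary statistics like the ones above fall out of them directly. Here is a sketch with buckets and likelihoods invented for illustration (the BIE's actual response categories may differ), chosen so the downside probability lands at the panel's 37 percent.

```python
# Hypothetical single-respondent answer to the question above
buckets = [
    # (unit sales growth, assigned likelihood)
    (-0.03, 0.05),
    (-0.01, 0.10),
    ( 0.01, 0.22),
    ( 0.02, 0.33),
    ( 0.03, 0.20),
    ( 0.05, 0.10),
]
assert abs(sum(p for _, p in buckets) - 1.0) < 1e-9  # likelihoods sum to 100%

expected_growth = sum(g * p for g, p in buckets)
downside_risk = sum(p for g, p in buckets if g <= 0.01)  # growth of 1% or less

print(f"Expected unit sales growth: {expected_growth:.1%}")
print(f"Likelihood of 1% growth or less: {downside_risk:.0%}")
```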
And this improved sentiment isn’t centered in just a few industries—it’s spread across a wide swath of the economy. Firms in construction and real estate, which were, on average, projecting 12-month unit sales growth of 1.1 percent last November, now put that growth number at 1.8 percent. The average sales outlook of general-services firms has risen from 1 percent to 2.2 percent; finance and insurance companies went from 0.5 percent to 1.3 percent; and retailers/wholesalers’ unit sales projections rose from 1.5 percent to 2 percent. And manufacturers, who posted relatively strong expectations last November, reported about the same sales outlook this month as they did five months ago.
To be clear, President Lockhart’s recent comments—and the Federal Open Market Committee statement on which they are based—indicate he is looking for a substantial improvement in the outlook for the labor market, not sales. But we’re going to assume that it’s unlikely to have one without having the other. And is our panel’s unit sales forecast “substantially” improved? Well, what constitutes “substantial” is in the eye of the beholder, but if this isn’t a substantial improvement in the outlook, it’s certainly a move in that direction.
By Mike Bryan, vice president and senior economist, and
Nick Parker, economic research analyst, both in the Atlanta Fed’s research department
April 12, 2013
Higher Education: A Deflating Bubble?
There are at least two sides to every debate, but it’s becoming clearer by the day that the debate over the cost of higher education is being won by people like University of Tennessee law professor Glenn Reynolds.
A frequent writer and lecturer, and even more frequent blogger, Reynolds visited the Atlanta Fed recently to share his views with local community leaders. He reported that total student loan debt now stands at over $1 trillion—more than total credit card debt and auto loan debt combined. As these charts from the New York Fed show, the increase in total student debt over the past eight years is a result of greater numbers of students and families taking on educational debt as well as higher debt balances per student.
One can argue that this trend is not necessarily a bad thing. Education is an investment in human capital, and if those newly acquired skills are valued highly by employers, then going to college can be a positive net present value project, even with debt financing.
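That "positive net present value" claim is a plain discounting exercise. Below is a minimal sketch with invented numbers: four years of costs followed by a long stream of annual wage premiums, discounted back to the enrollment date.

```python
# Net present value of a college degree (all figures invented)
def npv(cashflows, rate):
    """Discount a list of annual cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

tuition = [-30_000] * 4        # four years of tuition, room, and board
wage_premium = [12_000] * 40   # extra annual earnings over a 40-year career

flows = tuition + wage_premium
print(f"NPV at a 5% discount rate:  ${npv(flows, 0.05):>10,.0f}")  # positive
print(f"NPV at a 12% discount rate: ${npv(flows, 0.12):>10,.0f}")  # negative
```

The sign flips as the discount rate rises, which is one way of restating the post's point: with costs climbing and the wage premium stagnant, the margin for error keeps shrinking.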
And wage data reveal that these skills are indeed valuable. As this Cleveland Fed article and chart show, the median wage for a worker with a bachelor’s degree was about 30 percent higher than that of a worker with only a high school diploma in the late 1970s and grew to more than 60 percent higher by the early 2000s. However, the data also show that over the last decade the value of a college degree measured by wages has stagnated.
And here begins the crux of Reynolds’s concern. The cost of attending college has continued to grow, and grow rapidly. Between the 2000–01 and 2010–11 academic school years, the cost of undergraduate tuition, room, and board rose 42 percent at public institutions and 31 percent at private not-for-profit institutions, after adjusting for inflation, according to the National Center for Education Statistics.
A stagnant wage premium with rising costs of attendance suggests that, at least on average, the value proposition of going to college is deteriorating. To make matters worse, Reynolds described students graduating with significant levels of student loan debt who often cannot find jobs that pay enough to cover the loan payments. Moreover, unlike credit card debt, student loans are not dischargeable in bankruptcy, meaning that there is no opportunity to get out from under the debt burden other than through full repayment. Reynolds told of individuals whose high levels of student debt are limiting their career choices, their ability to obtain mortgages, and their capacity to save for retirement. He even went on to say that student loans are affecting a much more personal market—the marriage market. After all, he says, “Who wants to marry someone with huge amounts of unpayable debt?”
Reynolds contends that “something that can’t go on forever, won’t,” and he believes that seeing friends or family members having financial problems because of student loans is leading college students to become more cost conscious. Additionally, he notes that more and more of today’s students are focusing on majors that seem likely to offer a strong salary over time. The chart on 2009 enrollment and wage premiums by major shows some support for that notion.
Large fractions of students are enrolled in majors with relatively higher wage premiums, including business and engineering, but there are also substantial enrollments in education, psychology, and the humanities. For Reynolds it is not so much about seeking out the highest-wage major; instead, his advice is, “Don’t go to a college that will require you to borrow a lot of money.”
What’s the endgame? Well, he expects that when the bubble bursts, there will be less “dumb money” to be gained, students will demand a higher return on investment, and schools will ultimately be forced to adapt. According to Reynolds, colleges have two different strategic choices: increase the value of the education for the current cost, or lower the cost of providing the current level of value. And he expects the most common response will be the latter, likely involving technology such as MOOCs (massive open online courses) and other innovations in teaching methods.
When any bubble bursts, there are some casualties. In this case, it may be that some colleges do not survive once market discipline has been unleashed. Given the statistics above, you might think that it would be the small liberal arts colleges that will suffer the most, but in this video, shot during the visit to Atlanta, Reynolds argues that these colleges may actually gain from the coming shakeout.
Reynolds indicated that there is change in the air, but it’s coming slowly. The bubble may not have burst, but he sees it deflating. He noted, “A lot of people hope it will pass. They’ll muddle through without dramatic changes. And frankly I hope they’re right. But I don’t think they are.”
By Paula Tkac, vice president and senior economist in the Atlanta Fed research department and
Michael Chriszt, vice president and community relations officer in the Atlanta Fed’s public affairs department
April 05, 2013
Labor Market Update: Muddy Waters Continue to Run Deep
Earlier this week, Atlanta Fed President Dennis Lockhart gave a speech in Birmingham, Alabama, focused on labor markets, risks to the outlook, and current monetary policy. One of the things President Lockhart noted was that the picture for the labor market remained muddy. Specifically:
"The fact is that conditions in the broad labor market are quite mixed. While some indicators of labor market health have improved a lot since the recession, others have not improved much at all or have even worsened. As I said, net job creation is picking up. Initial claims for unemployment insurance have fallen. But the official rate of unemployment remains high, many discouraged workers have left the labor force, and there are many people working part-time jobs who want to work full time."
Today's labor report did little to clarify improvement in labor market conditions, with March payrolls estimated to have grown by a much-less-than-expected 88,000 workers and the jobless rate falling one-tenth of a percentage point, to 7.6 percent, on the back of a 496,000 decline in the size of the labor force. Updated with today's data, below is the spider chart we have previously offered as one way to simultaneously track and visualize "conditions in the broad labor market."
As a reminder, we've taken the approach of dividing a set of 13 indicators of labor market conditions into four segments:
- Employer Behavior includes indicators related to the hiring activities of employers.
- Confidence includes indicators of employer and worker confidence in the labor market.
- Utilization includes measures related to available labor resources.
- Leading Indicators shows data that typically provide insight into the future direction of overall labor market activity.
The circle at the perimeter of this chart represents labor market conditions that existed just before the recession. We have dated this as late 2007. The inner circle represents the state of affairs when payroll employment reached its trough in late 2009. The oddly shaped red figure inside the perimeter depicts where each of the indicators was in March 2011 relative to the benchmarks. The purple figure depicts the state of the labor market in March 2012. Finally, the blue figure shows where the indicators were as of March 2013. All of the indicators are scaled so that outward movement represents improvement. The progression of these point-in-time snapshots provides us with a picture of how labor market conditions have evolved over the past four years.
As you can see, substantial improvement has arguably been achieved in the leading indicator series. As a group, these data points are approaching their prerecession levels. Employer hiring behavior and confidence are slowly moving outward but remain quite weak relative to their prerecession benchmarks. Finally, the labor utilization measures are very weak and, notably, have hardly improved at all over the past two years.
In the macroblog post that introduced the spider chart, we noted that there are a couple of immediate issues that arise in using this graphic to interpret market improvement, substantial or otherwise:
First, a variable such as the level of payroll employment will eventually exceed its pre-recession level, and grow consistently over time as the population grows. A variable like "hiring plans"—which is the net percentage of firms in the National Federation of Independent Business survey expecting to hire employees in the next three months—cannot grow without bound....
Second, it is not obvious that 2007:IVQ levels are necessarily the best benchmarks for all (or even any) of the variables we are monitoring [in the spider chart].
Of these two issues, the second one is potentially the more problematic. The spider chart invites you to think of the inner circle as the starting point and the outer circle as the "goal," or a representation of "normal" labor market conditions. In assessing improvement in variables such as payroll employment, using the prerecession level as a reference point makes some sense as a minimal standard. But for some, "minimal" may be the operative word. If we are interested in questions of how employment is doing relative to some concept of "full employment," it might be appropriate to assess the data relative to some measure of trend. For example, payroll employment is still about 3 million below the prerecession peak, but the labor force, despite the drop in March, is about 1 million larger than before the recession. When measured relative to the size of the labor market, progress on employment is less impressive than it would appear by just looking at growth in employment itself.
One way to address both of the caveats noted above is to scale variables that involve numbers of jobs or people—such as payroll employment or the number of unemployed—by the size of the population or the labor force. Doing so ensures that variables measured in numbers of jobs or people do not grow without bound. It also helps in assessing progress in these variables relative to a (back-of-the-envelope) measure of the trend in labor resources.
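For the concretely minded, the rescaling amounts to dividing each count-based series by the labor force before plotting. The column names and toy values below are ours, not the chart's actual inputs.

```python
import pandas as pd

# Toy monthly snapshots; values are illustrative, not the chart's inputs
df = pd.DataFrame({
    "payroll_employment": [129.7, 133.7, 135.2],  # millions of jobs
    "unemployed":         [15.2, 12.7, 11.7],     # millions of people
    "labor_force":        [153.1, 154.7, 155.0],  # millions of people
}, index=pd.to_datetime(["2009-12-01", "2012-03-01", "2013-03-01"]))

# Scale the count-based indicators so they cannot grow without bound
for col in ["payroll_employment", "unemployed"]:
    df[col + "_per_lf"] = df[col] / df["labor_force"]

print(df.filter(like="_per_lf").round(3))
```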
We take this approach in the following chart, which reproduces the spider chart but divides the variables that are counts of people by the size of the labor force. (The labor force has grown more slowly than the population since the end of the recession, but a generally similar picture emerges if the variables are instead deflated by the population.)
Not surprisingly, for most of the indicators, labor market progress is a bit more subdued relative to postrecession growth in the labor force than growth in the indicators alone would suggest. These adjustments definitely would not alter our view that the labor market picture is "quite mixed." President Lockhart's recent comments on CNBC—in which he said he would like to see more positive data before declaring that sustained improvement has taken hold—seem especially prescient in light of today's job numbers.
By John Robertson, vice president and senior economist in the Atlanta Fed's research department, and
Dave Altig, executive vice president and research director of the Atlanta Fed
March 28, 2013
The Same-Old, Same-Old Labor Market
In his March 24 Wall Street Journal piece on declining government payrolls, Sudeep Reddy offers up a key observation:
The cuts in the public-sector workforce—at the federal, state and local levels—marked the deepest retrenchment in government employment of civilians since just after World War II... down by about 740,000 jobs since the recession ended in June 2009. At the same time, the private sector has added more than 5.2 million jobs over the course of the recovery.
As the Journal article notes, the story of shrinking government employment combining with private-sector payroll expansion has been remarkably consistent for much of the recovery.
About a year ago, we provided a graphical illustration of postrecession employment patterns using payroll-employment "bubble charts." These charts measure postrecession average monthly employment changes by sector relative to the changes in the prerecession period from December 2001 through October 2007. Not a lot in that chart has changed over the intervening year (just as not a lot had changed in 2012 compared with 2011):
The stability in the employment picture across private industries, both relative to one another and relative to the precrisis pace of job gains, is just as notable as the changing fortunes of private versus public employment. In fact, the charts offer some pretty clear impressions:
- Virtually all private-sector industries have moved into positive employment-growth territory. That movement includes the construction and financial activities sectors, which have generally lagged the improvement in the rest of the economy. The only broad category still shedding jobs in the private sector has been the information industry—which includes publishing, motion picture production, telecommunications, data processing, and the like—an industry that was also shrinking in terms of employment over the decade leading up to the financial crisis.
- All of the private-sector bubbles in the charts are now close to, on, or above the 45-degree line, meaning that the average pace of monthly job creation in each sector is near, equal to, or greater than what prevailed during the last recovery.
- As the Reddy piece emphasizes, government employment has been in decline since early 2010, though the government sector as a whole retains its status as the sector with the largest employment share. (The size of the bubbles in the charts above represents the share of employment in each sector at the end of the period for which the graph is drawn.) But the chart also illustrates another key point of Reddy's article: To date, the decline in government employment has been concentrated in state and, especially, local government jobs. Until recently, job creation by the federal government, which is relatively small in the bigger scheme of things, has not deviated much from its prerecession pattern.
The last point brings us to this observation, from the WSJ article:
How the rest of the private sector responds to a shrinking of the federal government could play a bigger role in determining how the budget fight hits the workforce.
"The private sector in the U.S. is growing so much stronger than anyone had expected," said Bernard Baumohl of the Economic Outlook Group. "This organic growth is going to significantly offset the effect of the sequester in terms of economic output and employment."
It is worth pointing out that the monthly average of 17,000 state, local, and federal government jobs lost since March 2010 has been nearly matched by average monthly increases of better than 14,000 jobs in manufacturing, a sector that persistently shed jobs in the previous recovery. Even with this swap of private- for public-sector employment, the economy generated 175,000 to 185,000 net jobs per month in both 2011 and 2012. To put one perspective on that figure, at current labor force participation rates (along with some other assumptions and caveats), that pace would be sufficient to reach the Federal Open Market Committee's 6.5 percent unemployment threshold by sometime in spring 2015 (as you can verify yourself with the Atlanta Fed's Jobs Calculator). That calculation raises the stakes somewhat on the matter of how "the rest of the private sector responds."
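A rough inversion of the Jobs Calculator's logic shows how a pace of job gains maps into a date. Everything below (the CES/CPS ratio, the labor force growth rate, and the starting values) is an illustrative assumption of ours, not the calculator's published internals.

```python
# How many months of payroll gains until a target unemployment rate?
def months_to_target(target_urate, monthly_payroll_gain, labor_force,
                     cps_employment, ces_cps_ratio=0.94, lf_growth=0.00085):
    months = 0
    while cps_employment / labor_force < 1 - target_urate:
        labor_force *= 1 + lf_growth                      # labor force grows
        cps_employment += monthly_payroll_gain / ces_cps_ratio
        months += 1
    return months

# Starting from early-2013-style values at a 180,000-per-month pace:
print(months_to_target(0.065, 180_000, 155_000_000, 143_300_000))  # ~25 months
```

Roughly two years from early 2013 puts the threshold crossing in the spring of 2015, in line with the figure in the text.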
By Dave Altig, executive vice president and research director of the Atlanta Fed
March 19, 2013
Being Ahead of the Curve: Not Always a Good Thing
Our friends at the New York Fed have a nifty interactive graphic that compares the unemployment rate, labor force participation rate, and employment-to-population ratio over the last five business cycles. You can even break these indicators down by gender, by age, or by a particular business cycle. (For a deeper dive, check out this post at Liberty Street Economics by Jonathan McCarthy and Simon Potter.) And though it’s not exactly late-breaking news, no matter which of the three indicators you look at, you can’t help but conclude that the most recent recession is an outlier.
The Beveridge curve, a fourth and particularly useful graphical representation, depicts the relationship between job openings and the unemployment rate: in theory, it shows how one might expect the vacancy rate to change in a steady-state economy, given an unemployment rate. (The Atlanta Fed’s magazine, EconSouth, discussed the Beveridge curve.) It, too, has been standing out over the course of the most recent recovery, so much so that we think it warrants at least a second glance. There are a number of ways to estimate a Beveridge curve (see, for example, methods described by Gadi Barlevy of the Chicago Fed here and by the Richmond Fed’s Thomas Lubik here).
We use the method described by Barnichon et al. (2012) to estimate the solid curve used in the first chart below. The square plots represent actual vacancy rate (y-axis) and unemployment rate (x-axis) combinations by month from December 2000, when the U.S. Bureau of Labor Statistics (BLS) Job Openings and Labor Turnover Survey (JOLTS) data series begins, to January 2013, the most recent month of data available for both series.
Blue squares represent data from December 2000 to December 2009, when the “errors” between the actual plots and the estimated curve were below 2 percentage points, and red squares represent data since January 2010, when the plots suddenly jump above the predicted Beveridge curve to the tune of 2 percentage points or greater (see the chart below).
In June 2012, Regis Barnichon and his coauthors concluded that the unemployment rate’s lackluster performance so far in the recovery was attributable to a shortfall in hires per vacancy. Since then, the vacancy rate has climbed its way back to its June 2008 level of 2.7 percent. However, the unemployment rate has clearly not returned to either its June 2008 level (5.6 percent) or where the Beveridge curve says it should be given this vacancy rate, which one might predict to be 5.5 percent using the methodology of Barnichon et al.
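Barnichon et al.'s methodology is more involved than we can reproduce here, but a generic first pass at "where the curve says unemployment should be" is a simple log-log fit, inverted at the observed vacancy rate. The data points below are invented; the inversion step is the point.

```python
import numpy as np

# Invented (u, v) pairs standing in for the monthly JOLTS observations
u = np.array([4.7, 5.8, 9.5, 9.0, 8.2, 7.7])  # unemployment rate, percent
v = np.array([3.2, 2.6, 1.8, 2.2, 2.5, 2.7])  # vacancy rate, percent

# Fit log(v) = intercept + slope * log(u), the usual convex Beveridge shape
slope, intercept = np.polyfit(np.log(u), np.log(v), 1)

def implied_unemployment(vacancy_rate):
    """Invert the fitted curve: the u the curve predicts at this v."""
    return np.exp((np.log(vacancy_rate) - intercept) / slope)

print(f"u implied by v = 2.7%: {implied_unemployment(2.7):.1f}%")
```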
This “ahead of the curve” phenomenon has not gone unnoticed and has prompted some explanations. In a March 6, 2013, article in The New York Times (which also has some cool charts), Catherine Rampell posits that available positions are staying unfilled longer, while interview processes have become lengthier.
The next day, Rampell went into more detail about why we’re going “off the curve” in a New York Times Economix post. She cites skills mismatch and a skills atrophy effect of the long-term unemployed affecting the ability of employers to fill positions (which she explains aren’t full explanations; if they were, we would expect to see wages for highly coveted positions rise significantly).
Rampell goes back to the explanation many of us continue to hear from business contacts: employers are unwilling to fill vacant positions because of economic and fiscal policy uncertainty. She quotes Stephen Davis of Chicago’s Booth School: “They’re taking longer to fill vacancies because they just feel less need to fill jobs now,” Davis said. “They recognize that in a slack labor market, there is an abundance of viable candidates. If something happens, and if they need to hire quickly, they know they can do that. That’s harder in a tight labor market.”
So maybe as labor markets “tighten up,” or perhaps if the speed at which they tighten quickens, we’ll get back on the Beveridge curve. Only time, and several BLS releases, will tell.
By Patrick Higgins, an economist at the Atlanta Fed, and
Mark Carter, a senior economic analyst at the Atlanta Fed
March 11, 2013
You Say You’re a Homeowner and Not a Renter? Think Again.
As we’ve said before, we’re suckers for cool charts. The latest that caught our eye is the following one, originally created by the U.S. Bureau of Labor Statistics (BLS). It highlights the relative importance assigned to the various components of the consumer price index (CPI) and shows where increases in the index have come from over the past 12 months.
It probably won’t surprise anyone that the drop in gasoline prices (found in the transportation component) exerted downward pressure on the CPI last year, while the cost of medical care pushed the price index higher. What might surprise you is the size of that big, blue square labeled “housing.” Housing accounts for a little more than 40 percent of the CPI market basket and, given its weight, any change in this component significantly affects the overall index.
This raises the question: In light of the recent strength seen in the housing market—and notably the nearly 10 percent rise in home prices over the past 12 months—are housing costs likely to exert more upward pressure on the CPI?
Before we dive into this question, it’s important to understand that home prices do not directly enter into the computation of the CPI (or the personal consumption expenditures [PCE] price index, for that matter). This is because a home is an asset, and an increase in its value does not impose a “cost” on the homeowner. But there is a cost that homeowners face in addition to home maintenance and utilities, and that’s the implied rent they incur by living in their home rather than renting it out. In effect, every homeowner is his or her own tenant, and the rent they forgo each month is called the “owners’ equivalent rent” (or OER) in the CPI. OER represents about 24 percent of the CPI (and about 11 percent of the PCE price index). The CPI captures this OER cost (sensibly, in our view) by measuring the cost of home rentals (details here). So whether the robust rise in home prices will influence the behavior of the CPI this year depends on whether rising home prices influence home rents.
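The weights matter because a component's contribution to headline inflation is, to a first approximation, its weight times its own inflation rate. A quick illustration, using the approximate shares above and an invented 2 percent annual rise in rents:

```python
# Back-of-the-envelope contribution of OER to headline inflation
oer_weight_cpi = 0.24   # approximate OER share of the CPI (from the text)
oer_weight_pce = 0.11   # approximate OER share of the PCE price index
oer_inflation = 0.02    # hypothetical 2 percent annual rise in implied rents

print(f"OER contribution to CPI inflation: {oer_weight_cpi * oer_inflation:.2%}")
print(f"OER contribution to PCE inflation: {oer_weight_pce * oer_inflation:.2%}")
```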
So what is likely to happen to OER given the continued increase in home prices? Well, higher home prices, in time, ought to cause home rents to rise, putting upward pressure on the CPI. Homes are assets to landlords, after all, and landlords (like all investors) require an adequate return on their investments. Let’s call this the “asset market influence” of home prices on home rents. But the rents that landlords charge also compete with homeownership. If renters decide to become homeowners, the rental market loses customers, which should push home rents in the opposite direction of home prices for a time. Let’s call this the “substitution influence” on rent prices.
Consider the following charts, which show three-month changes in home prices and home rents (measured by the CPI’s OER measure). It’s a little hard to see a clear correlation between these two measures.
So we’ve separated these data into their trend and cycle components (using Hodrick-Prescott procedures, if you must know) shown in the following two charts. Now, if one takes the trend view, there is a clear positive relationship between home prices and home rents. This is consistent with the asset market influence described above. But also consider the detrended perspective. Here, home prices and home rents are pretty clearly negatively correlated. This, to us, looks like the substitution influence described above.
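For the curious, the trend/cycle split uses the standard Hodrick-Prescott filter, which is readily available in statsmodels. The series below is a placeholder with invented values, not our actual home price data.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder quarterly price index (invented values)
home_prices = pd.Series(
    [180, 184, 190, 198, 205, 200, 185, 172, 168, 170, 175, 182],
    index=pd.period_range("2002Q1", periods=12, freq="Q"),
)

# lamb=1600 is the conventional smoothing parameter for quarterly data
cycle, trend = sm.tsa.filters.hpfilter(home_prices, lamb=1600)

print(pd.DataFrame({"trend": trend.round(1), "cycle": cycle.round(1)}))
```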
So let’s get back to the question at hand. What do rising home prices mean for OER and, ultimately, the behavior of the CPI? Well, it’s rather hard to say because the link between home prices and OER isn’t particularly strong.
Not definitive enough for you? OK, how about this: We think the recent rise in home prices will more likely lean against the rise in OER for the near term as the growing demand for home ownership provides some competition to the rental market. But, in time, these influences will give way to the asset market fundamentals, and rents are likely to accelerate as returns on real estate investments are reaffirmed.
By Mike Bryan, vice president and senior economist, and
Nick Parker, economic research analyst, both in the Atlanta Fed’s research department
March 08, 2013
Will the Next Exit from Monetary Stimulus Really Be Different from the Last?
Suppose you run a manufacturing business—let's say, for example, widgets. Your customers are loyal and steady, but you are never completely certain when they are going to show up asking you to satisfy their widget desires.
Given this uncertainty, you consider two different strategies to meet the needs of your customers. One option is to produce a large quantity of widgets at once, store the product in your warehouse, and when a customer calls, pull the widgets out of inventory as required.
A second option is to simply wait until buyers arrive at your door and produce widgets on demand, which you can do instantaneously and in as large a quantity as you like.
Thinking only about whether you can meet customer demand when it presents itself, these two options are basically identical. In the first case you have a large inventory to support your sales. In the second case you have a large—in fact, infinitely large—"shadow" inventory that you can bring into existence in lockstep with demand.
I invite you to think about this example as you contemplate this familiar graph of the Federal Reserve's balance sheet:
I gather that a good measure of concern about the size of the Fed's (still growing) balance sheet comes from the notion that there is more inherent inflation risk with bank reserves that exceed $1.5 trillion than there would be with reserves somewhere in the neighborhood of $10 billion (which would be the ballpark value for the pre-crisis level of reserves).
I understand this concern, but I don't believe that it is entirely warranted. My argument is as follows: The policy strategy for tightening policy (or exiting stimulus) when the banking system is flush with reserves is equivalent to the strategy when the banking system has low (or even zero) reserves in the same way that the two strategies for meeting customer demand that I offered at the outset of this post are equivalent.
Here's why. Suppose, just for example, that bank reserves are literally zero and the Federal Open Market Committee (FOMC) has set a federal funds rate target of, say, 3 percent. Despite the fact that bank reserves are zero there is a real sense in which the potential size of the balance sheet—the shadow balance sheet, if you will—is very large.
The reason is that when the FOMC sets a target for the federal funds rate, it is sending very specific instructions to the folks from the Open Market Desk at the New York Fed, who run monetary policy operations on behalf of the FOMC. Those instructions are really pretty simple: If you have to inject more bank reserves (and hence expand the size of the Fed's balance sheet) to maintain the FOMC's funds rate target, do it.
To make sense of that statement, it is helpful to remember that the federal funds rate is an overnight interest rate that is determined by the supply and demand for bank reserves. Simplifying just a bit, the demand for reserves comes from the banking system, and the supply comes from the Fed. As in any supply and demand story, if demand goes up, so does the "price"—in this case, the federal funds rate.
In our hypothetical example, the Open Market Desk has been instructed not to let the federal funds rate deviate from 3 percent—at least not for very long. With such instructions, there is really only one thing to do in the case that demand from the banking system increases—create more reserves.
To put it in the terms of the business example I started out with, in setting a funds rate target the FOMC is giving the Open Market Desk the following marching orders: If customers show up, step up the production and meet the demand. The Fed's balance sheet in this case will automatically expand to meet bank reserve demand, just as the businessperson's inventory would expand to support the demand for widgets. As with the businessperson in my example, there is little difference between holding a large tangible inventory and standing ready to supply on demand from a shadow inventory.
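If it helps to see those marching orders as code, here is a stylized rendering: the Desk supplies whatever quantity of reserves is demanded at the target rate, so the balance sheet expands one for one with demand. The demand curve and all numbers are made up.

```python
# Stylized reserve demand: downward sloping in the funds rate, in $ billions
def reserve_demand(rate, shock=0.0):
    return max(0.0, 50.0 - 10.0 * rate + shock)

TARGET_RATE = 3.0  # hypothetical federal funds rate target, percent

for shock in (0.0, 5.0, 12.0):
    supplied = reserve_demand(TARGET_RATE, shock)  # the Desk meets demand
    print(f"demand shock {shock:+5.1f} -> reserves supplied: ${supplied:.1f}B")
```

The rate never moves; only the quantity does. That is the sense in which the "shadow" balance sheet was always large.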
Though the analogy is not perfect—in the case of the Fed's balance sheet, for example, it is the banks and not the business (i.e., the Fed) that hold the inventory—I think the story provides an intuitive way to process the following comments (courtesy of Bloomberg) from Fed Chairman Ben Bernanke, from last week's congressional testimony:
"Raising interest rate on reserves" when the balance sheet is large is the functional equivalent to raising the federal funds rate when the actual balance sheet is not so large, but the potential or shadow balance sheet is. In both cases, the strategy is to induce banks to resist deploying available reserves to expand deposit liabilities and credit. The only difference is that, in the former case, the available reserves are explicit, and in the latter case they are implicit.
The Monetary Policy Report that accompanied the Chairman's testimony contained a fairly thorough summary of costs that might be associated with continued monetary stimulus. Some of these in fact pertain to the size of the Fed's balance sheet. But, as the Chairman notes in the video clip above, when it comes to the mechanics of exiting from policy stimulus, the real challenge is the familiar one of knowing when it is time to alter course.
By Dave Altig, executive vice president and research director of the Atlanta Fed
March 01, 2013
What the Dual Mandate Looks Like
Sometimes simple, direct points are the most powerful. For me, the simplest and most direct points in Chairman Bernanke’s Senate testimony this week were contained in the following one minute and 49 seconds of video (courtesy of Bloomberg):
At about the 1:26 mark, the Chairman says:
So, our accommodative monetary policy has not really traded off one of [the FOMC’s mandated goals] against the other, and it has supported both real growth and employment and kept inflation close to our target.
To that point, here is a straightforward picture:
I concede that past results are no guarantee of future performance. And in his testimony, the Chairman was very clear that prudence dictates vigilance with respect to potential unintended consequences:
Highly accommodative monetary policy also has several potential costs and risks, which the committee is monitoring closely. For example, if further expansion of the Federal Reserve's balance sheet were to undermine public confidence in our ability to exit smoothly from our accommodative policies at the appropriate time, inflation expectations could rise, putting the FOMC's price stability objective at risk...
Another potential cost that the committee takes very seriously is the possibility that very low interest rates, if maintained for a considerable time, could impair financial stability. For example, portfolio managers dissatisfied with low returns may reach for yield by taking on more credit risk, duration risk, or leverage.
Concerns about such developments are fair and, as Mr. Bernanke makes clear, shared by the FOMC. Furthermore, the language around the Fed’s ultimate decision to end or alter the pace of its current open-ended asset-purchase program is explicitly cast in terms of an ongoing cost-benefit analysis. But anyone who wants to convince me that monetary policy actions have been contrary to our dual mandate is going to have to explain to me why that conclusion isn’t contradicted by the chart above.
By Dave Altig, executive vice president and research director of the Atlanta Fed