The Atlanta Fed's macroblog provides commentary on economic topics including monetary policy, macroeconomic developments, financial issues and Southeast regional trends.

Authors for macroblog are Dave Altig and other Atlanta Fed economists.


May 30, 2013

At Least One Reason Why People Shouldn't Hate QE

You might not expect me to endorse an article titled "The 7 Reasons Why People Hate QE." I won't disappoint that expectation, but I will say that I do endorse, and appreciate, the civil spirit in which the author of the piece, Eric Parnell, offers his criticism. We here at macroblog, like our colleagues in the Federal Reserve System more generally, pride ourselves on striving for unfailing civility, and it is a pleasure to engage skeptics who share (and exhibit) the same disposition. What the world needs now is...well, maybe I'm getting carried away.

Let me instead appropriate some of Mr. Parnell's language. It is worthwhile to explore some of the reasons that people do not like QE from the perspective of someone who does not share that sentiment. In particular, let me focus on the first of seven reasons offered in the Parnell post:

First, a primary objection I have with QE is that it results in a government policy making and regulatory institution in the U.S. Federal Reserve directly determining how private sector capital is being allocated... in recent years, the Fed has dramatically expanded its policy scope into areas that are normally the territory of fiscal policy. This has included specifically targeting selected areas of the economy such as the U.S. housing market including the aggressive purchase of mortgage backed securities (MBS) since the outbreak of the financial crisis.

This statement seems to presume that monetary policy does not normally have differential impacts across distinct sectors of the economy. I think this presumption is erroneous.

The Federal Open Market Committee's (FOMC) asset purchase programs have long been seen as operating through traditional portfolio-balance channels. As explained by Fed Chairman Ben Bernanke in an August 2010 speech that set up the "QE2" program:

The channels through which the Fed's purchases affect longer-term interest rates and financial conditions more generally have been subject to debate. I see the evidence as most favorable to the view that such purchases work primarily through the so-called portfolio balance channel, which holds that once short-term interest rates have reached zero, the Federal Reserve's purchases of longer-term securities affect financial conditions by changing the quantity and mix of financial assets held by the public. Specifically, the Fed's strategy relies on the presumption that different financial assets are not perfect substitutes in investors' portfolios, so that changes in the net supply of an asset available to investors affect its yield and those of broadly similar assets. Thus, our purchases of Treasury, agency debt, and agency MBS likely both reduced the yields on those securities and also pushed investors into holding other assets with similar characteristics, such as credit risk and duration. For example, some investors who sold MBS to the Fed may have replaced them in their portfolios with longer-term, high-quality corporate bonds, depressing the yields on those assets as well.

I think this is a pretty standard way of thinking about the way monetary policy works. But you need not buy the portfolio-balance story in full to conclude that even traditional monetary policy operates on "selected areas of the economy such as the U.S. housing market." All you need to concede is that policy works by altering the path of real interest rates and that not all sectors share the same sensitivity to changes in interest rates.

Parnell goes on to discuss other problems with QE: stress put on individuals living on fixed incomes, the promotion of (presumably excessive) risk-taking, and the general distortion of market forces. All topics worthy of discussion, and if you read the minutes of almost any recent FOMC meeting you will note that they are indeed key considerations in ongoing deliberations.

These issues, however, are not about QE per se, but about monetary stimulus generally and the FOMC's interest rate policies specifically. As the conversation turns to if, when, and how Fed policymakers will adjust the current asset purchase program, it will be important to clarify the distinction between QE and the broader stance of policy.

By Dave Altig, executive vice president and research director of the Atlanta Fed


May 30, 2013 in Federal Reserve and Monetary Policy, Monetary Policy




Sorry, but I think that there is an important difference between conventional monetary policy and current QE. When the Fed buys treasuries only, it is essentially dealing in state assets on both sides of its balance sheet, so any differences in the effects of monetary policy on different sectors of the economy are accidental. When the Fed buys mortgage securities, however, that deliberately, through the asset side of the Fed's balance sheet, favours housing activity. And I have little doubt that, if the US economy did not strengthen as fast as required, the Fed would end up, like the Bank of Japan, buying stocks. In my view (as a former central banker), the Fed has done too little to resist being drawn, by ill-informed politicians, into unsustainable stimulation of popular real economic activity.

Posted by: RebelEconomist | June 06, 2013 at 05:43 PM

I appreciate and agree with your narrow response to the column, "The 7 reasons why people hate QE."  However, it appears that people still have several more (unanswered) reasons for hating QE.  

It is an old debating tactic to take issue with the 1-2 weakest points of an opponent's otherwise strong argument to create the impression that the opponent is altogether wrong.  But debating tactics don't do anything to 'fix' monetary policy or the economy, so ultimately that is not a wise approach for the Fed (its officers) to take.  Unlike high school debates or courtroom arguments designed to persuade an uninformed jury, the 'judges' are monetary economists and money managers who recognize the difference between debating tactics and a response that goes to the core of the issue.

For example, your response to the article's criticism #1 was "even traditional monetary policy operates on "selected areas of the economy such as the U.S. housing market."  That is absolutely true.  But that neither recognizes nor explains why half of the Fed's current open market operations are conducted in mortgage backed securities and related debt instruments.  Aren't those particular bond purchases PURPOSELY geared toward the housing market to the exclusion of other sectors of the economy?

Of course.  That's why your (correct as far as it goes) comment about monetary policy doesn't really address the first complaint of the columnist.  You leave the impression that QE is little different than standard open market operations, though of course differing in magnitude.

I hadn't read the "7 Reasons" column before you cited it in your piece, but after having read it, I was most persuaded by reason #2:

     #2 - Helping Some Market Participants At The Expense Of Others.
     By effectively locking interest rates at 0% since December 2008,
     the Federal Reserve has elected to provide direct and generous support
     to financial institutions and risk takers, some of which directly
     contributed to the cause of the crisis.

I believe that's an accurate description of how things have worked out, though I don't believe it portrays the Fed's motives or reasoning.  Nevertheless, it is incumbent upon Fed officials to consider this criticism to avoid future crises, economic downturns and taxpayer bailouts.

I've recently been reading (and learning from) Nicholas Dunbar's book "The Devil's Derivatives."  For at least the past 50 years, a pattern has emerged whereby well-compensated (highly motivated) bankers develop/discover ways of avoiding and evading the Fed's regulations, followed by the Fed's efforts to regulate the new activities, followed by further work-arounds by bankers.  This is a natural process, but the Fed is playing with both hands behind its back because a) top Fed officials are typically free-market economists with a philosophical appreciation for innovation and a general skepticism of policies which have unintended negative consequences, and b) bankers spend vast amounts of money to hire PhD economists and other smart, experienced people --- then their teams work night and day for months to develop innovative products that Fed officials don't understand and cannot effectively regulate.

This is part of the Regular Business Plans of big banks, not something that has inadvertently happened a time or two.  The not-infrequent outcome of these innovations is to create bubbles which eventually burst, placing major economic sectors at risk.  The next act in the play is a Fed rescue/bailout to "save the economy" --- but then Fed officials explain (with sad faces and shrugging shoulders) that the bad actors had to be saved to avoid another Great Depression.

The bailouts, too, are part of the long-term business plans of the big banks.  The problem is that the Fed doesn't get the joke, and continues playing the same role over and over.  As a historical fact, the Fed DOES provide aid and comfort to the major financial institutions at the expense of taxpayers, households and small business.  (How many times has the Fed saved Citi over the past 50 years?) 

As I said earlier, I do not for a minute believe this is the Fed's intention: it occurs because the innovative bankers know how to draw Fed officials into a game they are ill-equipped to play.  Fed officials aren't in the hip pockets of the big financial institutions because they're corrupt, but because they're ignorant: uninformed and inexperienced.

Now, the Fed can continue down this path ... or its officials could reflect on the pattern that has emerged over the decades and ask whether their appreciation for innovation is well-founded, whether the Fed has been an effective regulator when it has always been behind the curve of innovation, and where all of this is leading: too much leverage, moral hazard, huge risks to America's future economic prosperity, etc.

The strongest argument that bankers make for justifying their innovative activities is that "if we aren't allowed to do it, financial markets will move offshore ... but then the same practices will occur anyway."

That is a nonsense argument.  First, if America reins in the profligate bankers, so will most of our closest trading partners.  Second, even if the big US banks became the US branches of foreign banks, the US economy would still receive financing and Americans would still have jobs working in those branches.  Third, future bailouts would fall to a far greater extent on the backs of foreign taxpayers rather than US taxpayers.  Fourth, if the Fed calls the bluff of big banks, the ability of bankers to extract future handouts would be far less (less moral hazard).

Fed officials have done a pretty good job over the past century perfecting their monetary policy tools. At the same time, however, they have been so focused on shorter-term issues that they have failed to appreciate the longer-term game the Fed has been drawn into, where it has become the enabler and protector of institutions whose prosperity is not essential to the functioning of a modern economy.  The necessity of an efficient banking system does not prove (or even imply) that specific banks must survive.  The only too-big-to-fail institution is the Fed.

The Fed's original job was to protect the economy by PREVENTING financial crises and panics.  We now know that despite the best of intentions the Fed has failed in that responsibility.  It has unwittingly become a tool of the banking sector, facilitating astronomically high compensation and the accumulation of great wealth --- for bankers --- just as #2 of the "7 Reasons" essay claims.

Either the Fed can stay on the merry-go-round or get off of it ... but it can't stay on the merry-go-round and expect to arrive in a new destination one or two cycles hence.  Anyone who has been paying attention knows that. 

The failure of the economy to recover despite the Fed adding $2 trillion in reserves to the banking system means that people do not trust the Fed's current policies to protect their jobs and wealth in the future --- so rather than taking risk and contributing to the economy, they're paying off debt and building up reserves for the next collapse.

Posted by: Thomas Wyrick | June 10, 2013 at 01:57 PM


May 23, 2013

A Subtle View of Labor Market Improvement

In a speech delivered Tuesday to the Japan Society in New York City, Federal Reserve Bank of New York President William Dudley offered his view on how he might assess the appropriate pace of the Federal Open Market Committee's (FOMC) current $85 billion per month asset purchase program:

Let me give a few examples of how my own thinking may evolve. In terms of our asset purchase program, I believe we should be prepared to adjust the total amount of purchases to that needed to deliver a substantial improvement in the labor market outlook in the context of price stability. In doing this, we might adjust the pace of purchases up or down as the labor market and inflation outlook changes in a material way. For me, the base case forecast is not the sole consideration—how confident we are about that outcome is also important.

Because the outlook is uncertain, I cannot be sure which way—up or down—the next change will be. But at some point, I expect to see sufficient evidence to make me more confident about the prospect for substantial improvement in the labor market outlook. At that time, in my view, it will be appropriate to reduce the pace at which we are adding accommodation through asset purchases. Over the coming months, how well the economy fights its way through the significant fiscal drag currently in force will be an important aspect of this judgment.

My own boss, Atlanta Fed President Dennis Lockhart, expressed a similar view in a speech to the Birmingham, Alabama, Kiwanis Club last month:

The key word in the phrase "substantial improvement in the outlook for the labor market" is outlook. For my part, a critical consideration in judging how much longer asset purchases should continue will be confidence in the positive outlook. Confidence that is solidly grounded in improving economic data, accumulated over a sufficient span of time, will help me conclude that the work of the large-scale asset purchase program, as a temporary supplement to conventional interest-rate policy, is complete.

And there is this, from the minutes of the latest meeting of the FOMC (emphasis added):

Participants also touched on the conditions under which it might be appropriate to change the pace of asset purchases. Most observed that the outlook for the labor market had shown progress since the program was started in September, but many of these participants indicated that continued progress, more confidence in the outlook, or diminished downside risks would be required before slowing the pace of purchases would become appropriate.

Neither President Dudley nor President Lockhart (nor the minutes) indicates where we are on the confidence scale at the moment. But at least outside the Fed, there is some evidence that confidence in the labor market forecast is increasing. The following chart shows year-over-year averages of the interquartile range of four-quarter-ahead unemployment rate forecasts from the Federal Reserve Bank of Philadelphia's Survey of Professional Forecasters:


The interquartile range is essentially the difference between the most optimistic one-fourth of the forecasts in the Philadelphia Fed's panel and the most pessimistic one-fourth of forecasts. It is thus a measure of dispersion, or forecast disagreement.
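A dispersion measure of this kind is simple to compute. Here is a minimal sketch, using made-up forecast values (the numbers below are hypothetical, not the actual survey data):

```python
# Illustrative only: compute the interquartile range (IQR) of a panel of
# four-quarter-ahead unemployment-rate forecasts, as a measure of
# forecast disagreement. The forecast values are invented for demonstration.

def interquartile_range(forecasts):
    """Difference between the 75th and 25th percentiles (linear interpolation)."""
    xs = sorted(forecasts)

    def percentile(p):
        # interpolate linearly between the closest ranks
        k = (len(xs) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    return percentile(0.75) - percentile(0.25)

panel = [6.9, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.8]  # hypothetical forecasts (%)
print(round(interquartile_range(panel), 2))
```

A variant of this measure (like the Philadelphia Fed's D3 measure mentioned in the comments) takes the log difference of the two percentiles rather than the level difference, yielding an approximate percent gap between the optimistic and pessimistic quartiles.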

The trend in this measure of forecast disagreement is clearly—very clearly—downward. That doesn't exactly say that each individual forecaster is becoming more confident about his or her individual outlook (though this type of dispersion measure is often used as a proxy for overall uncertainty). Even less does it mean that forecast uncertainty has fallen to the level President Dudley, President Lockhart, or any other Fed official would deem sufficient to alter policy in any way. The FOMC minutes, for example, include this...

A number of participants expressed willingness to adjust the flow of purchases downward as early as the June meeting if the economic information received by that time showed evidence of sufficiently strong and sustained growth; however, views differed about what evidence would be necessary and the likelihood of that outcome.

... and following his congressional testimony Wednesday, Chairman Bernanke engaged in a Q&A, and ABC News summed up the state of the policy discussion this way:

When asked by Kevin Brady, the Panel's chairman, whether the Federal Reserve could start winding it back before September's Labour Day holiday, Mr Bernanke responded, "I don't know. It's going to depend on the data."

It really need not be emphasized that I don't know either. But the narrowing of opinions of where things are headed must signify some sort of progress.

By Dave Altig, executive vice president and research director of the Atlanta Fed


May 23, 2013 in Employment, Federal Reserve and Monetary Policy, Labor Markets, Monetary Policy




The units in your chart are not displayed correctly. The "10" should be "1.0"

Posted by: glenn | May 24, 2013 at 10:52 AM

Thanks Glenn. Actually this is a case of not labeling/explaining things precisely. I used the Philadelphia Fed's D3 measure of dispersion, which is the log difference of the levels. So the units are the percent differences between the 75th percentile and the 25th percentile.

Posted by: Dave | May 30, 2013 at 03:19 PM


Posted by: Aldex | July 01, 2014 at 07:23 AM


May 16, 2013

Labor Costs, Inflation Expectations, and the Affordable Care Act: What Businesses Are Telling Us

The Atlanta Fed’s May survey of businesses showed little overall concern about near-term inflation. Year-ahead unit cost expectations averaged 2 percent, down a tenth from April and on par with business inflation expectations at this time last year.

OK, we’re going to guess this observation doesn’t exactly knock you off your chair. But here’s something we’ve been keeping an eye on that you might find interesting. When we ask firms about what role, if any, labor costs are likely to play in their prices over the next 12 months, an increasing proportion have been telling us they see a potential for upward price pressure coming from labor costs (see the chart).

To investigate further, we posed a special question to our Business Inflation Expectations (BIE) panel regarding their expectations for compensation growth over the next 12 months: “Projecting ahead over the next 12 months, by roughly what percentage do you expect your firm’s average compensation per worker (including benefits) to change?”

We got a pretty large range of responses, but on average, firms told us they expect average compensation growth—including benefits—of 2.8 percent. That’s about a percentage point higher than the average over the past year (as estimated by either the index of compensation per hour or the employment cost index). But a 2.8 percent rise is also about a percentage point below average compensation growth before the recession. We’re inclined to read the survey as a confirmation that labor markets are improving and expected to improve further over the coming year. But we’re not inclined to interpret the survey data as an indication that the labor market is nearing full employment.

We’ve also been hearing more lately about the potential for the Affordable Care Act (ACA) to have a significant influence on labor costs and, presumably, to provide some upward price pressure. Indeed, several of our panelists commented on their concern about the influence of the ACA when they completed their May BIE survey. So can we tie any of this expected compensation growth to the ACA, a significant share of which is scheduled to go into effect eight months from now?

Because a disproportionate impact from the ACA will fall on firms that employ 50 or more workers, we separated our panel into firms with 50 or more employees, and those employing fewer than 50 workers. What we see is that average expected compensation growth is the same for the bigger employers and smaller employers. Moreover, the big firms in our sample report the same inflation expectation as the smaller firms.

But the data reveal that the bigger firms are a little more uncertain about their unit cost projections for the year ahead. OK, it’s not a big difference, but it is statistically significant. So while their cost and compensation expectations are not yet being affected by the prospect of the ACA, the act might be influencing their uncertainty about those potential costs.
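The post does not describe the statistical test behind that claim. As a rough illustration of how one might compare the two groups, here is a sketch using Welch's t-statistic on entirely synthetic data (the firm names, numbers, and the choice of test are all assumptions, not the BIE survey's actual methodology):

```python
# Illustrative sketch (synthetic data): do larger firms report more
# uncertainty about year-ahead unit costs than smaller firms? "Uncertainty"
# here stands in for each firm's reported forecast dispersion; all numbers
# below are invented for demonstration.
from math import sqrt
from statistics import mean, variance

big = [0.9, 1.1, 1.0, 1.2, 0.8, 1.3, 1.1, 1.0]    # firms with 50+ employees
small = [0.8, 0.9, 1.0, 0.7, 0.9, 0.8, 1.0, 0.9]  # firms with <50 employees

def welch_t(a, b):
    """Welch's t-statistic for a difference in means with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print(round(welch_t(big, small), 2))
```

A t-statistic well above 2 would point to a statistically significant difference in means, consistent in spirit with the post's finding, though the actual analysis may well use a different test.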

By Mike Bryan, vice president and senior economist,

Brent Meyer, economist, and

Nicholas Parker, senior economic research analyst, all in the Atlanta Fed’s research department

May 16, 2013 in Business Inflation Expectations, Economics, Health Care, Inflation Expectations, Labor Markets, Pricing




Maybe we're finally reaching the point where firms can no longer expropriate productivity gains. If you look at the total hourly compensation for non-supervisory workers vs. productivity, the last 40 years have more or less seen the gains made during the Great Compression utterly obliterated. Now that we're back to Gilded-Age levels of income distribution, it may be that we've reached an equilibrium.

Posted by: Valerie Keefe | May 19, 2013 at 12:22 PM


May 13, 2013

Labor Force Participation and the Unemployment Threshold

On Friday, my colleague Julie Hotchkiss shared in this space the results of her new research (with Fernando Rios-Avila, a Georgia State University colleague) on the recent and prospective behavior of the labor force participation rate (LFPR). The punch line, from my point of view, is this:

Our results suggest that relative to the average LFPR over the years 2010–12, the average LFPR over the years 2015–17 will rise by about a third of a percentage point—again, if the labor market returns to prerecession conditions. (Italics original)

As Julie notes:

[T]he Federal Open Market Committee has substantially raised the stakes on disentangling...movements in labor force participation...[by] introducing into its policy deliberations concepts like unemployment thresholds and qualitative assessments on "substantial" labor market improvement.

Though the meaning of "substantial labor market improvement"—a condition for adjusting the FOMC's current large-scale asset purchase program—is somewhat ambiguous, the unemployment threshold for considering moving the federal funds rate off the near-zero mark is less so. As the Committee indicated in its May press release:

[T]he Committee decided to keep the target range for the federal funds rate at 0 to ¼ percent and currently anticipates that this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent, inflation between one and two years ahead is projected to be no more than a half percentage point above the Committee's 2 percent longer-run goal, and longer-term inflation expectations continue to be well anchored.

It is widely understood (a sign of the times, no doubt) that changes in the unemployment rate are not entirely independent of what is happening with the participation rate. We have discussed this issue before here in macroblog. But in light of the new research coming from our own shop (and other research cited in Julie's post), it seems like a good time for a refresher.

First, a step back. Multiple upward revisions to the employment situation since the December jobs report—you can follow the trail courtesy of Calculated Risk here, here, here, here, and here—have led to a more robust picture of the labor market than I, certainly, had been expecting. Here is what the record looks like for most of the recovery:

With an assist from the Atlanta Fed Jobs Calculator, we can provide further perspective on these numbers. In particular, under the assumption that the labor force participation rate will remain at its current level of 63.3 percent (among other things held constant), we can map the recent job growth numbers to a rough date when the unemployment rate will reach 6½ percent.

That looks interesting, but then taking the Hotchkiss and Rios-Avila research onboard means the assumption of a constant labor force participation rate may not be justified. So, turning again to the Jobs Calculator, the following table answers this question: If we continue on the 208,000-per-month pace of job creation of the last six months, and the labor force participation rate is X, what would the unemployment rate be by June of next year? For reference, the first row of the table replicates the earlier result under the assumption that the participation rate will maintain its current level; the second row takes into account the Hotchkiss and Rios-Avila research; and the third assumes an even larger bounce back in participation:
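The exercise just described can be approximated with some back-of-the-envelope arithmetic: the unemployment rate is one minus employment over the labor force, and the labor force is the participation rate times the adult population. The sketch below is not the Jobs Calculator's actual internals; the starting values (roughly 7.5 percent unemployment, a 63.3 percent LFPR, a civilian population of about 245 million growing about 0.07 percent per month) and the one-for-one mapping of payroll gains into household employment are simplifying assumptions:

```python
# Back-of-the-envelope version of the Jobs Calculator exercise: project the
# unemployment rate after a run of steady job growth, under an assumed LFPR.
# All starting values are rough mid-2013 approximations, not official inputs.

def projected_unemployment(months, jobs_per_month_m, lfpr,
                           pop0_m=245.0, pop_growth=0.0007,
                           u0=0.075, lfpr0=0.633):
    """Unemployment rate after `months` of steady job gains (in millions/month)."""
    lf0 = lfpr0 * pop0_m                  # starting labor force (millions)
    emp = (1 - u0) * lf0                  # starting employment
    emp += months * jobs_per_month_m      # cumulate steady job gains
    pop = pop0_m * (1 + pop_growth) ** months
    lf = lfpr * pop                       # labor force at the assumed LFPR
    return 1 - emp / lf

# 13 months of 208,000 jobs per month under three participation scenarios:
for lfpr in (0.633, 0.636, 0.640):
    print(lfpr, round(100 * projected_unemployment(13, 0.208, lfpr), 2))
```

The mechanical point the table makes falls out immediately: for the same employment path, a higher participation rate means a larger labor force and therefore a higher unemployment rate.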

It is probably worth noting that the full increase in the Hotchkiss and Rios-Avila estimates happens in the 2015–17 timeframe, raising the interesting possibility that the unemployment rate could fall below the threshold for considering interest rate increases and then move back above it as participation recovers.

Also, it is not at all obvious that rising labor force participation would necessarily arrive along with a rising unemployment rate. From 1996 through 1999, for example, the participation rate rose by nearly 0.7 percentage point (the difference between the rates in the first and third rows in the table above), even as the unemployment rate fell by just over 1½ percentage points. The key was the strong employment growth over that period—almost 260,000 payroll jobs per month on average.

All of that, as should be clear by now, embeds a whole bunch of assumptions, which may make this the most important part of the FOMC's decision criteria:

In determining how long to maintain a highly accommodative stance of monetary policy, the Committee will also consider other information, including additional measures of labor market conditions...

By Dave Altig, executive vice president and research director of the Atlanta Fed

May 13, 2013 in Employment, Labor Markets, Monetary Policy





May 10, 2013

Behavior’s Place in the Labor Force Participation Rate Debate

It's not often that the mainstream media is interested in the nuances of labor market statistics, so last week’s debate over the meaning of labor force participation rates (LFPR) in the pages of the Washington Post and the Wall Street Journal was music to this labor economist's ears.

Sparked by an article by Ben Casselman in his April 29 Wall Street Journal Outlook column, the ensuing back and forth (here, here, here, and here) between Casselman and the Post’s Jim Tankersley focused on what has become a central preoccupation in assessing the likely course of the labor market: Is the recent decline in the labor force participation rate the result of structural factors (e.g., an aging population) or cyclical ones (such as weak economic conditions)? Almost contemporaneously, Bill McBride declared in his recent Calculated Risk blog, "…most of the [recent] decline in the participation rate was due to changing demographics...as opposed to economic weakness."

The changing pattern of labor force participation has been a topic of discussion among economists for some time—for example, see my Federal Reserve Bank of Atlanta Economic Review article—and both Tankersley and Casselman agree that the long-run secular decline in participation is a matter worthy of independent concern. But the Federal Open Market Committee has substantially raised the stakes on disentangling longer-run trends from short-run cyclical (and presumably temporary) movements in labor force participation. It’s done this by introducing into its policy deliberations concepts like unemployment thresholds and qualitative assessments on “substantial” labor market improvement.

Casselman, in an October 2012 WSJ article, cites work by my colleagues at the Chicago Fed, who find that while more than two-thirds of the decline in LFPR between 1999 and 2011 is accounted for by changes in the age distribution of the population, "…over the 2008-2011 period...only one-quarter of the...decline of actual LFPR...can be attributed to demographic factors."

This conclusion—that three-quarters of the decline in the LFPR since the beginning of the Great Recession can be attributed to cyclical factors—is supported by other research. Colleagues at the Kansas City Fed and at the Board of Governors concur that the vast majority of the decline in the LFPR since 2008 is the result of cyclical factors. Even economists outside the Federal Reserve System acknowledge the significant role of cyclical factors in the LFPR decline (for example, see the analysis by economists at the Deutsche Bank).

But there is a critical third piece to the LFPR puzzle that most of these studies ignore. In addition to changing demographics (which have, for example, been associated with a rising share of retirement-age individuals in the total population) and cyclical effects (for example, the tendency for participation to fall when wage growth is tepid or job opportunities scarce), there are also behavioral changes afoot—a point Casselman makes in his final installment of the Post/WSJ debate. For example, individuals of near-retirement age may extend their participation as a result of significant, unexpected declines in wealth. Or women with young children—a demographic group typically less likely to participate in the labor market—may increase participation if a partner loses a job during an economic downturn. In both cases, participation rates for these demographic groups would not fall by as much as expected in response to high unemployment rates alone.

Work that I've done with Fernando Rios-Avila, a colleague at Georgia State University, finds that more than 100 percent of the fall in the LFPR since 2008 is accounted for by the condition of the labor market (cyclical factors), but these particularly strong cyclical forces were countered by increased tendencies to participate (behavioral changes). In other words, if individuals hadn't stepped up to the plate and exhibited even stronger labor force participation behavior than before the recession, the LFPR would be even lower than it is.

To illustrate the role that changing behavior played in the LFPR decline during the Great Recession, the chart below shows how this decline can be separated into a trend component (demographics), a cyclical component (strength of the labor market), and a behavioral component. The solid black line reflects the actual LFPR in March of each year calculated using the Current Population Survey, which is the survey data used by the U.S. Bureau of Labor Statistics to calculate the monthly labor force statistics. The orange line reflects the trend estimate of the LFPR using only demographic data (such as the age distribution of the population) through 2007, projecting out to 2012. As many others have pointed out, changing demographics—the aging of the baby-boom generations, if you will—explains only about 30 percent (in this example) of the actual post-2007 decline in LFPR.

But the chart also reveals something that may be underappreciated. Including a measure of labor market conditions in the projection of the LFPR, as well as a depiction of prerecession behavior (the green line), indicates that the LFPR should be much lower than it actually is. The message from this exercise is that the actual LFPR in 2012 was above what would have been projected had each demographic group exhibited the same labor force participation behavior after the recession as before the recession.
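The counterfactual logic behind the chart can be sketched as a shift-share decomposition: hold each demographic group's participation rate at its prerecession level, apply the later population shares, and attribute the remainder of the decline to within-group rate changes (which combine the cyclical and behavioral effects that the actual exercise further separates using labor market conditions). All numbers below are invented for illustration; the real estimates come from CPS microdata with far finer demographic groups.

```python
# Shift-share decomposition of the aggregate LFPR (all numbers invented for
# illustration; the actual exercise uses CPS microdata and finer groups).
# Aggregate LFPR = sum over groups of (population share) x (participation rate).

groups = ["prime-age", "near-retirement", "65+"]

# Hypothetical population shares and group participation rates by period.
share_2007 = {"prime-age": 0.65, "near-retirement": 0.20, "65+": 0.15}
share_2012 = {"prime-age": 0.63, "near-retirement": 0.21, "65+": 0.16}  # aging shifts weight older
rate_2007 = {"prime-age": 0.83, "near-retirement": 0.65, "65+": 0.16}
rate_2012 = {"prime-age": 0.80, "near-retirement": 0.66, "65+": 0.17}

def lfpr(shares, rates):
    return sum(shares[g] * rates[g] for g in groups)

actual_2007 = lfpr(share_2007, rate_2007)
actual_2012 = lfpr(share_2012, rate_2012)

# Trend counterfactual: 2012 population shares, 2007 participation behavior.
trend_only = lfpr(share_2012, rate_2007)

total_decline = actual_2007 - actual_2012
demographic_part = actual_2007 - trend_only    # pure composition effect
within_group_part = trend_only - actual_2012   # cyclical + behavioral rate changes

print(f"demographics explain {demographic_part / total_decline:.0%} of the decline")
```

With these made-up inputs, composition alone accounts for roughly a third of the decline, mirroring the rough magnitude cited for demographics above; the rest is left to within-group rate changes.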

As it turns out, women, ethnic minorities, older people, and individuals with small children were much more likely to participate in the labor market after the recession than before it. These workers are often referred to as "added workers," or workers who join the labor force to make up for lost income elsewhere in the household. As I noted above, if these demographic groups had not increased their participation in the labor force, the aggregate LFPR would be much lower than it is.

What the chart tells us is that the cyclical factors affecting labor force participation are even more important than generally imagined. However, it is also true that the inevitable march of time will continue to put powerful downward pressure on labor force participation. Indeed, our research predicts only a modest rise in the LFPR if labor markets rebound to prerecession conditions. Our results suggest that relative to the average LFPR over the years 2010–12, the average LFPR over the years 2015–17 will rise by about a third of a percentage point—again, if the labor market returns to prerecession conditions. Though higher than today, this level would still leave the LFPR considerably lower than it was before the recession, primarily reflecting the continued downward pressures of aging baby boomers.

By Julie Hotchkiss, a research economist and policy adviser in the Atlanta Fed’s research department

May 10, 2013 in Economics, Labor Markets | Permalink




Your analysis is right. In 2008, we published in the Journal of Applied Economic Sciences a paper that predicted the LFPR fall in 2010. We based our prediction on demography. Currently, the same model shows that the LFPR should return to 64% in 2013-2014.

Posted by: kio | May 14, 2013 at 04:41 AM

That there are wildly differing takes on the underlying cause for the decline in the LFPR is no big deal. But when Fed economists differ so drastically something seems amiss. See, for example, the linked Philly Fed article from Nov. 2013, which attributes the entire drop in the LFPR since the beginning of 2012 to demographics.

Posted by: Peter Thom | May 10, 2014 at 07:06 AM

The LFPR was 62.8 percent in April 2014, a 36-year low not seen since 1978. If the LFPR were that of April 2000, the U3 unemployment rate would be around 12.5%, and over 20 million workers would be unemployed instead of 9.753 million. That said, Japan, Italy, France and Germany have much lower rates. I wonder if median household living costs, basic needs, have pushed the LFPR up in this country. The Economic Policy Institute publishes a Basic Family Budget Calculator showing that in Topeka, Kansas, at the median, a four-person household needs $63,364 to get by. The median income for a working-age family was $63,967 in 2010, down from $69,233 in 2000, a drop of 7.6%. (This is from State of Working America, Income.) Half of working-age families, therefore, do not have the income to meet their basic family expenses. That would drive up the LFPR. Though the U.S. has the highest disposable income per capita, $40,045, it does not distribute that abundance well, driving more people than would be normal into the labor force. What would be normal? Preferences change, as do needs.

Posted by: Ben Leet | May 13, 2014 at 02:51 PM


May 09, 2013

Weighing In on the Recent Discrepancy in the Inflation Statistics

Recently, there has been a divergence between inflation as measured by the Consumer Price Index (CPI) and the preferred inflation measure of the Federal Open Market Committee (FOMC), which is the price index for personal consumption expenditures (PCE). That divergence is fairly evident in the “core” measures of these two price statistics shown in the chart below.

This strikes us (and others, like Reuters’ Pedro da Costa) as a pretty significant development. The core CPI is telling us that the underlying inflation trend is still holding reasonably close to the FOMC’s longer-term target of 2 percent. But the behavior of the core PCE is rather reminiscent of 2010, when the inflation statistics slid to uncomfortably low levels—a contributing factor to the FOMC’s adoption of QE2. Which of these inflation statistics are we to believe?

Part of the divergence between the two inflation measures is due to rents. Rents are rising at a good pace right now, and since it’s pretty clear that the CPI over-weights their influence, we might be inclined to dismiss some part of the CPI’s more elevated signal. But then there are all those “non-market” components that have been pulling the PCE inflation measure lower—and these aren’t in the CPI. These are components of the PCE price index for which there are no clearly observable transaction prices. They include the “cost” of services provided to households by nonprofit organizations, or the benefits households receive that can only be imputed (i.e., that “free” checking account your bank provides if you maintain a high balance). Since we can’t really observe the price of these things, we’d probably be inclined to dismiss their influence on the PCE inflation measure. But we’ve done the math, and the impact of these two influences accounts for only about a third of the recent gap between the core PCE and the core CPI inflation measures. Most of the disagreement between the two inflation estimates is coming from elsewhere.

We could continue to parse, item by item, all the various components and weights of the two statistics to get to the bottom of this discrepancy. But in the end, such an accounting exercise would merely tell us why the gap between the two measures has emerged, not which measure is giving the best signal of emerging inflation trends.

As an alternative approach, we thought we’d let the data speak for themselves and search for a common trend that runs through the detailed price data. What we have in mind is to compute the “first principal component” of the disaggregated data used to calculate the CPI and the PCE price indexes. The first principal component is a weighting of the data that explains as much of the data variation as possible. So, in effect, the detailed price data in each price index are being reweighted in a way that reveals their most commonly shared trend, and not by their share of consumer expenditure.
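Mechanically, a first principal component of this kind can be computed by standardizing each component's monthly inflation series and taking the leading singular vector of the resulting panel. Here is a minimal sketch on synthetic data standing in for the disaggregated CPI or PCE components; this is not the authors' code, and the data are simulated.

```python
import numpy as np

# Illustrative sketch: extract the first principal component from a panel of
# monthly component inflation rates (T months x N components). Synthetic data
# stand in for the CPI/PCE component series.
rng = np.random.default_rng(0)
T, N = 120, 45
common_trend = np.cumsum(rng.normal(0, 0.1, T))            # shared inflation trend
data = common_trend[:, None] + rng.normal(0, 0.5, (T, N))  # components = trend + idiosyncratic noise

# Standardize each component series, then take the leading left singular vector.
X = (data - data.mean(axis=0)) / data.std(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
first_pc = U[:, 0] * S[0]  # time series of the first principal component

# Share of total variation explained by the first component.
explained = S[0] ** 2 / (S ** 2).sum()
print(f"variance explained by first PC: {explained:.2f}")
```

The point of the exercise is exactly as described in the text: the first principal component reweights the component series to recover their most commonly shared trend, rather than weighting them by expenditure share.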

The chart below shows the 12-month trend of the first principal component derived from the 45 CPI components used in the computation of the Federal Reserve Bank of Cleveland’s median CPI, and the first principal component derived from the 177 components used in the computation of the Federal Reserve Bank of Dallas’s trimmed-mean PCE. (These are the most detailed component price data we could easily get our hands on.)

So what do we make of this picture? Well, three things:

First, inflation as measured by the PCE price index has tended to track about 0.25 percentage point under inflation as measured by the CPI over time. So part of the gap between the two inflation measures appears to be a long-term feature of the two inflation statistics.

Second, the first principal components of both the CPI and the PCE data have been persistently under their precrisis averages. In the case of the PCE measure, the first principal component is under the FOMC’s 2 percent target (a point that has not gone unnoticed by Paul Krugman).

A third takeaway from the chart is that the “disinflation” pattern traced out by these principal components has been gradual and modest—much more so than what the core PCE has recently indicated and what the data were telling us back in 2010.

Does that mean we should ignore the recent disinflation being exhibited in the core PCE inflation measure? Well, let’s put it this way: If you’re a glass-half-full sort, we’d say that the recent disinflation trend exhibited by the PCE price index doesn’t seem to be “woven” into the detailed price data, and it certainly doesn’t look like what we saw in 2010. But to you glass-half-empty types, we’d also point out that getting the inflation trend up to 2 percent is proving to be a curiously difficult task.

By Mike Bryan, vice president and senior economist,

Pat Higgins, economist,

Brent Meyer, economist, and

Nicholas Parker, senior economic research analyst, all in the Atlanta Fed’s research department

May 9, 2013 in Business Inflation Expectations, Economics, Inflation, Pricing | Permalink




No, the correct takeaway is that the focus should be on nominal GDP, which is the number that we know with significantly more certainty. There is no single explanation for why the CPI, the GDP deflator, and the PCE diverge (the principal components are not likely to be stable through time). Sometimes the answer is rents, sometimes it's import prices, sometimes the answer is the various weights. All of the above.

Posted by: dwb | May 10, 2013 at 09:48 AM


May 03, 2013

Building a Better Jobs Calculator: Choose Your Own Payroll/Household Employment Ratio

To provide even greater flexibility, the Federal Reserve Bank of Atlanta's Jobs Calculator has been enhanced to allow the user to adjust another statistic used in the calculations. The statistic is the ratio of payroll to household employment and is a necessary component that links the target unemployment rate with the resulting required payroll employment growth.

The fact that estimates of payroll and household employment numbers reported by the U.S. Bureau of Labor Statistics (BLS) differ each month received a lot of attention in the fall of 2012 when, in October, the BLS reported a whopping 0.3 percentage point drop in the unemployment rate, accompanied by a rather tepid growth of 114,000 jobs in payroll employment. The culprit in that apparent incongruity is that the Household Survey (from which we get the unemployment rate) reported a gain of 873,000 jobs. That particular employment report (and its divergent statistics) received extra attention since it was the last employment report before the November 2012 election.

As Atlanta Fed Research Director Dave Altig pointed out at the time (in this blog post) and as others discussed (here and here), the two most important measures of labor market conditions come from two different surveys—the Establishment Survey, which produces the payroll employment number from the Current Employment Statistics (CES) program, and the Household Survey, which produces the unemployment rate from the Current Population Survey (CPS). Both surveys claim to estimate the number of jobs in the economy. However, the employment numbers they produce are different for several reasons, detailed in one of the Jobs Calculator's FAQs.

The good news is that even though there may be a wide discrepancy in the change in employment reported by the two surveys in any particular month (as we saw in October 2012), any one-month divergence does not persist. In other words, the two employment series closely track each other.

This is good news for the Jobs Calculator, since a conversion needs to be made between the CPS employment implied by the target unemployment rate entered into the Jobs Calculator and the average monthly change in payroll employment (CES) needed to achieve the target unemployment rate. Since the two series closely track each other, wide deviations in month-to-month reported growth numbers will not severely affect the ability of the Jobs Calculator to make longer-term projections (within the limits of the other assumptions of the calculator). (In fact, unanticipated changes in the labor force participation rate are much more potentially problematic in making longer-term projections than are any potential variations in the conversion rate between CPS and CES employment numbers.)

The Jobs Calculator uses the average ratio of CES/CPS employment over the previous 12 months as the default conversion factor and now allows the user to see what happens if that ratio were to be different.

The following example illustrates how innocuous that conversion factor is.

Suppose the goal is to attain a 6.5 percent unemployment rate in two years. Entering 6.5 in the unemployment rate target box and 24 in the box for the number of months you want to take to get there yields 164,917 as the average monthly change in payroll employment needed to achieve that goal (holding everything else constant).

Next, go down to the new line showing, "Average monthly CES/CPS employment ratio." You'll see the current default value for the ratio is 0.940. Click on the chart box on the far right of that line. You'll see that since 1980, that ratio has ranged from a low of just under 0.900 in about 1984 to a high of 0.969 just before 2000. Close the box.

Now, enter the low ratio number of 0.900 in the employment ratio box. At that low ratio, only 157,899 payroll jobs are needed each month to achieve your 6.5 percent unemployment rate in two years.

Next, enter the high ratio number of 0.969 in the employment ratio box. At that high ratio, 170,005 payroll jobs are needed each month to achieve your goal.

The current default CES/CPS ratio provides an estimated number of monthly payroll jobs needed to achieve your specified goal of a 6.5 percent unemployment rate in two years within a 5,000–7,000 job margin, based on the highest and lowest ratio values since 1980.
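The arithmetic implied by these numbers suggests the calculator first computes the required average monthly gain in household (CPS) employment, then multiplies by the CES/CPS ratio to express that gain in payroll (CES) terms. The sketch below backs the CPS figure out of the example above; the calculator's internal labor force and population assumptions are not reproduced here.

```python
# Back out the implied CPS monthly job requirement from the example in the text:
# at the default ratio of 0.940, the calculator reports 164,917 payroll jobs/month.
default_ratio = 0.940
payroll_at_default = 164_917
cps_jobs_per_month = payroll_at_default / default_ratio  # implied household jobs per month

def payroll_needed(ces_cps_ratio):
    """Convert the required household-survey job gain into payroll terms."""
    return ces_cps_ratio * cps_jobs_per_month

for ratio in (0.900, 0.940, 0.969):
    print(f"ratio {ratio:.3f}: {payroll_needed(ratio):,.0f} payroll jobs per month")
```

Rescaling by the historical extremes of the ratio reproduces the 157,899 and 170,005 figures from the walkthrough, which is why the choice of conversion factor moves the answer by only a few thousand jobs.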

The bottom line? When it comes to factors that can derail a longer-term projection of the number of jobs needed to attain a specific unemployment rate in a given period of time, the degree to which household employment estimates deviate from payroll employment estimates is just not that important, nor are monthly discrepancies in these series’ reported growth, since the discrepancies aren’t absorbed into the trends. And there you have the reason we added this flexibility to the Jobs Calculator: to allow users to see this for themselves.

By Julie Hotchkiss, research economist and policy adviser in the research department of the Atlanta Fed

May 3, 2013 in Employment | Permalink




