The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.

September 07, 2017

What Is the "Right" Policy Rate?

What is the right monetary policy rate? The Cleveland Fed, via Michael Derby in the Wall Street Journal, provides one answer—or rather, one set of answers:

The various flavors of monetary policy rules now out there offer formulas that suggest an ideal setting for policy based on economic variables. The best known of these is the Taylor Rule, named for Stanford University's John Taylor, its author. Economists have produced numerous variations on the Taylor Rule that don't always offer a similar story...

There is no agreement in the research literature on a single "best" rule, and different rules can sometimes generate very different values for the federal funds rate, both for the present and for the future, the Cleveland Fed said. Looking across multiple economic forecasts helps to capture some of the uncertainty surrounding the economic outlook and, by extension, monetary policy prospects.

Agreed, and this is the philosophy behind both the Cleveland Fed's calculations based on Seven Simple Monetary Policy Rules and our own Taylor Rule Utility. These two tools complement one another nicely: Cleveland's version emphasizes forecasts for the federal funds rate over different rules and Atlanta's utility focuses on the current setting of the rate over a (different, but overlapping) set of rules for a variety of the key variables that appear in the Taylor Rule (namely, the resource gap, the inflation gap, and the "neutral" policy rate). We update the Taylor Rule Utility twice a month after Consumer Price Index and Personal Income and Outlays reports and use a variety of survey- and model-based nowcasts to fill in yet-to-be released source data for the latest quarter.

We're introducing an enhancement to our Taylor Rule utility page, a "heatmap" that allows the construction of a color-coded view of Taylor Rule prescriptions (relative to a selected benchmark) for five different measures of the resource gap and five different measures of the neutral policy rate. We find the heatmap is a useful way to quickly compare the actual fed funds rate with current prescriptions for the rate from a relatively large number of rules.

In constructing the heatmap, users have options on measuring the inflation gap and setting the value of the "smoothing parameter" in the policy rule, as well as establishing the weight placed on the resource gap and the benchmark against which the policy rule is compared. (The inflation gap is the difference between actual inflation and the Federal Open Market Committee's 2 percent longer-term objective. The smoothing parameter is the degree to which the rule is inertial, meaning that it puts weight on maintaining the fed funds rate at its previous value.)

For example, assume we (a) measure inflation using the four-quarter change in the core personal consumption expenditures price index; (b) put a weight of 1 on the resource gap (that is, specify the rule so that a percentage point change in the resource gap implies a 1 percentage point change in the rule's prescribed rate); and (c) specify that the policy rule is not inertial (that is, it places no weight on last period's policy rate). Below is the heatmap corresponding to this policy rule specification, comparing the rule's prescription to the current midpoint of the fed funds rate target range:
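To make the mechanics concrete, here is a minimal sketch of how a prescription like the ones behind each heatmap cell could be computed. The 0.5 weight on the inflation gap is the classic Taylor (1993) value, assumed here for illustration, and the input values below are invented, not actual readings from the utility.

```python
def prescribed_rate(r_star, inflation, resource_gap, prev_rate,
                    inflation_target=2.0, gap_weight=1.0,
                    infl_weight=0.5, rho=0.0):
    """Taylor-type rule prescription, optionally inertial.

    rho is the smoothing parameter described above: rho = 0 gives the
    non-inertial rule of the example; rho > 0 puts weight on last
    period's rate. The 0.5 inflation-gap weight is an assumption.
    """
    taylor = (r_star + inflation
              + infl_weight * (inflation - inflation_target)
              + gap_weight * resource_gap)
    return rho * prev_rate + (1.0 - rho) * taylor

inflation, prev_rate = 1.4, 1.125   # illustrative inputs, not actual data
for r_star in (2.0, -0.22):         # original Taylor assumption vs. a low r* estimate
    for gap in (-0.5, 0.5):         # hypothetical resource-gap readings
        rate = prescribed_rate(r_star, inflation, gap, prev_rate)
        print(f"r*={r_star:+5.2f}  gap={gap:+.1f}  ->  {rate:.2f}%")
```

Varying r* down the rows and the resource gap across the columns of such a grid reproduces the heatmap's basic structure.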

We should note that all of the terms in the heatmap are described in detail in the "Overview of Data" and "Detailed Description of Data" tabs on the Taylor Rule Utility page. In short, U-3 (the standard unemployment rate) and U-6 are measures of labor underutilization defined here. We introduced ZPOP, the utilization-to-population ratio, in this macroblog post. "Emp-Pop" is the employment-population ratio. The natural (real) interest rate is denoted by r*. The abbreviations for the last three row labels denote estimates of r* from Kathryn Holston, Thomas Laubach, and John C. Williams, Thomas Laubach and John C. Williams, and Thomas Lubik and Christian Matthes.

The color coding (described on the webpage) should be somewhat intuitive. Shades of red mean the midpoint of the current policy rate range is at least 25 basis points above the rule prescription, shades of green mean that the midpoint is more than 25 basis points below the prescription, and shades of white mean the midpoint is within 25 basis points of the rule.
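The classification just described can be sketched in a few lines. The 1.125 percent midpoint assumes the 1.00 to 1.25 percent target range in effect when this post was written, and the prescriptions fed in are hypothetical.

```python
def heatmap_color(midpoint, prescription, band=0.25):
    """Color-code a rule prescription against the policy-rate midpoint."""
    diff = midpoint - prescription
    if diff >= band:
        return "red"    # midpoint at least 25 basis points above the prescription
    if diff < -band:
        return "green"  # midpoint more than 25 basis points below the prescription
    return "white"      # midpoint within 25 basis points of the prescription

midpoint = 1.125  # assumed midpoint of the 1.00-1.25 percent target range
for prescription in (0.4, 1.2, 2.6):   # hypothetical rule prescriptions
    print(f"{prescription:.2f}% -> {heatmap_color(midpoint, prescription)}")
```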

The heatmap above has "variations on the Taylor Rule that don't always offer a similar story" because the colors range from a shade of red to shades of green. But certain themes do emerge. If, for example, you believe that the neutral real rate of interest is quite low (the Laubach-Williams and Lubik-Matthes estimates in the bottom two rows are −0.22 and −0.06), your belief about the magnitude of the resource gap would be critical to determining whether this particular rule suggests that the policy rate is already too high, has a bit more room to increase, or is just about right. On the other hand, if you are an adherent of the original Taylor Rule and its assumption of a 2 percent long-run neutral rate (the top row of the chart), there isn't much ambiguity to the conclusion that the current rate is well below what the rule indicates.

"[D]ifferent rules can sometimes generate very different values for the federal funds rate, both for the present and for the future." Indeed.

September 7, 2017 in Business Cycles, Data Releases, Economics, Monetary Policy | Permalink



July 11, 2017

Another Look at the Wage Growth Tracker's Cyclicality

Though Friday's employment report showed that payroll employment rose by a robust 222,000 jobs in June—much higher than most forecasts—enthusiasm for the news was tempered somewhat by average hourly wages coming in below expectations. Is the (ongoing) relatively tepid pace of wage growth a cause for concern? Perhaps, but the ups and downs of average wages over the course of the business cycle—the pattern of expansion-recession-expansion that typifies modern economies—are a bit more complicated than they may seem.

The year-over-year growth in the average wage level that we see in the official employment conditions report is influenced by wages paid to people who were employed either today or a year earlier. That is, the wages of those who remained employed (EE) as well as those who entered employment (NE) and those who exited employment (EN). Because the individuals in these groups may command different wages on average—due to experience, for example—the usual wage growth measures confound the effects of changes in the average wage of people with particular types of year-over-year employment histories. In that sense, the usual wage growth statistic may not exactly be comparing apples to apples.

Research by, for example, Solon, Barsky, and Parker (1992) and Daly and Hobijn (2016) explores the effect of the changing composition of workers over time using microdata on individuals with known employment histories. They show that people who enter and exit employment have a lower average wage than those who stay employed over the year and that the net exit/entry flow increases when the labor market is weak—more people leave employment, and fewer people enter it. As a result, the disproportionate increase in the net flow of workers with a lower-than-average wage serves to boost the overall average wage level during recessions.

One approach to making a more apples-to-apples comparison of average wages over time is to strip out the effect that comes from the change in the share of workers who stay employed and who entered or exited employment. Technically speaking, the composition-adjusted wage growth series is determined by adding the change in average log hourly wage within the EE group and the same change within the EN/NE group, while holding constant the respective average population shares in each group. The chart below illustrates the result of this adjustment.
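As a rough illustration of that decomposition, here is a toy calculation with invented wage levels and population shares (not actual Current Population Survey figures):

```python
from math import log

# Toy average log hourly wages by employment-history group,
# this year versus a year earlier (illustrative values only).
ee_now, ee_before = log(26.0), log(25.0)   # employed in both years (EE)
ne_now = log(18.0)                         # entrants' average wage today (NE)
en_before = log(19.0)                      # exiters' average wage a year ago (EN)

# Fixed average population shares of each group (the "composition
# adjustment" holds these constant rather than letting them vary).
share_ee, share_flow = 0.90, 0.10

delta_ee = ee_now - ee_before      # within-EE change in average log wage
delta_flow = ne_now - en_before    # entry/exit-margin change in average log wage
adjusted_growth = share_ee * delta_ee + share_flow * delta_flow

print(f"EE wage growth:       {100 * delta_ee:.2f}%")
print(f"Entry/exit margin:    {100 * delta_flow:.2f}%")
print(f"Adjusted wage growth: {100 * adjusted_growth:.2f}%")
```

Because entrants earn less on average than exiters did, the entry/exit margin drags the adjusted series below the EE-only change, as in the text.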

I should note that the change in the average wage uses data only for people who have a known employment status a year earlier, which results in a wage growth series that is somewhat higher than the change in the average wage of all employed people, some of whom have an unknown employment history.

As the chart shows, relative to the adjusted series (the green line), growth in overall average wages (the orange line) stayed up longer during the last recession, then fell by less, and was slower to adjust to improving labor market conditions (falling unemployment) after the recession ended. The correlation between the overall growth in average wages and the inverse of the unemployment rate is 0.72, and this correlation rises to 0.79 using the adjusted wage growth series.

An alternative approach to making a more apples-to-apples comparison of average wages is to ignore the entry/exit margin and only look at people who are employed both today and a year earlier (EE). The Wage Growth Tracker (computed here as the difference in average log hourly wage) does that for the subset of EE people who have an actual wage record in both periods (no earnings information is collected for self-employed workers in the Current Population Survey). The following chart compares this version of the Wage Growth Tracker with the growth in overall average wages.

The Atlanta Fed's Wage Growth Tracker uses the median change in wages rather than the average change, but it displays very similar dynamics.
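A toy version of that matched-sample calculation, with invented wages rather than actual CPS records, looks like this:

```python
import statistics
from math import log

# Invented hourly wages for the same five people observed this month
# and twelve months earlier (EE workers with a wage record in both periods).
wages_before = [15.0, 22.0, 30.0, 12.5, 40.0]
wages_now    = [15.6, 23.1, 30.0, 13.5, 41.0]

# Individual 12-month log wage changes.
changes = [log(w1) - log(w0) for w0, w1 in zip(wages_before, wages_now)]

# The Wage Growth Tracker uses the median change; the version compared
# in the chart uses the mean. Both come from the same matched sample.
median_growth = statistics.median(changes)
mean_growth = statistics.mean(changes)
print(f"median: {100 * median_growth:.2f}%  mean: {100 * mean_growth:.2f}%")
```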

As the chart shows, the growth in average wages for those who remain in wage and salary jobs (the red line) is a bit smoother than growth in overall average wages (the orange line) and moves more in sync with the inverse of the unemployment rate (the correlation is 0.85). However, its level is quite a bit higher than growth in overall average wages. This disparity is because the average wage for those entering employment is less than for those exiting, so the change in average wages along the entry/exit margin is always negative.

But enough math—let's put this all together. If you want a measure of wage growth that reflects relative labor market strength, then looking at wage growth after controlling for entry/exit composition effects is probably a good idea. The Wage Growth Tracker seems to do that job reasonably well. However, the Wage Growth Tracker almost certainly overstates the growth in per-hour wage costs that employers are facing. Most importantly, it ignores the employment exit/entry margin. Hence, one should avoid interpreting the Wage Growth Tracker as a direct measure of growth in labor costs—a point also discussed in this recent Atlanta Fed podcast episode. The next reading from the Wage Growth Tracker will be available when the Census Bureau releases the Current Population Survey microdata, usually within a couple of weeks of the national employment report. Given that the unemployment rate has remained relatively low recently, I would expect the Wage Growth Tracker to stay at a relatively high level. Check back here then and we'll see what we learn.

July 11, 2017 in Data Releases, Employment, Labor Markets, Wage Growth | Permalink


Great post! Looking forward to your discussion on Wages, Earnings, & Compensation at the NABE Economic Measurement Seminar next week!

Posted by: Chris Herdelin | July 11, 2017 at 02:30 PM

Very insightful research. Have you analyze the wage of NE and EN? I am curious if those exit the employment (for age 55+) are having higher wages than those maintain employment in that age group as they might earn enough to retire. If you have data in that direction, would you mind sharing with us?

Thanks much.

Posted by: Longying Zhao | July 12, 2017 at 12:04 PM


February 05, 2016

Introducing the Refined Labor Market Spider Chart

In January 2013, Atlanta Fed research director Dave Altig introduced the Atlanta Fed's labor market spider chart in a macroblog post.

In a follow-up post that June, Atlanta Fed colleague Melinda Pitts and I introduced a dedicated page for the spider chart located at the Center for Human Capital Studies (CHCS) webpage. It shows the distribution of 13 labor market indicators relative to their readings just before the 2007–09 recession (December 2007) and the trough of the labor market following that recession (December 2009). The substantial improvement in the labor market during the past three years is quite evident in the spider chart below.

As of December 2012, none of the indicators had yet reached their prerecession levels, and some had a long way to go. Now, many of these indicators are near their prerecession values—and some have blown by them.

To make the spider chart more relevant in an environment with considerably less labor market slack than three years ago, we are introducing a modified version, which you can see here. Below is an example of a chart I created using the menu bars on the spider chart's web page:

In this chart, I plot the May 2004 and November 2015 percentile ranks of labor market indicators relative to their distributions since March 1994. As with the previous spider chart, indicators such as the unemployment rate, where larger values indicate more labor market slack, have been multiplied by –1. The innermost and outermost rings represent the minimum and maximum values of the variables from March 1994 to January 2016. The three dashed gray rings in between are the 25th, 50th, and 75th percentiles of the distributions. For example, the November 2015 value of 12-month average hourly earnings growth (2.26 percent) is the 23rd percentile of its distribution. This means that 23 percent of the other monthly observations on hourly earnings growth since March 1994 are lower than it is.
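The percentile-rank idea can be sketched as follows. The data are invented, and the actual chart's window and handling of ties may differ from this simple version.

```python
def percentile_rank(history, value):
    """Percent of monthly observations in history strictly below value."""
    below = sum(1 for x in history if x < value)
    return 100.0 * below / len(history)

# Toy 12-month average hourly earnings growth readings (percent).
earnings_growth = [1.8, 2.0, 2.26, 2.5, 2.9, 3.0, 3.1, 3.4, 3.8, 4.0]
print(percentile_rank(earnings_growth, 2.26))   # 2 of 10 observations are lower

# Indicators where larger values mean more slack (like the unemployment
# rate) are multiplied by -1 first, so a higher rank always reads as a
# stronger labor market.
unemployment = [5.0, 5.6, 7.3, 9.9, 4.9, 6.1, 8.0, 5.4]
inverted = [-u for u in unemployment]
print(percentile_rank(inverted, -4.9))          # 7 of 8 observations are lower
```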

I chose May 2004 and November 2015 because they had the last employment situation reports before "liftoffs" of the federal funds rate. November 2015 appears to be stronger than May 2004 for some indicators (job openings, unemployment rate, and initial claims) and weaker for others (hires rate, work part-time for economic reasons, and the 12-month growth rate of the Employment Cost Index).

The average percentile ranks of the variables for these two months are similar, as the chart below depicts:

Also shown in the chart is the Kansas City Fed's Level of Activity Labor Market Conditions Indicator. It is a sum of 24 not equally weighted labor market indicators, standardized over the period from 1992 to the present. In spite of its methodological and source-data differences with the average percentile rank measure plotted above, it tracks quite closely, especially since 2004. However, as shown in the spider chart that I referred to above, there is quite a bit of variation within the indicators that may provide additional information to our analysis of the average trends.

We made a number of other changes to the spider chart to ensure it reflects current labor market issues. These changes are documented in the FAQs and "Indicators" sections of the new spider chart page. Of particular note, users can choose not only the years for which they wish to track information, but also the period of reference that provides the basis of the spider chart. The payroll employment variable is now the three-month average change rather than a level. Temporary help services employment has been dropped, and two measures of 12-month compensation growth and the employment-population ratio (EPOP) for "prime-age workers" (25 to 54 years) have been added.

Some care should be taken when comparing recent labor market data values with those 10 or more years ago as structural changes in the labor market might imply that a "normal" value today is different from a "normal" value in, say, 2004. The variable choices for the refined spider chart were made to mitigate this problem to some extent. For example, we use the prime-age EPOP as a crude adjustment for population aging, which has put downward pressure on the labor force participation rate and EPOP over the past 10 years (roughly 2 percentage points). This doesn't entirely resolve the comparability issue since, within the prime-age population, the self-reporting rate of illness or disability as a reason for not wanting a job has increased about 1.5 percentage points since 1998 (see the macroblog posts here and here and the CHCS Labor Force Participation Dynamics webpage). If this increase in disability reporting is partly structural—and a Brookings study by Fed economist Stephanie Aaronson and others concludes it is—some of the decline in the prime-age EPOP since the late 1990s may not be a result of a weaker labor market per se.

Other variables in the spider chart may have had structural changes as well. For example, a study by San Francisco Fed economists Rob Valletta and Catherine van der List concludes that structural factors explain just under half of the rise in the share of workers employed part-time for economic reasons over the 2006 to 2013 period.

To partially account for structural changes in trends, we allow the user to select one of 11 time periods over which the distributions are calculated. The default period is March 1994 to present, which is what was used in the example above, but users can choose a window as short as five years where, presumably, structural changes are less important. A trade-off with using a short window is that a "normal" value may not produce a result close to the median. For example, the median unemployment rate is 5.6 percent since March 1994 and 7.3 percent since February 2011. The latter value is much farther away from the most recent estimates of the natural rate of unemployment from the Congressional Budget Office and the Survey of Professional Forecasters (both 5.0 percent).

In our June 2013 macroblog post introducing the spider chart, we wrote that we would reevaluate our tools and determine a more appropriate way to monitor the labor market when "the labor market has turned a corner into expansion." The new spider chart is our response to the stronger labor market. We hope users find the tool useful.

February 5, 2016 in Data Releases, Economic conditions, Employment, Labor Markets | Permalink



November 24, 2014

And the Winner Is...Full-Time Jobs!

Each month, the U.S. Census Bureau for the U.S. Bureau of Labor Statistics (BLS) surveys about 60,000 households and asks people 15 years and older whether they are employed and, if so, if they are working full-time or part-time. The BLS defines full-time employment as working at least 35 hours per week. This survey, referred to as both the Current Population Survey and the Household Survey, is what produces the monthly unemployment rate, labor force participation rate, and other statistics related to activities and characteristics of the U.S. population.

For many months after the official end of the Great Recession in June 2009, the Household Survey produced less-than-happy news about the labor market. The unemployment rate didn't start to decline until October 2009, and nonfarm payroll job growth didn't emerge confidently from negative territory until October 2010. Now that the unemployment rate has fallen to 5.8 percent—much faster than most would have expected even a year ago—the attention has turned to the quality, rather than quantity, of jobs. This scrutiny is driven by a stubbornly high rate of people employed part-time "for economic reasons" (PTER). These are folks who are working part-time but would like a full-time job. Several of my colleagues here at the Atlanta Fed have looked at this phenomenon from many angles (here, here, here, here, and here).

The elevated share of PTER has led some to conclude that, yes, the economy is creating a significant number of jobs (an average of more than 228,000 nonfarm payroll jobs each month in 2014), but these are low-quality, part-time jobs. Several headlines have popped up over the past year or so claiming that "...most new jobs have been part-time since Obamacare became law," "Most 2013 job growth is in part-time work," "75 Percent Of Jobs Created This Year [2013] Were Part-Time," "Part-time jobs account for 97% of 2013 job growth," and as recently as July of this year, "...Jobs Report Is Great for Part-time Workers, Not So Much for Full-Time."

However, a more careful look at the postrecession data illustrates that since October 2010, with the exception of four months (November 2010 and May–July 2011), the growth in the number of people employed full-time has dominated growth in the number of people employed part-time. Of the additional 8.2 million people employed since October 2010, 7.8 million (95 percent) are employed full-time (see the charts).

The pair of charts illustrates the contribution of the growth in part-time and full-time jobs to the year-over-year change in total employment between January 2000 and October 2014. By zooming in, we can see the same thing from October 2010 (when payroll job growth entered consistently positive territory) to October 2014. Job growth from one month to the next, even using seasonally adjusted data, is very volatile.

To get a better idea of the underlying stable trends in the data, it is useful to compare outcomes in the same month from one year to the next, which is the comparison that the charts make. The black line depicts the change in the number of people employed each month compared to the number employed in the same month the previous year. The green bars show the change in the number of full-time employed, and the purple bars show the change in the number of part-time employed.
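The decomposition behind the bars and the line can be sketched with invented employment levels (in millions, not actual Household Survey data):

```python
# Toy levels for one calendar month, this year versus the same month
# a year earlier (illustrative values only).
full_time_now, full_time_year_ago = 120.5, 118.2
part_time_now, part_time_year_ago = 27.4, 27.0

ft_contribution = full_time_now - full_time_year_ago   # the green bars
pt_contribution = part_time_now - part_time_year_ago   # the purple bars
total_change = ft_contribution + pt_contribution       # the black line

print(f"full-time: +{ft_contribution:.1f}M  "
      f"part-time: +{pt_contribution:.1f}M  "
      f"total: +{total_change:.1f}M")
```

By construction, the two bars for each month sum to the value of the black line.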

During the Great Recession (until about October 2010), the growth in part-time employment clearly exceeded growth in full-time employment, which was deep in negative territory. The current high level of PTER employment is likely to reflect this extended period of time in which growth in part-time employment exceeded that of full-time employment. But in every month since August 2011, the increase in the number of full-time employed from the year before has far exceeded the increase in the number of part-time employed. This phenomenon includes all of the months of 2013, in spite of what some of the headlines above would have you believe.

So, in the post-Great Recession era, the growth in full-time employment is, without a doubt, way out ahead.

Author's note: The data used in this post, which are the same data used to generate the headlines linked above, reflect either full-time or part-time employment (total hours of work at least or less than 35 per week, respectively). They do not necessarily reflect employment in a single job.

November 24, 2014 in Data Releases, Economic conditions, Employment, Labor Markets | Permalink





July 21, 2014

GDP Growth: Will We Find a Higher Gear?

We are still more than a week away from receiving the advance report for U.S. gross domestic product (GDP) from April through June. Based on what we know to date, second-quarter growth will be a large improvement over the dismal performance seen during the first three months of this year. As of today, our GDPNow model is reading an annualized second-quarter growth rate of 2.7 percent. Given that the economy declined by 2.9 percent in the first quarter, the prospects for the anticipated near-3 percent growth for 2014 as a whole look pretty dim.

The first-quarter performance was dominated, of course, by unusual circumstances that we don't expect to repeat: bad weather, a large inventory adjustment, a decline in real exports, and (especially) an unexpected decline in health services expenditures. Though those factors may mean a disappointing growth performance for the year as a whole, we will likely be willing to write the first quarter off as just one of those things if we can maintain the hoped-for 3 percent pace for the balance of the year.

Do the data support a case for optimism? We have been tracking the six-month trends in four key series that we believe to be especially important for assessing the underlying momentum in the economy: consumer spending (real personal consumption expenditures, or real PCE) excluding medical services, payroll employment, manufacturing production, and real nondefense capital goods shipments excluding aircraft.

The following charts give some sense of how things are stacking up. We will save the details for those who are interested, but the idea is to place the recent performance of each series, given its average growth rate and variability since 1990, in the context of GDP growth and its variability over that same period.
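One simple way to carry out a standardization of that kind, offered here as a sketch with invented numbers rather than the model's actual method, is to express the latest reading of a series as a z-score against its own history and then rescale by the mean and standard deviation of GDP growth over the same period:

```python
import statistics

def implied_gdp_growth(series, gdp_history):
    """Map a series' latest reading into GDP-growth units via a z-score."""
    z = (series[-1] - statistics.mean(series)) / statistics.pstdev(series)
    return statistics.mean(gdp_history) + z * statistics.pstdev(gdp_history)

payrolls = [0.5, 1.0, 1.5, 2.0, 2.5, 2.5]   # toy six-month growth readings
gdp      = [1.0, 2.0, 3.0, 4.0, 2.0, 3.0]   # toy GDP growth history
print(f"implied GDP growth: {implied_gdp_growth(payrolls, gdp):.2f}%")
```

A series running well above its own historical average then translates into an above-average implied GDP growth rate, which is the sense in which payrolls below are "sending signals of an even stronger pace."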





What do we learn from the foregoing charts? Three out of four of these series appear to be consistent with an underlying growth rate in the range of 3 percent. Payroll employment growth, in fact, is beginning to send signals of an even stronger pace.

Unfortunately, the series that looks the weakest relates to consumer spending. If we put any stock in some pretty basic economic theory, spending by households is likely the most forward-looking of the four measures charted above. That, to us, means a cautious attitude is still the appropriate one. Or, to quote from a higher Atlanta Fed power:

... it will likely be hard to confirm a shift to a persistent above-trend pace of GDP growth even if the second-quarter numbers look relatively good.

This experience suggests to me that we can misread the vital signs of the economy in real time. Notwithstanding the mostly positive and encouraging character of recent data, we policymakers need to be circumspect when tempted to drop the gavel and declare the case closed. In the current situation, I feel it's advisable to accrue evidence and gain perspective. It will take some time to validate an outlook that assumes above-trend growth and associated solid gains in employment and price stability.

Photo of Dave AltigBy Dave Altig, executive vice president and research director, and


Photo of Pat HigginsPat Higgins, a senior economist, both in the Atlanta Fed's research department


July 21, 2014 in Data Releases, Economic Growth and Development, Forecasts, GDP | Permalink





July 18, 2014

Part-Time for Economic Reasons: A Cross-Industry Comparison

With employment trends having turned solidly positive in recent months, attention has focused on the quality of the jobs created. See, for example, the different perspectives of Mortimer Zuckerman in the Wall Street Journal and Derek Thompson in the Atlantic. Zuckerman highlights the persistently elevated level of part-time employment—a legacy of the cutbacks firms made during the recession—whereas Thompson points out that most employment growth on net since the end of the recession has come in the form of full-time jobs.

In measuring labor market slack, the part-time issue boils down to how much of the elevated level of part-time employment represents underutilized labor resources. The U-6 measure of labor underutilization, produced by the U.S. Bureau of Labor Statistics, includes people who say they want to and are able to work a full-time schedule but are working part-time because of slack work or business conditions, or because they could find only part-time work. These individuals are usually referred to as working part-time for economic reasons (PTER). Other part-time workers are classified as working part-time for non-economic reasons (PTNER). Policymakers have been talking a lot about U-6 recently. See, for example, here and here.

The "lollipop" chart below sheds some light on the diversity of the share of employment that is PTER and PTNER across industries. The "lolly" end of the lollipop denotes the average mix of employment that is PTER and PTNER in 2013 within each industry, and the size of the lolly represents the size of the industry. The bottom of the "stem" of each lollipop is the average PTER/PTNER mix in 2007. The red square lollipop is the percent of all employment that is PTER and PTNER for the United States as a whole. (Note that the industry classification is based on the worker's main job. Part-time is defined as less than 35 hours a week.)

The primary takeaways from the chart are:

  1. The percent of the workforce that is part time varies greatly across industries (compare for example, durable goods manufacturing with restaurants).
  2. All industries have a greater share of PTNER workers than PTER workers (for example, the restaurant industry in 2013 had 32 percent of workers who said they were PTNER and about 13 percent who declared themselves as PTER).
  3. All industries had a greater share of PTER workers in 2013 than in 2007 (all the lollipops point upwards).
  4. Most industries have a lower share of PTNER workers than in the past (most of the lollipops lean to the left).
  5. Most industries have a greater share of part-time workers (PTER + PTNER) than in the past (the increase in PTER exceeds the decline in PTNER for most industries).

Another fact that is a bit harder to see from this chart is that in 2007, industries with the largest part-time workforces did not necessarily have the largest PTER workforces. In 2013, it was more common for a large part-time workforce to be associated with a large PTER workforce. In other words, the growth in part-time worker utilization in industries such as restaurants and some segments of retail has brought with it more people who are working part-time involuntarily.

So the increase in PTER since 2007 is widespread. But is that a secular trend? If it is, then the increase in the PTER share would be evident since the recession as well. The next lollipop chart presents evidence by comparing 2013 with 2012:

This chart shows a recent general improvement. In fact, 25 of the 36 industries pictured in the chart above have experienced a decline in the share of PTER, and 21 of the 36 have a smaller portion working part-time in total. Exceptions are concentrated in retail, an industry that represents a large share of employment. In total, 20 percent of people are employed in industries that experienced an increase in PTER from 2012 to 2013. So while overall there has been a fairly widespread (but modest) recent improvement in the situation, the percent of the workforce working part-time for economic reasons remains elevated compared with 2007 for all industries. Further, many people are employed in industries that are still experiencing gains in the share that is PTER.

Why has the PTER share continued to increase for some industries? Are people who normally work full-time jobs still holding on to those part-time retail jobs until something else becomes available, has there been a shift in the use of part-time workers in those industries, or is there a greater demand for full-time jobs than before the recession? We'll keep digging.

By John Robertson, a vice president and senior economist, and


Ellyn Terry, a senior economic analyst, both of the Atlanta Fed's research department


July 18, 2014 in Data Releases, Employment, Labor Markets, Unemployment | Permalink




I think one of your axes on the lollipop charts is mislabeled.

They both read % employed that are PTNER; shouldn't one of them read % employed that are PTER?


Posted by: Heidi | July 18, 2014 at 05:11 PM

We noticed that, and we made that fix shortly after the chart's initial posting. We appreciate your close reading of the data!

Posted by: macroblog | July 21, 2014 at 03:45 PM

Very helpful and interesting article. I find it curious that the 2014 arrival of the Affordable Care Act didn't at least merit a mention as one possible reason why PTER remained elevated in 2013.

Posted by: Tschurin | June 11, 2015 at 12:22 PM


July 10, 2014

Introducing the Atlanta Fed's GDPNow Forecasting Model

The June 18 statement from the Federal Open Market Committee opened with this (emphasis mine):

Information received since the Federal Open Market Committee met in April indicates that growth in economic activity has rebounded in recent months.... Household spending appears to be rising moderately and business fixed investment resumed its advance, while the recovery in the housing sector remained slow. Fiscal policy is restraining economic growth, although the extent of restraint is diminishing.

I highlighted the business fixed investment (BFI) part of that passage because it contracted at an annual rate of 1.2 percent in the first quarter of 2014. Any substantial turnaround in growth in gross domestic product (GDP) from its dismal first-quarter pace would seem to require that BFI did in fact resume its advance through the second quarter.

We won't get an official read on BFI—or on real GDP growth and all of its other components—until July 30, when the U.S. Bureau of Economic Analysis (BEA) releases its advance (or first) GDP estimates for the second quarter of 2014. But that doesn't mean we are completely in the dark on what is happening in real time. We have enough data in hand to make an informed statistical guess on what that July 30 number might tell us.

The BEA's data-construction machinery for estimating GDP is laid out in considerable detail in its NIPA Handbook. Roughly 70 percent of the advance GDP release is based on source data from government agencies and other data providers that are available prior to the BEA official release. This information provides the basis for what have become known as "nowcasts" of GDP and its major subcomponents—essentially, real-time forecasts of the official numbers the BEA is likely to deliver.
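The bridge from released source data to a headline estimate can be illustrated with a toy calculation. Everything here is invented for illustration (the components, weights, growth rates, and the simple fill-in rule); GDPNow's actual model is far richer:

```python
# Toy "nowcast": aggregate GDP growth from subcomponent growth rates, using
# released source data where available and a simple statistical fill-in
# (here, a historical average) where it is not. All numbers are illustrative.

# Nominal expenditure shares of GDP (illustrative)
shares = {"consumption": 0.68, "investment": 0.17,
          "government": 0.18, "net_exports": -0.03}

# Annualized real growth rates: some pinned down by released source data,
# others (None) not yet observed for the quarter
observed = {"consumption": 2.5, "investment": None,
            "government": 0.5, "net_exports": None}

# Hypothetical historical-average growth rates used to fill the gaps
historical_avg = {"consumption": 2.2, "investment": 4.0,
                  "government": 1.0, "net_exports": 0.0}

def nowcast(shares, observed, historical_avg):
    """Share-weighted sum of component growth, filling unreleased data."""
    filled = {k: (v if v is not None else historical_avg[k])
              for k, v in observed.items()}
    return sum(shares[k] * filled[k] for k in shares)

print(round(nowcast(shares, observed, historical_avg), 2))  # 2.47
```

As each new data release arrives, an observed value replaces a fill-in and the nowcast updates, which is why GDPNow changes several times a month.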

Many nowcast variants are available to the public: the Wall Street Journal Economic Forecasting Survey, the Philadelphia Fed Survey of Professional Forecasters, and the CNBC Rapid Update, for example. In addition, a variety of proprietary nowcasts are available to subscribers, including Aspen Publishers' Blue Chip Publications, Macroeconomic Advisers GDP Tracking, and Moody's Analytics high-frequency model.

With this macroblog post, we introduce the Federal Reserve Bank of Atlanta's own nowcasting model, which we call GDPNow.

GDPNow will provide nowcasts of GDP and its subcomponents on a regularly updated basis. These nowcasts will be available on the pages of the Atlanta Fed's Center for Quantitative Economic Research (CQER).

A few important notes about GDPNow:

  • The GDPNow model forecasts are nonjudgmental, meaning that the forecasts are taken directly from the underlying statistical model. (These are not official forecasts of either the Atlanta Fed or its president, Dennis Lockhart.)
  • Because nowcasts are often based on both modeling and judgment, there is no reason to expect that GDPNow will agree with alternative forecasts. And we do not intend to present GDPNow as superior to those alternatives. Different approaches have their pluses and minuses. An advantage of our approach is that, because it is nonjudgmental, our methodology is easily replicable. But it is always wise to avoid reliance on a single model or source of information.
  • GDPNow forecasts are subject to error, sometimes substantial. Internally, we've regularly produced nowcasts from the GDPNow model since introducing an earlier version of it in an October 2011 macroblog post. A real-time track record for the model nowcasts just before the BEA's advance GDP release is available on the CQER GDPNow webpage, and will be updated on a regular basis to help users make informed decisions about the use of this tool.

So, with that in hand, does it appear that BFI in fact "resumed its advance" last quarter? The table below shows the current GDPNow forecasts:

We will update the nowcast five to six times each month following the releases of certain key economic indicators listed in the frequently asked questions. Look for the next GDPNow update on July 15, with the release of the retail trade and business inventory reports.

If you want to dig deeper, the GDPNow page includes downloadable charts and tables as well as numerical details including the model's nowcasts for GDP, its subcomponents, and how the subcomponent nowcasts are built up from both the underlying source data and the model parameters. This working paper supplies the model's technical documentation. We hope economy watchers find GDPNow to be a useful addition to their information sets.

By Pat Higgins, a senior economist in the Atlanta Fed's research department

July 10, 2014 in Data Releases, Economic Growth and Development, Forecasts, GDP | Permalink




Is there a link or RSS feed one could follow to see each update of the GDPNow data?

Posted by: Bryan Willman | July 12, 2014 at 01:28 PM

Thanks for your question about GDPNow. To receive updates, please sign up for our e-mail notifications and select the Center for Quantitative Research. The e-mail subscription page link is at www.frbatlanta.org/webscriber/user/dsp_login.cfm. If you have any additional questions, please contact us at pubs@frbatlanta.org.

For more information about GDPNow, visit www.frbatlanta.org/cqer/researchcq/gdpnow.cfm.

Posted by: macroblog | July 15, 2014 at 12:25 PM


June 26, 2014

Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 3)

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

This is the last of three posts on that talk. The first post reviewed alternative inflation measures; the second looked at ways to work with the Consumer Price Index to get a clear view of inflation. The full text of the speech is available on the Atlanta Fed's events web page.

The challenge of communicating price stability

Let me close this blog series with a few observations on the criticism that measures of core inflation, and specifically the CPI excluding food and energy, disconnect the Federal Reserve from households and businesses "who know price changes when they see them." After all, don't the members of the Federal Open Market Committee (FOMC) eat food and use gas in their cars? Of course they do, and if it is the cost of living the central bank intends to control, the prices of these goods should necessarily be part of the conversation, notwithstanding their observed volatility.

In fact, in the popularly reported all-items CPI, the Bureau of Labor Statistics has already removed about 40 percent of the monthly volatility in the cost-of-living measure through its seasonal adjustment procedures. I think communicating in terms of a seasonally adjusted price index makes a lot of sense, even if nobody actually buys things at seasonally adjusted prices.

Referencing alternative measures of inflation presents some communications challenges for the central bank to be sure. It certainly would be easier if progress toward either of the Federal Reserve's mandates could be described in terms of a single, easily understood statistic. But I don't think this is feasible for price stability, or for full employment.

And with regard to our price stability mandate, I suspect the problem of public communication runs deeper than the particular statistics we cite. In 1996, Robert Shiller polled people—real people, not economists—about their perceptions of inflation. What he found was a stark difference between how economists think about the word "inflation" and how folks outside a relatively small band of academics and policymakers define inflation. Consider this question:


And here is how people responded:


Seventy-seven percent of the households in Shiller's poll picked number 2—"Inflation hurts my real buying power"—as their biggest gripe about inflation. This is a cost-of-living description. It isn't the same concept that most economists are thinking about when they consider inflation. Only 12 percent of the economists Shiller polled indicated that inflation hurt real buying power.

I wonder if, in the minds of most people, the Federal Reserve's price-stability mandate is heard as a promise to prevent things from becoming more expensive, and especially the staples of life like, well, food and gasoline. This is not what the central bank is promising to do.

What is the Federal Reserve promising to do? To the best of my knowledge, the first "workable" definition of price stability by the Federal Reserve was Paul Volcker's 1983 description that it was a condition where "decision-making should be able to proceed on the basis that 'real' and 'nominal' values are substantially the same over the planning horizon—and that planning horizons should be suitably long."

Thirty years later, the Fed gave price stability a more explicit definition when it laid down a numerical target. The FOMC describes that target thusly:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee reaffirms its judgment that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve's statutory mandate.

Whether one goes back to the qualitative description of Volcker or the quantitative description in the FOMC's recent statement of principles, the thrust of the price-stability objective is broadly the same. The central bank is intent on managing the persistent, nominal trend in the price level that is determined by monetary policy. It is not intent on managing the short-run, real fluctuations that reflect changes in the cost of living.

Effectively achieving price stability in the sense of the FOMC's declaration requires that the central bank hears what it needs to from the public, and that the public in turn hears what it needs to know from the central bank. And this isn't likely unless the central bank and the public engage in a dialog in a language that both can understand.

Prices are volatile, and the cost of living the public experiences ought to reflect that. But what the central bank can control over time—inflation—is obscured within these fluctuations. What my colleagues and I have attempted to do is to rearrange the price data at our disposal, and so reveal a richer perspective on the inflation experience.

We are trying to take the torture out of the inflation discussion by accurately measuring the things that the Fed needs to worry about and by seeking greater clarity in our communications about what those things mean and where we are headed. Hard conversations indeed, but necessary ones.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department


June 26, 2014 in Business Cycles, Data Releases, Inflation | Permalink




It would seem the non-economists may also be saying that the economists' low inflation is their own stagnant wage.

Sure, they may see prices rising, but they stated what they suffer is the reduction of purchasing power.

Perhaps they would be happy to see prices rising rapidly as long as their own wages outpace.

The 70s may not have been so bad for them.

Posted by: cfaman | June 27, 2014 at 10:01 AM

In addition to the issues discussed in the article, Fed policy makers typically ignore one-time prices changes, particularly those originating on the supply side of the economy -- e.g., those caused by bad weather or a foreign conflict. 

The public can't ignore those price changes, which comprise their daily reality.

Posted by: Thomas Wyrick | July 06, 2014 at 05:57 PM

Tried to contact u in Cleveland late summer 2008. Had a simple? w t f is happening. I saw your picture on frb website next day your picture disappeared I called frb Cleveland some girl may be an economist said you don't work there anymore that was all the information she had. I thought you quit because Greenspan discussed you!!!! Hope all is well Henry

Posted by: Henry Feldman | June 27, 2017 at 02:16 PM


June 24, 2014

Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 2)

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

This is the second of three posts based on that talk. Yesterday's post considered the median CPI and other trimmed-mean measures.

Is it more expensive, or does it just cost more money? Inflation versus the cost of living

Let me make two claims that I believe are, separately, uncontroversial among economists. Jointly, however, I think they create an incongruity for how we think about and measure inflation.

The first claim is that over time, inflation is a monetary phenomenon. It is caused by too much money chasing a limited number of things to buy with that money. As such, the control of inflation is rightfully the responsibility of the institution that has monopoly control over the supply of money—the central bank.

My second claim is that the cost of living is a real concept, and changes in the cost of living will occur even in a world without money. It is a description of how difficult it is to buy a particular level of well-being. Indeed, to a first approximation, changes in the cost of living are beyond the ability of a central bank to control.

For this reason, I think it is entirely appropriate to think about whether the cost of living in New York City is rising faster or slower than in Cleveland, just as it is appropriate to ask whether the cost of living of retirees is rising faster or slower than it is for working-aged people. The folks at the Bureau of Labor Statistics produce statistics that can help us answer these and many other questions related to how expensive it is to buy the happiness embodied in any particular bundle of goods.

But I think it is inappropriate for us to think about inflation, the object of central bank control, as being different in New York than it is in Cleveland, or to think that inflation is somehow different for older citizens than it is for younger citizens. Inflation is common to all things valued by money. Yet changes in the cost of living and inflation are commonly talked about as if they are the same thing. And this creates both a communication and a measurement problem for the Federal Reserve and other central banks around the world.

Here is the essence of the problem as I see it: money is not only our medium of exchange but also our numeraire—our yardstick for measuring value. Embedded in every price change, then, are two forces. The first is real in the sense that the good is changing its price in relation to all the other prices in the market basket. It is the cost adjustment that motivates you to buy more or less of that good. The second force is purely nominal. It is a change in the numeraire caused by an imbalance in the supply and demand of the money being provided by the central bank. I think the concept of "core inflation" is all about trying to measure changes in this numeraire. But to get there, we need to first let go of any "real" notion of our price statistics. Let me explain.

As a cost-of-living approximation, the weights the Bureau of Labor Statistics (BLS) uses to construct the Consumer Price Index (CPI) are based on some broadly representative consumer expenditures. It is easy to understand that since medical care costs are more important to the typical household budget than, say, haircuts, these costs should get a greater weight in the computation of an individual's cost of living. But does inflation somehow affect medical care prices differently than haircuts? I'm open to the possibility that the answer to this question is yes. It seems to me that if monetary policy has predictable, real effects on the economy, then there will be a policy-induced disturbance in relative prices that temporarily alters the cost of living in some way.

But if inflation is a nominal experience that is independent of the cost of living, then the inflation component of medical care is the same as that in haircuts. No good or service, geographic region, or individual experiences inflation any differently than any other. Inflation is a common signal that ultimately runs through all wages and prices.

And when we open up to the idea that inflation is a nominal, not-real concept, we begin to think about the BLS's market basket in a fundamentally different way than what the BLS intends to measure.

This, I think, is the common theme that runs through all measures of "core" inflation. Can the prices the BLS collects be reorganized or reweighted in a way that makes the aggregate price statistic more informative about the inflation that the central bank hopes to control? I think the answer is yes. The CPI excluding food and energy is one very crude way. Food and energy prices are extremely volatile and certainly point to nonmonetary forces as their primary drivers.

In the early 1980s, Otto Eckstein defined core inflation as the trend growth rate of the cost of the factors of production—the cost of capital and wages. I would compare Eckstein's measure to the "inflation expectations" component that most economists (and presumably the FOMC) think "anchor" the inflation trend.

The sticky-price CPI

Brent Meyer and I have taken this idea to the CPI data. One way that prices appear to be different is in their observed "stickiness." That is, some prices tend to change frequently, while others do not. Prices that change only infrequently are likely to be more forward-looking than are those that change all the time. So we can take the CPI market basket and separate it into two groups of prices—prices that tend to be flexible and those that are "sticky" (a separation made possible by the work of Mark Bils and Peter J. Klenow).

Indeed, we find that the items in the CPI market basket that change prices frequently (about 30 percent of the CPI) are very responsive to changes in economic conditions but do not seem to have a very forward-looking character. But the 70 percent of the market basket items that do not change prices very often—these are accounted for in the sticky-price CPI—appear to be largely immune to fluctuations in business conditions and are better predictors of future price behavior. In other words, we think that some "inflation-expectation" component exists to varying degrees within each price. By reweighting the CPI market basket in a way that amplifies the behavior of the most forward-looking prices, the sticky-price CPI gives policymakers a perspective on the inflation experience that the headline CPI can't provide.
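The mechanics of the reweighting can be sketched in a few lines. The components, frequencies, and price changes below are made up; the 4.3-month cutoff mirrors the threshold used in the sticky-price CPI work, but nothing else here should be read as the actual series construction:

```python
# Sketch of a sticky-price reweighting: split CPI components by how often
# their prices change, then compute inflation using only the "sticky"
# (infrequently adjusting) items, with their CPI weights renormalized.
# Components and numbers are illustrative.

components = [
    # (name, CPI weight, avg months between price changes, monthly pct change)
    ("gasoline",         0.05,  0.7, 2.50),   # flexible: changes almost monthly
    ("fresh food",       0.07,  1.5, 0.80),   # flexible
    ("rent",             0.32, 11.0, 0.25),   # sticky
    ("medical services", 0.06, 14.0, 0.30),   # sticky
    ("haircuts",         0.01, 18.0, 0.20),   # sticky
]

def sticky_price_inflation(components, threshold_months=4.3):
    """Weighted average price change of items that adjust infrequently."""
    sticky = [(w, dp) for _, w, dur, dp in components if dur > threshold_months]
    total_w = sum(w for w, _ in sticky)
    return sum(w * dp for w, dp in sticky) / total_w  # renormalized weights

print(round(sticky_price_inflation(components), 3))  # 0.256
```

Note how the volatile, flexible-price items (gasoline, fresh food) simply drop out, so the resulting measure tracks the slow-moving prices the text argues are most forward-looking.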

Here is what monthly changes in the sticky-price CPI look like compared to the all-items CPI and the traditional "core" CPI.

Let me describe another, more radical example of how we might think about reweighting the CPI market basket to measure inflation—a way of thinking that is very different from the expenditure-basket approach the BLS uses to measure the cost of living.

If we assume that inflation is ultimately a monetary event and, moreover, that the signal of this monetary inflation can be found in all prices, then we might use statistical techniques to help us identify that signal from a large collection of price data. The famous early-20th-century economist Irving Fisher described the problem as trying to track a swarm of bees by abstracting from the individual, seemingly chaotic behavior of any particular bee.

Cecchetti and I experimented along these lines to measure a common signal running through the CPI data. The basic idea of our approach was to take the component data that the BLS supplied, make a few simple identifying assumptions, and let the data themselves determine the appropriate weighting structure of the inflation estimate. The signal-extraction method we chose was a dynamic-factor index approach, and while we didn't pursue that work much further, others did, using more sophisticated and less restrictive signal-extraction methods. Perhaps most notable is the work of Ricardo Reis and Mark Watson.

To give you a flavor of the approach, consider the "first principal component" of the CPI price-change data. The first principal component of a data series is a statistical combination of the data that accounts for the largest share of their joint movement (or variance). It's a simple, statistically shared component that runs through all the price data.
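The extraction itself is standard linear algebra. The sketch below simulates price-change data with a known common signal and recovers weights from the first principal component; the data are invented, and the dynamic-factor work described above is more sophisticated than this plain PCA:

```python
# Sketch: pulling a common signal out of component-level price changes via
# the first principal component. Simulated data with a known shared factor.
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 6                       # months x CPI components
common = rng.normal(0.2, 0.3, T)    # the shared "inflation" signal
loadings = np.array([1.0, 0.9, 1.1, 0.2, 1.0, 0.1])   # exposure to it
noise_sd = np.array([0.1, 0.1, 0.1, 0.4, 0.1, 0.4])   # idiosyncratic noise
X = common[:, None] * loadings + rng.normal(0.0, noise_sd, (T, N))

# First principal component: eigenvector of the sample covariance matrix
# with the largest eigenvalue, rescaled to weights that sum to one.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
w = eigvecs[:, -1]
w = w / w.sum()
signal = X @ w                           # the common-component series

# Components that co-move (high loading, low noise) should get large
# weights; the noisy, idiosyncratic ones (indices 3 and 5) should get little.
print(np.round(w, 2))
```

This is the sense in which the weights "bear little similarity" to expenditure weights: they reward co-movement with the common signal, not importance in anyone's budget.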

This next chart shows the first principal component of the CPI price data, in relation to the headline CPI and the core CPI.

Again, this is a very different animal than what the folks at the BLS are trying to measure. In fact, the weights used to produce this particular common signal in the price data bear little similarity to the expenditure weights that make up the market baskets that most people buy. And why should they? The idea here doesn't depend on how important something is to the well-being of any individual, but rather on whether the movement in its price seems to be similar or dissimilar to the movements of all the other prices.

In the table below, I report the weights (or relative importance) of a select group of CPI components and the weights they would get on the basis of their contribution to the first principal component.


While some criticize the CPI because it overweights housing from a cost-of-living perspective, it may be these housing components that ought to be given the greatest consideration when we think about the inflation that the central bank controls. Likewise, according to this approach, restaurant costs, motor vehicle repairs, and even a few food components should be taken pretty seriously in the measurement of a common inflation signal running through the price data.

And what price movements does this approach say we ought to ignore? Well, gasoline prices for one. But movements in the prices of medical care commodities, communications equipment, and tobacco products also appear to move in ways that are largely disconnected from the common thread in prices that runs through the CPI market basket.

But this and other measures of "core" inflation are very much removed from the cost changes that people experience on a monthly basis. Does that cause a communications problem for the Federal Reserve? This will be the subject of my final post.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department


June 24, 2014 in Business Cycles, Data Releases, Inflation | Permalink




Great thoughts, thanks for sharing. Taking the idea of core inflation as the movements in prices that contain information about future inflation, have you ever thought about applying partial least squares (PLS) rather than PCA for dimension reduction, and making a future value of headline inflation the Y variable in the PLS decomposition of the Y'X? Then you would get weightings that reflected the information content of each price series x on future Y, rather than PCA, which simply decomposes the variance within X'X.

Posted by: Michael Hugman | June 25, 2014 at 11:10 AM

This is very interesting. But I wonder, is it really possible to distinguish monetary inflation from cost-of-living inflation? As you say, monetary inflation reflects an imbalance between the supply and demand for money. Where does the demand for money come from? Presumably from the level of real activity. And how do we measure real activity independent of money, if not as a level of well-being?

In fact, the measurement of quantity in terms of well-being is the explicit basis of the hedonic price adjustments that go into a significant fraction of the CPI. So at the least, if you want a pure monetary measure of inflation, shouldn't you strip those adjustments back out?

Along the same lines, you say the inflation controlled by the central bank should be identical in New York and Cleveland. But what if monetary policy produces identical rates of money supply growth in both cities, while different real growth rates mean that money demand is growing faster in one place than the other?

Posted by: JW Mason | June 27, 2014 at 09:42 AM


June 23, 2014

Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 1)

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

In this and the following two posts, I'll be presenting a modestly edited version of that talk. A full version of my prepared remarks will be posted along with the third installment of these posts.

The ideas expressed in these blogs and the related speech are my own, and do not necessarily reflect the views of the Federal Reserve Banks of Atlanta or Cleveland.

Part 1: The median CPI and other trimmed-mean estimators

A useful place to begin this conversation, I think, is with the following chart, which shows the monthly change in the Consumer Price Index (CPI) (through April).

The monthly CPI often swings between a negative reading and a reading in excess of 5 percent. In fact, in only about one-third of the readings over the past 16 years was the monthly, annualized seasonally adjusted CPI within a percentage point of 2 percent, which is the FOMC's longer-term inflation target. (Officially, the FOMC's target is based on the Personal Consumption Expenditures price index, but these and related observations hold for that price index equally well.)
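The "monthly, annualized" figures quoted here come from compounding a one-month change over 12 months. A quick sketch, with illustrative index levels:

```python
# Sketch: annualizing a monthly, seasonally adjusted price change by
# compounding it over 12 months. Index levels below are illustrative.

def annualized_pct_change(index_prev: float, index_curr: float) -> float:
    """Compound a one-month price change to an annual rate, in percent."""
    monthly_gross = index_curr / index_prev
    return (monthly_gross ** 12 - 1.0) * 100.0

# A roughly 0.26 percent monthly rise compounds to about 3.2 percent
# annualized, the size of the April 2014 CPI increase discussed here.
print(round(annualized_pct_change(236.0, 236.62), 1))  # 3.2
```

Compounding is why even small monthly moves translate into headline-grabbing annualized swings.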

How should the central bank think about its price-stability mandate within the context of these large monthly CPI fluctuations? For example, does April's 3.2 percent CPI increase argue that the FOMC ought to do something to beat back the inflationary threat? I don't speak for the FOMC, but I doubt it. More likely, there were some unusual price movements within the CPI's market basket that can explain why the April CPI increase isn't likely to persist. But the presumption that one can distinguish the price movements we should pay attention to from those that we should ignore is a risky business.

The Economist retells a conversation with Stephen Roach, who in the 1970s worked for the Federal Reserve under Chairman Arthur Burns. Roach remembers that when oil prices surged around 1973, Burns asked Federal Reserve Board economists to strip those prices out of the CPI "to get a less distorted measure. When food prices then rose sharply, they stripped those out too—followed by used cars, children's toys, jewellery, housing and so on, until around half of the CPI basket was excluded because it was supposedly 'distorted'" by forces outside the control of the central bank. The story goes on to say that, at least in part because of these actions, the Fed failed to spot the breadth of the inflationary threat of the 1970s.

I have a similar story. I remember a morning in 1991 at a meeting of the Federal Reserve Bank of Cleveland's board of directors. I was welcomed to the lectern with, "Now it's time to see what Mike is going to throw out of the CPI this month." It was an uncomfortable moment for me that had a lasting influence. It was my motivation for constructing the Cleveland Fed's median CPI.

I am a reasonably skilled reader of a monthly CPI release. And since I approached each monthly report with a pretty clear idea of what the actual rate of inflation was, it was always pretty easy for me to look across the items in the CPI market basket and identify any offending—or "distorted"—price change. Stripping these items from the price statistic revealed the truth—and confirmed that I was right all along about the actual rate of inflation.

Let me show you what I mean by way of the April CPI report. The next chart shows the annualized percentage change for each component in the CPI for that month. These are shown on the horizontal axis. The vertical axis shows the weight given to each of these price changes in the computation of the overall CPI. Taken as a whole, the CPI jumped 3.2 percent in April. But out there on the far right tail of this distribution are gasoline prices. They rose about 32 percent for the month. If you subtract out gasoline from the April CPI report, you get an increase of 2.1 percent. That's reasonably close to price stability, so we can stop there—mission accomplished.
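The arithmetic of "subtracting out" an item is worth making explicit: the all-items change is a weighted sum of component changes, so removing one item means dropping its contribution and renormalizing the remaining weight. The 3.7 percent gasoline weight below is an approximation chosen for illustration:

```python
# Sketch of excluding one item from a weighted-average price change.
# The gasoline weight is an illustrative approximation, not the official
# CPI relative importance.

def excluding_item(overall_pct, item_pct, item_weight):
    """Aggregate change with one component removed, weights renormalized."""
    return (overall_pct - item_weight * item_pct) / (1.0 - item_weight)

# April 2014: overall CPI +3.2 percent annualized, gasoline +32 percent
print(round(excluding_item(3.2, 32.0, 0.037), 1))  # 2.1, as cited above
```

Note that excluding an item whose price change matches the average leaves the aggregate unchanged; only outliers move the ex-item measure.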

But here's the thing: there is no such thing as a "nondistorted" price. All prices are being influenced by market forces and, once influenced, are also influencing the prices of all the other goods in the market basket.

What else is out there on the tails of the CPI price-change distribution? Lots of stuff. About 17 percent of things people buy actually declined in price in April while prices for about 13 percent of the market basket increased at rates above 5 percent.

But it's not just the tails of this distribution that are worth thinking about. Near the center of this price-change distribution is a very high proportion of things people buy. For example, price changes within the fairly narrow range of between 1.5 percent and 2.5 percent accounted for about 26 percent of the overall CPI market basket in the April report.

The April CPI report is hardly unusual. A typical monthly report shows a very wide range of price changes commingled with an unusually large share of price changes sitting very near the center of the distribution. Statisticians call this a distribution with a high level of "excess kurtosis."

The following chart shows what an average monthly CPI price report looks like. The point of this chart is to convince you that the unusual distribution of price changes we saw in the April CPI report is standard fare. A very high proportion of price changes within the CPI market basket tends to remain close to the center of the distribution, and those that don't tend to be spread over a very wide range, resulting in what appear to be very elongated tails.
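The "excess kurtosis" mentioned above can be computed directly from the weighted price-change data. A minimal sketch: a normal distribution scores zero on this measure, while a peaked center with long tails, as in the CPI, scores well above zero.

```python
# Sketch: weighted excess kurtosis of a price-change distribution.
# Excess kurtosis = (fourth central moment / variance^2) - 3, so a
# normal distribution scores 0 and fat-tailed distributions score > 0.
def weighted_excess_kurtosis(changes, weights):
    total = sum(weights)
    mean = sum(c * w for c, w in zip(changes, weights)) / total
    var = sum(w * (c - mean) ** 2 for c, w in zip(changes, weights)) / total
    fourth = sum(w * (c - mean) ** 4 for c, w in zip(changes, weights)) / total
    return fourth / var ** 2 - 3.0
```

Applied to made-up data with one extreme outlier, the statistic comes out positive; for a symmetric two-point distribution, which has no tails at all, it comes out negative.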

And this characterization of price changes is not at all special to the CPI. It characterizes every major price aggregate I have ever examined, including the retail price data for Brazil, Argentina, Mexico, Colombia, South Africa, Israel, the United Kingdom, Sweden, Canada, New Zealand, Germany, Japan, and Australia.

Why do price-change distributions have peaked centers and very elongated tails? At one time, Steve Cecchetti and I speculated that the cost of unplanned price changes (so-called menu costs) discourages all but the most significant price adjustments. These menu costs could create a distribution of observed price changes in which a large number of planned price adjustments occupy the center of the distribution, commingled with extreme, unplanned price adjustments that stretch out along its tails.

But absent a clear economic rationale for this unusual distribution, it presents a measurement problem and suggests an immediate remedy. The problem is that the long tails cause the CPI (and other weighted averages of prices) to fluctuate pretty widely from month to month, even though the data are, in a statistical sense, tethered to that large proportion of price changes lying in the center of the distribution. The remedy is to limit the influence of the tails.

So my belated response to the Cleveland board of directors was the computation of the weighted median CPI (which I first produced with Chris Pike). This statistic uses only the price change at the weighted midpoint of the monthly distribution, the component for which half of the CPI market basket shows a larger price change and half shows a smaller one, as the representative aggregate price change. The median CPI is immune to the obvious analyst bias that I had been guilty of, while greatly reducing the volatility in the monthly CPI report in a way that I thought gave the Federal Reserve Bank of Cleveland a clearer reading of the central tendency of price changes.
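Mechanically, the weighted median is simple: sort the components by price change, accumulate basket weight, and report the change at which the running total first reaches half of the overall weight. A sketch with hypothetical data:

```python
# Sketch of a weighted median price change. Components are sorted by
# price change; the answer is the change where cumulative basket
# weight first crosses half of the total weight.
def weighted_median(changes, weights):
    pairs = sorted(zip(changes, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for chg, w in pairs:
        cum += w
        if cum >= half:
            return chg
    return pairs[-1][0]
```

Because only the midpoint matters, pushing the largest observation from 32 to 1,000 leaves the statistic untouched: exactly the outlier immunity described above.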

Cecchetti and I pushed the idea to a range of trimmed-mean estimators, for which the median is simply an extreme case. Trimmed-mean estimators trim some proportion of the tails from this price-change distribution and reaggregate the interior remainder. Others extended this idea to asymmetric trims for skewed price-change distributions, as Scott Roger did for New Zealand, and to other price statistics, like the Federal Reserve Bank of Dallas's trimmed-mean PCE inflation rate.

How much one should trim from the tails isn't entirely obvious. We settled on the 16 percent trimmed mean for the CPI (that is, trimming the highest and lowest 8 percent from the tails of the CPI's price-change distribution) because this is the proportion that produced the smallest monthly volatility in the statistic while preserving the same trend as the all-items CPI.
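A sketch of the computation, with the trim handled by basket weight rather than by count of components (a large component can straddle the trim point, so only the portion of its weight inside the band is kept); the data here are hypothetical:

```python
# Sketch of a weighted trimmed mean: drop `trim` share of basket weight
# from each tail of the sorted price-change distribution (8 percent per
# tail for the 16 percent trimmed mean) and reaggregate the remainder.
def trimmed_mean(changes, weights, trim=0.08):
    pairs = sorted(zip(changes, weights))
    total = sum(w for _, w in pairs)
    lo, hi = trim * total, (1.0 - trim) * total
    cum = kept_sum = kept_weight = 0.0
    for chg, w in pairs:
        # Keep only the portion of this component's weight inside [lo, hi].
        inside = max(0.0, min(cum + w, hi) - max(cum, lo))
        kept_sum += chg * inside
        kept_weight += inside
        cum += w
    return kept_sum / kept_weight
```

Setting `trim=0` reproduces the ordinary weighted mean, and as the trim approaches 50 percent per tail the statistic converges to the weighted median, which is why the median is the extreme case of this family.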

The following chart shows the monthly pattern of the median CPI and the 16 percent trimmed-mean CPI relative to the all-items CPI. Both measures reduce the monthly volatility of the aggregate price measure by a lot—and even more so than by simply subtracting from the index the often-offending food and energy items.

But while the median CPI and the trimmed-mean estimators are often referred to as "core" inflation measures (and I am guilty of this myself), these measures are very different from the CPI excluding food and energy.

In fact, I would not characterize these trimmed-mean measures as "exclusionary" statistics at all. Unlike the CPI excluding food and energy, the median CPI and the assortment of trimmed-mean estimators do not fundamentally alter the underlying weighting structure of the CPI from month to month. As long as the CPI price-change distribution is symmetrical, these estimators are designed to track along the same path as that laid out by the headline CPI. It's just that these measures are constructed so that they follow that path with much less volatility (the monthly variance in the median CPI is about 95 percent smaller than that of the all-items CPI and about 25 percent smaller than that of the CPI less food and energy).

I think of the trimmed-mean estimators and the median CPI as being more akin to seasonal adjustment than they are to the concept of core inflation. (Indeed, early on, Cecchetti and I showed that the median CPI and associated trimmed-mean estimates also did a good job of purging the data of its seasonal nature.) The median CPI and the trimmed-mean estimators are noise-reduced statistics where the underlying signal being identified is the CPI itself, not some alternative aggregation of the price data.

This is not true of the CPI excluding food and energy, nor necessarily of other so-called measures of "core" inflation. Core inflation measures alter the weights of the price statistic so that they can no longer pretend to be approximations of the cost of living. They are different constructs altogether.

The idea of "core" inflation is one of the topics of tomorrow's post.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department

June 23, 2014 in Data Releases, Economic conditions, Inflation


Listed below are links to blogs that reference Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 1):


Are you aware that if you look at the NSA core CPI, over half of the annual increase normally occurs in the first quarter?

Normally, if the first-quarter change in the NSA core CPI is smaller than in the prior year, the annual increase will be smaller than in the prior year. The same holds if it is larger.

I would be happy to send you an Excel file with the data arranged to demonstrate this.

Posted by: Spencer | June 24, 2014 at 11:11 AM
