The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.
January 08, 2020
Is There a Taylor Rule for All Seasons?
In September 2016 we introduced the Taylor Rule Utility, a tool that allows a user to plot the federal funds rate against the prescription from an equation called the Taylor rule.
Broadly speaking, the Taylor rule translates readings of inflation (πt) and resource slack (gapt)—often measured by comparing real gross domestic product (GDP) or the unemployment rate to some measure of its "potential" or "natural" level—into a recommended setting for the fed funds rate. The default settings of the rule as of September 2016 (incorporated in the blue dashed line in the chart below) were, apart from some minor differences in variable choices, consistent with the settings used in John Taylor’s landmark 1993 paper that introduced the Taylor rule.
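The mechanics described above can be sketched in a few lines of Python. This is an illustrative implementation of the generic rule, not the Taylor Rule Utility's actual code: the function name and argument defaults are my own, with the coefficients and the 2 percent r* and inflation target taken from the Taylor (1993) settings discussed in this post, and the doubled gap coefficient corresponding to the Taylor (1999) variant discussed below.

```python
def taylor_rule(inflation, resource_gap, r_star=2.0, pi_target=2.0,
                gap_weight=0.5):
    """Prescribed fed funds rate (percent) from a generic Taylor rule.

    Taylor (1993) sets r_star = 2.0 and gap_weight = 0.5; the Taylor
    (1999) variant doubles the gap coefficient to 1.0. All inputs and
    the return value are in percent (or percentage points for the gap).
    """
    return (r_star + inflation
            + 0.5 * (inflation - pi_target)
            + gap_weight * resource_gap)

# Illustrative readings: inflation at 2 percent, economy at potential
print(taylor_rule(2.0, 0.0))               # Taylor 1993 -> 4.0
print(taylor_rule(2.0, 0.0, r_star=0.5))   # lower r* estimate -> 2.5

# Taylor 1999 puts twice the weight on a +1 point resource gap
print(taylor_rule(2.0, 1.0, gap_weight=1.0))  # -> 5.0
```

The example makes the post's central point concrete: with inflation at target and no slack, the prescription is simply r* plus inflation, so lowering r* from 2 percent to a Laubach-Williams-style estimate moves the prescription down one-for-one.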
As the chart shows, for most of this decade, the funds rate prescription from this original Taylor rule consistently exceeded the actual rate by 1 to 3 percentage points, and as Wall Street Journal columnist Michael Derby noted last August, the prescription was well above the actual funds rate in the third quarter of 2019. Much of this difference can be explained by the setting of the natural (real) interest rate, or r*, in the above equation. Taylor set r* at 2 percent in his original rule based on average real GDP growth since 1984 and, according to estimates from the Laubach-Williams (LW) model, 2 percent continued to be a reasonable, if slightly low, estimate of r* up until the 2007–09 recession. Since 2009, estimates of r* from the LW model have generally hovered between 0 and 1 percentage point. Since July 2017, the semiannual Monetary Policy Report from the Board of Governors to Congress has included a section on monetary policy rules. And in these sections, r* has been estimated with the consensus long-run projection of a short-term interest rate from Blue Chip Economic Indicators. Since 2015, these Blue Chip interest rate projections have also been consistent with estimates of r* between 0 and 1 percent.
Setting r* to the LW model estimate (instead of 2 percent) in the Taylor rule results in a prescription corresponding to the solid blue line in the above chart. We can see this line is much closer to the actual fed funds rate for most of this decade. Nevertheless, it’s not clear that rules using LW-model estimates of r* and Congressional Budget Office (CBO) estimates of potential GDP or the natural unemployment rate are the most relevant for monetary policymakers. For example, in the December 2019 Summary of Economic Projections (SEP), the central tendency of Federal Open Market Committee (FOMC) participants’ longer-run projections of the unemployment rate was 3.9 to 4.3 percent. By contrast, the CBO’s latest estimate of the natural unemployment rate in the fourth quarter of 2019 rounds up to 4.6 percent, while its latest estimate of the natural rate in 2025 rounds up to 4.5 percent. The orange line in the chart above uses the FOMC/SEP longer-run projections of the fed funds rate and the unemployment rate.
Both the LW/CBO and FOMC/SEP variants of the Taylor 1993 rule prescribed an earlier "liftoff" of the fed funds rate than actually occurred. Former Fed chairs Ben Bernanke and Janet Yellen have sometimes referred to an alternative rule known as Taylor 1999, which puts twice as much weight on the resource gap as the Taylor 1993 rule. The FOMC/SEP Taylor 1999 rule, shown as the green line in the above chart, is identical to the orange line apart from this doubling of the resource gap coefficient. This rule prescribed a later liftoff date than the other rules depicted in the chart, and because of the low unemployment rate, its current funds rate prescription is now above that of the FOMC/SEP Taylor 1993 rule.
By now, it’s probably clear that the answer to the question I posed in this blog post’s title is no, there is not a Taylor rule for all seasons—or at least not one that would satisfy everybody. For this reason, we have modified the interactive chart in our Taylor Rule Utility to show prescriptions from up to three versions of the Taylor rule. The default settings of these three rules in the interactive chart coincide exactly with the solid blue, orange, and green lines in the above figure. But you can modify all of the rules to generate, for example, the dashed blue Taylor 1993 line shown above. We hope that users find this a useful enhancement to the tool.
June 24, 2019
Mapping the Financial Frontier at the Financial Markets Conference
The Atlanta Fed recently hosted its 24th annual Financial Markets Conference, whose theme was Mapping the Financial Frontier: What Does the Next Decade Hold? The conference addressed a variety of issues pertinent to the future of the financial system. Among the sessions touching on macroeconomics was a keynote speech on corporate debt by Federal Reserve Board chair Jerome Powell and another on revitalizing America by Massachusetts Institute of Technology (MIT) professor Simon Johnson. The conference also included a panel discussion of the Fed's plans for implementing monetary policy in the future. This macroblog post reviews these macroeconomic discussions. A companion Notes from the Vault post reviews conference sessions on blockchain technology, data privacy, and postcrisis developments in the markets for mortgage-backed securities.
Chair Powell's thoughts on corporate debt levels
Chair Powell's keynote speech focused on the risks posed by increases in corporate debt levels. In his speech, titled "Business Debt and Our Dynamic Financial System" (which you can watch or read), Powell began by observing that business debt levels have increased by a variety of measures, including the ratios of debt to gross domestic product and of debt to the book value of corporate assets. These higher debt ratios alone don't currently pose a problem because corporate profits are high and interest rates are low. Powell noted some reasons for concern, however, including the reduced average quality of investment-grade bonds, with more corporate debt concentrated in the lowest investment-grade rating—a phenomenon known as the "triple-B cliff."
Powell noted several differences between the recent increase in corporate debt and the increase in household debt prior to the 2007–09 crisis that offset these risks. These differences include a more moderate rate of increase in corporate debt, the lack of a feedback loop from debt levels to asset prices, reduced leverage in the banking system, and less liquidity risk.
Powell concluded his remarks by saying that although business debt does pose a risk of amplifying a future downturn, it does not appear to pose "notable risks to financial stability." Finally, he noted that the Fed is working toward a more thorough understanding of the risks.
Simon Johnson on jumpstarting America
Simon Johnson started his keynote speech by discussing Amazon's search for a second headquarters city. The company received proposals from 238 cities across the country (and Canada). However, in the end, it selected two large metropolitan areas—New York and Washington, DC—that were already among the leaders in creating new tech jobs. Although many places around the country want growth in good jobs, he said the innovation economy is "drawn disproportionately to these few places."
Johnson's remedy for this disproportionate clustering is for the federal government to make a deliberate effort to encourage research and development in various technical areas at a number of research universities around the country. This proposal is based on his book with fellow MIT economist Jonathan Gruber. They argue that the proposal echoes "exactly what the U.S. did in the '40s, '50s, and '60s," when federal support helped the United States develop new technology for use in World War II and the Cold War.
Johnson proposed that the funding for new technical projects be allocated through a nationwide competition that intentionally seeks to create new tech hubs. In making his case, Johnson observed that the view that "all the talent is just in six places is fundamentally wrong." Johnson said that he and his coauthor found 102 cities in 36 states that have a substantial proportion of college graduates and relatively low housing prices. Moreover, Johnson observed that existing tech centers' cost of living has become very high, and those cities have substantial political limits on their ability to sustain new population growth. If some of these 102 potential hubs received the funding to start research and provide capital to business, Johnson argued, overall growth in the United States could increase and be more evenly distributed.
Discussing the implementation of monetary policy
The backdrop for the session on monetary policy implementation was postcrisis developments in the Fed's approach to implementing monetary policy. As the Fed's emergency lending programs started to recede after the crisis, it started making large-scale investments in agency mortgage-backed securities and U.S. Treasuries. This program, widely (though somewhat misleadingly) called "quantitative easing," or QE, pumped additional liquidity into securities markets and played a role in lowering longer-term interest rates. As economic conditions improved, the Fed first started raising short-term rates and then adopted a plan to shrink its balance sheet starting in 2018. However, earlier this year, the Fed announced plans to stop shrinking the balance sheet in September if the economy performs as expected.
Julia Coronado, president of MacroPolicy Perspectives, led the discussion of the Fed's plans, and a large fraction of that discussion addressed its plans for the size of the balance sheet. Kevin Warsh, former Federal Reserve governor and currently a visiting fellow at Stanford University's Hoover Institution, provided some background information on the original rationale for QE, when many financial markets were still rather illiquid. However, he argued that those times were extraordinary and that "extraordinary tools are meant for extraordinary circumstances." He further expressed the concern that using QE at other times and for other reasons, such as in response to regulatory policy, would increase the risk of political involvement in monetary policy.
During the discussion, Chicago Fed president Charles Evans argued that QE is likely to remain a necessary part of the Fed's toolkit. He observed that slowing labor force growth, moderate productivity growth, and low inflation are likely to keep equilibrium short-term interest rates low. As a result, the Fed's ability to lower interest rates in a future recession is likely to remain constrained, meaning that balance sheet expansion will remain a necessary tool for economic stimulus.
Ethan Harris, head of global economics research at Bank of America Merrill Lynch, highlighted the potential stress the next downturn would place on the Fed. Harris observed that "other central banks have virtually no ammunition" to fight the next downturn, a reference to the negative policy rates and relatively larger balance sheets of some other major central banks. This dynamic prompted his question, "How is the Fed, on its own, going to fight the next crisis?"
The conference made clear the importance of the links between financial markets and the macroeconomy, and this blog post focused on just three of them. I encourage you to delve into the rest of the conference materials to see these and other important discussions.
January 16, 2019
I didn't coin the title of this blog post. It was the label on a chart of the Federal Reserve's balance sheet that appeared in an issue of The Wall Street Journal last week. I've led with this phrase because it does seem to capture some of the sentiment around what has become the elephant in the monetary policy room: Is the rundown in the size of the Fed's balance sheet causing an unanticipated, and unwarranted, tightening of monetary policy conditions? I think the answer to that question is "no." Let me explain why.
In June 2017, the Federal Open Market Committee (FOMC) determined that it was appropriate to begin the process of reducing the size of the Fed's balance sheet, which had more than quadrupled as a result of efforts to combat the financial crisis and support the subsequent recovery from a very deep recession.
As I noted in a speech last November, I see the Committee's strategy for shrinking the balance sheet as having two essential elements.
- First, the normalization process is designed to be gradual. It was phased in over the course of about a year and a half and is now subject to monthly caps so the run-down is not too rapid.
- Second, the normalization process is designed to be as predictable as possible. The schedule of security retirements was announced in advance so that uncertainty about the pace of normalization can be minimized. (In other words, "quantitative tightening" is decidedly not on the QT.) As a result, the normalization process also reduces complexity. Balance-sheet reduction has moved into the background so that ongoing policy adjustments can focus solely on the traditional interest-rate channel.
In his recent remarks at the annual meeting of the American Economic Association, Chairman Jerome Powell was very clear that the fairly mechanical balance-sheet strategy adopted by the FOMC thus far should not be interpreted as inflexibility in the conduct of monetary policy or an unwillingness to recognize that balance-sheet reduction is in fact monetary policy tightening.
I will speak for myself. Balance-sheet policy is an element of the monetary policy mix. The decision to adopt a relatively deterministic approach to balance-sheet reduction is not a decision to ignore the possibility that it has led or might lead to a somewhat more restrictive stance of monetary policy. It is a decision to make whatever adjustments are necessary, to the greatest extent possible, through the Fed's primary interest-rate tools.
I maintain that there is still wisdom in this approach. The effects of our interest-rate tools are much more familiar to both policymakers and markets than balance-sheet tools are. That, to my mind, makes them the superior instrument for reaching and maintaining our dual goals of stable inflation and maximum employment. It is my belief that reducing the number of moving pieces makes monetary policy more transparent and predictable, which enhances the Committee's capacity for a smooth transition toward those goals.
It should now be clear that nothing is written in stone. Whether the FOMC uses active interest-rate policy with passive balance-sheet policy or uses both instruments actively, policy decisions will ultimately be driven by the facts on the ground as best Committee members can judge, and by assessments of risks that surround those judgments.
In my own judgment, it is far from clear that the ongoing reduction in the balance sheet is having an outsized impact on the stance of monetary policy. I think it is widely accepted that one of the ways balance-sheet policies work is by affecting the term premia associated with holding longer-term securities. (There are many good discussions about balance-sheet mechanisms, including this one by Edison Yu, which can be found in the first quarter 2016 edition of the Philadelphia Fed's Economic Insights, or this article by Jane Ihrig, Elizabeth Klee, Canlin Li, Min Wei, and Joe Kachovec in the March 2018 issue of the International Journal of Central Banking.)
Lots of things can push term premia up and down. But one of the factors is the presumed willingness of the central bank to purchase long-term securities in scale—or not. The idea that running down the balance sheet tightens monetary policy is that, in so doing, the FOMC is removing a crucial measure of support to the bond market. This makes longer-term securities riskier by transferring more duration risk back to the market, which raises term premia and, all else equal, pushes rates higher.
Although estimating term premia is as much art as science, I don't think the evidence supports the argument that these premia have been materially rising as a result of our normalization process. The New York Fed publishes one well-known real-time estimate of the term premia associated with 10-year Treasury securities. But isolating and quantifying the effect of balance-sheet changes on term premia is challenging. It is possible that a number of factors, such as the continued high demand for U.S. Treasuries by financial institutions and a low inflation risk premium, might have dampened the independent effect of balance sheet run-off. But if the term premia channel is a critical piece of what makes balance-sheet policy work, I'm hard pressed to see much evidence of financial tightening via rising term premia in the data so far.
Lest anyone think I am overly influenced by one particular theory, I will emphasize that I am not taking anything for granted. In addition to my monitoring of developments on Main Street, I will be watching financial conditions and term premia as I assess the outlook for the economy. My view is that a patient approach to monetary policy adjustments in the coming year is fully warranted in light of the uncertainties about the state of the economy, about what level of policy rates is consistent with a neutral stance, and about the overall impact of balance-sheet normalization. This patience is one of the characteristics of what I mean by data dependence.
October 26, 2018
On Maximizing Employment, a Case for Caution
Over the past few months, I have been asked one question regularly: Why is the Fed removing monetary policy stimulus when there is little sign that inflation has run amok and threatens to undermine economic growth? This is a good question, and it speaks to a philosophy of how to maintain the stability of both economic performance and prices, which I view as important for the effective implementation of monetary policy.
In assessing the degree to which the Fed is achieving the congressional mandate of price stability, the Federal Open Market Committee (FOMC) identified 2 percent inflation in consumption prices as a benchmark—see here for more details. Based on currently available data, it seems that inflation is running close to this benchmark.
The Fed's other mandate from Congress is to foster maximum employment. A key metric for performance relative to that mandate is the official unemployment rate. So, when some people ask why the FOMC is reducing monetary policy stimulus in the absence of clear inflationary pressure, what they really might be thinking is, "Why doesn't the Fed just conduct monetary policy to help the unemployment rate go as low as physically possible? Isn't this by definition the representation of maximum employment?"
While this is indeed one definition of full employment, I think this is a somewhat short-sighted perspective that doesn't ultimately serve the economy and American workers well. One important reason for being skeptical of this view is our nation's past experience with "high-pressure" economic periods. High-pressure periods are typically defined as periods in which the unemployment rate falls below the so-called natural rate—using an estimate of the natural rate, such as the one produced by the Congressional Budget Office (CBO).
As the CBO defines it, the natural rate is "the unemployment rate that arises from all sources other than fluctuations in demand associated with business cycles." These "other sources" include frictions like the time it takes people to find a job or frictions that result from a mismatch between the set of skills workers currently possess and the set of skills employers want to find.
When the actual unemployment rate declines substantially below the natural rate—highlighted as the red areas in the following chart—the economy has moved into a "high-pressure period."
For the purposes of this discussion, the important thing about high-pressure economies is that, virtually without exception, they are followed by a recession. Why? Well, as I described in a recent speech:
"One view is that it is because monetary policy tends to take on a much more 'muscular' stance—some might say too muscular—at the end of these high-pressure periods to combat rising nominal pressures.
"The other alternative is that the economy destabilizes when it pushes beyond its natural potential. These high-pressure periods lead to a buildup of competitive excesses, misdirected investment, and an inefficient allocation of societal resources. A recession naturally results and is needed to undo all the inefficiencies that have built up during the high-pressure period.
"Yet, some people suggest that deliberately running these high-pressure periods can improve outcomes for workers in communities who have been less attached to the labor market, such as minorities, those with lower incomes, and those living in rural communities. These workers have long had higher unemployment rates than other workers, and they are often the last to benefit from periods of extended economic growth.
"For example, the gap between the unemployment rates of minority and white workers narrows as recoveries endure. So, the argument goes, allowing the economy to run further and longer into these red areas on the chart provides a net benefit to these under-attached communities.
"But the key question isn't whether the high-pressure economy brings new people from disadvantaged groups into the labor market. Rather, the right question is whether these benefits are durable in the face of the recession that appears to inevitably follow.
"This question was explored in a research paper by Atlanta Fed economist Julie Hotchkiss and her research colleague Robert Moore. Unfortunately, they found that while workers in these aforementioned communities tend to experience greater benefits from these high-pressure periods, the pain and dislocation associated with the aftermath of the subsequent recession is just as significant, if not more so.
"Importantly, this research tells me we ought to guard against letting the economy slip too far into these high-pressure periods that ultimately impose heavy costs on many people across the economy. Facilitating a prolonged period of low—and sustainable—unemployment rates is a far more beneficial approach."
In short, I conclude that the pain inflicted from shifting from a high-pressure to a low-pressure economy is too great, and this tells me that it is important for the Fed to beware the potential for the economy overheating.
Formulating monetary policy would be a lot easier, of course, if we were certain about the actual natural rate of unemployment. But we are not. The CBO has an estimate—currently 4.5 percent. The FOMC's participants and other forecasters also produce estimates of where they think the unemployment rate will settle over the longer run.
For my part, I estimate that the natural rate is closer to 4 percent, and given the current absence of accelerating inflationary pressures, we can't completely dismiss the possibility that the natural rate is even lower. Nonetheless, with the unemployment rate currently at 3.7 percent, it seems likely that we're at least at our full employment mandate.
So, what is this policymaker to do? Back to my speech:
"My thinking will be informed by the evolution of the incoming data and from what I'm able to glean from my business contacts. And while I wrestle with that choice, one thing seems clear: there is little reason to keep our foot on the gas pedal."
August 23, 2018
What Does the Current Slope of the Yield Curve Tell Us?
As I make the rounds throughout the Sixth District, one of the most common questions I get these days is how Federal Open Market Committee (FOMC) participants interpret the flattening of the yield curve. I, of course, do not speak for the FOMC, but as the minutes from recent meetings indicate, the Committee has indeed spent some time discussing various views on this topic. In this blog post, I'll share some of my thoughts on the framework I use for interpreting the yield curve and what I'll be watching. Of course, these are my views alone and do not reflect the views of any other Federal Reserve official.
Many observers see a downward-sloping, or "inverted," yield curve as a reliable predictor of a recession. Chart 1 shows the yield curve's slope—specifically, the difference between the interest rates paid on 10-year and 2-year Treasury securities—is currently around 20 basis points. This is the lowest spread since the last recession.
The case for worrying about yield-curve flattening is apparent in the chart. The shaded bars represent recessionary periods. Both of the last two recessions were preceded by a flat (and, for a time, inverted) 10-year/2-year spread.
As we all know, however, correlation does not imply causality. This is a particularly important point to keep in mind when discussing the yield curve. As a set of market-determined interest rates, the yield curve not only reflects market participants' views about the evolution of the economy but also their views about the FOMC's likely reaction to that evolution and uncertainty around these and other relevant factors. In other words, the yield curve represents not one signal, but several. The big question is, can we pull these signals apart to help appropriately inform the calibration of policy?
We can begin to make sense of this question by noting that Treasury yields of any given maturity can be thought of as the sum of two fundamental components:
- An expected policy rate path over that maturity: the market's best guess about the FOMC's rate path over time and in response to the evolution of the economy.
- A term premium: an adjustment (relative to the path of the policy rate) that reflects additional compensation investors receive for bearing risk related to holding longer-term bonds.
Among other things, this premium may be related to two factors: (1) uncertainty about how the economy will evolve over that maturity and how the FOMC might respond to events as they unfold and (2) the influence of supply and demand factors for U.S. Treasuries in a global market.
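The two-component framework above can be made concrete with a toy calculation. The function below is a deliberately simplified sketch (the real mapping from expected short rates to yields involves compounding and model-based estimation); all numbers are hypothetical.

```python
def decompose_yield(expected_short_rates, term_premium):
    """A stylized long-maturity yield: the average expected short-term
    policy rate over the bond's life plus a term premium (in percent).
    """
    expected_path_component = sum(expected_short_rates) / len(expected_short_rates)
    return expected_path_component + term_premium

# Hypothetical 2-year note: the market expects the policy rate to
# average 2.0% in year one and 2.5% in year two, with a 0.1-point
# term premium -> a yield of about 2.35%.
print(decompose_yield([2.0, 2.5], 0.1))
```

The sketch illustrates why a flat curve is ambiguous: the same low long-maturity yield can come from a flat expected policy path, from a compressed term premium, or from some mix of the two—which is exactly the identification problem the post describes.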
Let's apply this framework to the current yield curve. As several of my colleagues (including Fed governor Lael Brainard) have noted, the term premium is currently quite low. All else equal, this would result in lower long-term rates and a flatter yield curve. The term premium bears watching, but it is unclear that movements in the premium reflect particular concerns about the course of the economy.
I tend to focus on the other component: the expected path of policy. When we ask whether a flattening yield curve is a cause for concern, what we are really asking is: does the market expect an economic slowdown that will require the FOMC to reverse course and lower rates in the near future?
The eurodollar futures market shows us one measure of the market's expectation for the policy rate path. These derivative contracts are quoted in terms of a three-month rate that closely follows the FOMC's policy rate, which makes them well-suited for this kind of analysis. (Some technical details regarding this market can be found in a 2016 issue of the Atlanta Fed's "Notes from the Vault.")
Chart 2 illustrates the current estimate of the market's expected policy rate path. Read simply, the market appears to be forecasting continuing policy rate increases through 2020, and there is no evidence of a market forecast that the FOMC will need to reverse course in the medium term. However, the level of the policy rate is lower than the median of the FOMC's June Summary of Economic Projections (SEP) for 2019 and 2020.
Once we get past 2020, the market's expected policy path flattens. I read this as evidence that market participants overall expect a very gradual pace of tightening as the most likely outcome over the next two years. Interestingly, the market appears to expect a slower pace of tightening than the pace that at least some members of the FOMC currently view as "appropriate" as represented in their SEP submissions.
For this measure, I find the short-term perspective most informative. As one looks further into the future, the range of possible outcomes widens, as many of the factors that influence the economy can evolve and interact in hard-to-predict ways. Thus, the precision of any signal the market is providing about policy expectations—if indeed there is any signal at all—is likely to be quite low.
With this information in mind, I do not interpret the yield curve as indicating that the market believes the evolution of the economy will cause the FOMC to lower rates in the foreseeable future. This interpretation is consistent with my own economic forecast, gleaned from macroeconomic data and a robust set of conversations with businesses both large and small. My modal outlook is for expansion to continue at an above-trend pace for the next several quarters, and I see the risks to that projection as balanced. Yes, there are downside risks, chief among them the effects of (and uncertainty about) trade policy. But those risks are countered by the potential for recent fiscal stimulus to have a much more transformative impact on the economy than I've marked into my baseline outlook.
I believe the yield curve gives us important and useful information about market participants' forecasts. But it is only one signal among many that we use for the complex task of forecasting growth in the U.S. economy. As the economy evolves, I will be assessing the response of the yield curve to incoming data and policy decisions along the lines I've laid out here, incorporating market signals along with a constellation of other information to achieve the FOMC's dual objectives of price stability and maximum employment.
April 19, 2017
The Fed’s Inflation Goal: What Does the Public Know?
The Federal Open Market Committee (FOMC) has had an explicit inflation target of 2 percent since January 25, 2012. In its statement announcing the target, the FOMC said, "Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee's ability to promote maximum employment in the face of significant economic disturbances."
If communicating this goal to the public enhances the effectiveness of monetary policy, one natural question is whether the public is aware of this 2 percent target. We've posed this question a few times to our Business Inflation Expectations Panel, which is a set of roughly 450 private, nonfarm firms in the Southeast. These firms range in size from large corporations to owner-operators.
Last week, we asked them again. Specifically, the question is:
What annual rate of inflation do you think the Federal Reserve is aiming for over the long run?
Unsurprisingly, to us at least—and maybe to you if you're a regular macroblog reader—the typical respondent answered 2 percent (the same answer our panel gave us in 2015 and back in 2011). At a minimum, southeastern firms appear to have gotten and retained the message.
So, why the blog post? Careful Fed watchers noticed the inclusion of a modifier to describe the 2 percent objective in the March 2017 FOMC statement (emphasis added): "The Committee will carefully monitor actual and expected inflation developments relative to its symmetric inflation goal." And especially eagle-eyed Fed watchers will remember that the Committee amended its statement of longer-run goals in January 2016, clarifying that its inflation objective is indeed symmetric.
The idea behind a symmetric inflation target is that the central bank views both overshooting and falling short of the 2 percent target as equally bad. As then Minneapolis Fed President Kocherlakota stated in 2014, "Without symmetry, inflation might spend considerably more time below 2 percent than above 2 percent. Inflation persistently below the 2 percent target could create doubts in households and businesses about whether the FOMC is truly aiming for 2 percent inflation, or some lower number."
Do such doubts actually exist? In a follow-up to our question about the numerical target, in the latest survey we asked our panel whether they thought the Fed was more, less, or equally likely to tolerate inflation below or above its target. The following chart depicts the responses.
One in five respondents believes the Federal Reserve is more likely to accept inflation above its target, while nearly 40 percent believe it is more likely to accept inflation below its target. Twenty-five percent of firms believe the Federal Reserve is equally likely to accept inflation above or below its target. The remainder of respondents were unsure. This pattern was similar across firm sizes and industries.
In other words, more firms see the inflation target as a threshold (or ceiling) that the Fed is averse to crossing than see it as a symmetric target.
Lately, various Committee members (here, here, and in Chair Yellen's latest press conference at the 42-minute mark) have discussed the symmetry of the Committee's inflation target. Our evidence suggests that the message may not have quite sunk in yet.
September 08, 2016
Introducing the Atlanta Fed's Taylor Rule Utility
Simplicity isn't always a virtue, but when it comes to complex decision-making processes—for example, a central bank setting a policy rate—having simple benchmarks is often helpful. As students and observers of monetary policy well know, the common currency in the central banking world is the so-called "Taylor rule."
The Taylor rule is an equation introduced by John Taylor in a seminal 1993 paper that prescribes a value for the federal funds rate—the interest rate targeted by the Federal Open Market Committee (FOMC)—based on readings of inflation and the output gap. The output gap measures the percentage point difference between real gross domestic product (GDP) and an estimate of its trend or potential.
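The arithmetic of the original rule is simple enough to sketch in a few lines of Python (this is an illustration of the formula, not the utility's actual code; Taylor's 1993 paper set both the natural real rate and the inflation target at 2 percent):

```python
def taylor_1993(inflation, output_gap, r_star=2.0, target=2.0):
    """Taylor (1993) prescription for the fed funds rate, in percent.

    inflation and output_gap are in percentage points; the rule puts
    equal weights of 0.5 on the inflation gap and the output gap.
    """
    return r_star + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# With 4 percent inflation and a closed output gap, the rule
# prescribes a 7 percent funds rate.
print(taylor_1993(inflation=4.0, output_gap=0.0))  # 7.0
```

When inflation is at target and the gap is closed, the prescription collapses to r* plus the inflation target, which is why the choice of r* matters so much for the rule's level.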
Since 1993, academics and policymakers have introduced and used many alternative versions of the rule. The alternative forms of the rule can supply policy prescriptions that differ significantly from Taylor's original rule, as the following chart illustrates.
The green line shows the policy prescription from a rule identical to the one in Taylor's paper, apart from some minor changes in the inflation and output gap measures. The red line uses an alternative and commonly used rule that gives the output gap twice the weight used for the Taylor (1993) rule, derived from a 1999 paper by John Taylor. The red line also replaces the 2 percent value used in Taylor's 1993 paper with an estimate of the natural real interest rate, called r*, from a paper by Thomas Laubach, the Federal Reserve Board's director of monetary affairs, and John Williams, president of the San Francisco Fed. Federal Reserve Chair Janet Yellen also considered this alternative estimate of r* in a 2015 speech.
Both rules use real-time data. As early as 2012, the Taylor (1993) rule prescribed a funds rate materially above the 0 to 0.25 percent target range the FOMC maintained from December 2008 to December 2015. The alternative rule did not prescribe a positive fed funds rate at any point between the end of the 2007–09 recession and this quarter. The third-quarter prescriptions incorporate nowcasts constructed as described here. Neither the nowcasts nor the Taylor rule prescriptions themselves necessarily reflect the outlook or views of the Federal Reserve Bank of Atlanta or its president.
Additional variables that get plugged into this simple policy rule can influence the rate prescription. To help you sort through the most common variations, we at the Atlanta Fed have created a Taylor Rule Utility. Our Taylor Rule Utility gives you a number of choices for the inflation measure, inflation target, the natural real interest rate, and the resource gap. Besides the Congressional Budget Office–based output gap, alternative resource gap choices include those based on a U-6 labor underutilization gap and the ZPOP ratio. The latter ratio, which Atlanta Fed President Dennis Lockhart mentioned in a November 2015 speech while addressing the Taylor rule, gauges underemployment by measuring the share of the civilian population working their desired number of hours.
Many of the indicator choices use real-time data. The utility also allows you to set your own weight for the resource gap and to decide whether the prescription should put any weight on the previous quarter's federal funds rate. The default choices of the Taylor Rule Utility coincide with the Taylor (1993) rule shown in the above chart. Other organizations have their own versions of the Taylor Rule Utility (one of the nicer ones is available on the Cleveland Fed's Simple Monetary Policy Rules web page). More information about the Cleveland Fed's page is available on our Frequently Asked Questions page.
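The utility's main dials can be captured in a small generalization of the Taylor formula (a stylized sketch; the parameter names here are mine, not the utility's, and the defaults follow Taylor's 1993 settings):

```python
def taylor_rule(inflation, gap, r_star=2.0, target=2.0,
                gap_weight=0.5, smoothing=0.0, prev_rate=None):
    """Generalized Taylor-rule prescription, in percent.

    gap_weight=0.5 reproduces Taylor (1993); gap_weight=1.0 gives the
    Taylor (1999) variant that doubles the weight on resource slack.
    smoothing > 0 puts weight on the previous quarter's funds rate
    (policy inertia).
    """
    prescription = (r_star + inflation
                    + 0.5 * (inflation - target) + gap_weight * gap)
    if smoothing and prev_rate is not None:
        prescription = smoothing * prev_rate + (1 - smoothing) * prescription
    return prescription

# With inflation at target and 2 points of slack, the 1999 variant
# prescribes a rate a full percentage point below the 1993 rule.
print(taylor_rule(2.0, -2.0, gap_weight=0.5))  # 3.0
print(taylor_rule(2.0, -2.0, gap_weight=1.0))  # 2.0
```

Swapping in a lower r*, as the Laubach-Williams estimates suggest for the post-2009 period, shifts every prescription down one-for-one, which is the point made in the January 2020 post above.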
Although the Taylor rule and its alternative versions are only simple benchmarks, they can be useful tools for evaluating the importance of particular indicators. For example, we see that the difference in the prescriptions of the two rules plotted above has narrowed in recent years as slack has diminished. Even if the output gap were completely closed, however, the current prescriptions of the rules would differ by nearly 2 percentage points because of the use of different measures of r*. We hope you find the Taylor Rule Utility a useful tool to provide insight into issues like these. We plan on adding further enhancements to the utility in the near future and welcome any comments or suggestions for improvements.
September 21, 2015
What Do U.S. Businesses Know that New Zealand Businesses Don't? A Lot (Apparently).
A recent paper presented at the Brookings Institution, picked up by the Financial Times and the Washington Post, suggests that when it comes to communicating their inflation objective, central banks have a lot of work to do. This conclusion is based primarily on two pieces of evidence.
The first piece is that when businesses in New Zealand are asked about their expectations for changes in "overall prices"—which presumably corresponds to their inflation expectation—the responses, on average, appear to be much too high relative to observed inflation trends, and they vary widely from business to business. According to this survey, the average firm in New Zealand expects 4 to 5 percent inflation on a year-ahead basis and 3.5 percent inflation over the next five to 10 years. And those are just the averages: apparently, about one in four firms in New Zealand thinks inflation in the year ahead will be more than 5 percent, and about one in six believes inflation will top 5 percent during the next five to 10 years. Certainly, these aren't the responses one would expect from businesses operating in an economy (like New Zealand) where the central bank has been targeting 2 percent inflation for the past 13 years, over which time inflation has averaged only 2.2 percent (and a mere 0.9 percent during the past four years).
But count us skeptical of this evidence. In this paper from last year, we challenge the assumption that asking firms (or households, for that matter) about expected changes in "overall prices" corresponds to an inflation prediction.
The second piece of evidence regarding the ineffectiveness of inflation targeting is more direct—the authors of this paper actually asked New Zealand businesses a few questions about the central bank and its policies, including this one:
What annual percentage rate of change in overall prices do you think the Reserve Bank of New Zealand is trying to achieve? (Answer: ______%)
The distribution of answers by New Zealand firms is shown in the chart below. According to the survey, the median New Zealand firm appears to think the central bank's inflation target is 5 percent. Indeed, more than a third of firms in New Zealand reported that they think the central bank is targeting an inflation rate greater than 5 percent. Only about 12 percent of the firms were able to correctly identify their central bank's actual inflation target of 2 percent (strictly speaking, the Reserve Bank of New Zealand targets a range of 1 to 3 percent, centered on 2 percent).
If this weren't embarrassing enough for central bankers, the study also reports that New Zealand households (like U.S. households) don't seem to know who the head of the central bank is. In fact, the authors show that there are more online searches for "puppies" than for information about macroeconomic variables.
OK, to be honest, we don't find that last result very surprising. Puppies are adorable. Central bankers? Not so much. But we were very surprised to see just how high and wide-ranging businesses in New Zealand perceived their central bank's inflation target to be. We're surprised because that bit of information doesn't fit with our understanding of U.S. firms.
In December 2011, the month before the Fed officially announced an explicit numerical target for inflation, we wanted to know whether firms had already formed an opinion about the Fed's inflation objective. So we asked a panel of Southeast businesses what annual rate of inflation they thought the Federal Reserve was aiming for.
What we learned was that 16 percent of the 151 firms who responded to our survey had no opinion regarding what rate of inflation the Federal Reserve was aiming for. But of the firms that had an opinion, 58 percent identified a 2 percent inflation target.
But perhaps this isn't a fair comparison to the recent survey of New Zealand businesses. In our 2011 survey, firms had only six options to choose from (including "no opinion"). It could be that our choice of options biased the responses away from high inflation values. So last week, we convened another panel of firms and asked the question in the same open-ended format given to New Zealanders:
What annual rate of inflation do you think the Federal Reserve is aiming for over the long run? (Answer: ______%)
The only material distinction between their question and ours is that we substituted the word "inflation" for the phrase "changes in overall prices." (For this special survey, we polled a national sample of firms that had never before answered one of our survey questions.) The chart below shows what we found relative to the results recently reported for New Zealand firms.
Our survey results look very similar to our results of four years ago. About one in five of the 102 firms that answered our survey was unsure about the Fed's inflation target. But almost 53 percent of the firms that responded answered 2 percent. (On average, U.S. firms judged the central bank's inflation target to be 2.2 percent, just a shade higher than our actual target.)
Furthermore, the distribution of responses to our survey was very tightly centered on 2 percent. The highest estimate of the Fed's inflation target (from only one firm) was 5 percent. So again, our results don't at all resemble what has been reported for the firms down under.
Why is there a glaring difference between what the survey of New Zealand firms found and what we're finding? Well, as noted earlier, we've got our suspicions, but we'll keep studying the issue. And in the meantime, have you seen this?
Editor's note: Learn more about inflation and the consumer price index in an ECONversations webcast featuring Atlanta Fed economist Brent Meyer.
September 04, 2015
5-Year Deflation Probability Moves Off Zero
Since 2010, our Bank has regularly posted 5-year deflation probabilities derived from prices of Treasury Inflation-Protected Securities (TIPS) on our Deflation Probabilities web page. Each deflation probability, which measures the likelihood of a decline in the Consumer Price Index over a fixed five-year window, is estimated by comparing the price of a recently issued 5-year TIPS with a 10-year TIPS issued about five years earlier. Because the 5-year TIPS has more "deflation protection" than the 10-year TIPS, the implied deflation probability rises when the 5-year TIPS becomes more valuable relative to the 10-year TIPS. (See this macroblog post for a more detailed explanation, or this appendix with the mathematical details.)
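The pricing intuition can be illustrated with a deliberately stylized two-state example (a toy calculation with invented numbers, not the model described in the linked appendix). Because the newly issued 5-year TIPS has a deflation floor that is roughly at the money, while the seasoned 10-year TIPS has accrued enough inflation compensation that its floor is far out of the money, the newer issue's price premium is approximately the probability of deflation times the extra floor payoff it would receive in the deflation state:

```python
def implied_deflation_probability(price_premium, floor_payoff):
    """Stylized two-state, risk-neutral calculation: if the newer TIPS
    pays an extra floor_payoff (per $100 face) only in the deflation
    state, a price premium of price_premium on that TIPS implies a
    deflation probability of premium / payoff.
    """
    return price_premium / floor_payoff

# A hypothetical $0.10 premium on a floor worth an extra $5 in the
# deflation state implies a 2 percent deflation probability.
print(implied_deflation_probability(0.10, 5.0))
```

The actual estimates on the Deflation Probabilities web page come from the more careful procedure in the appendix, which accounts for coupon and maturity differences between the two issues.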
From early September 2013 to the first week of August 2015, the 5-year deflation probability estimated with the most recently issued 5-year TIPS was identically 0, as the chart shows.
Of course, we should not interpret this long period of zero probability of deflation too literally. It could easily be the case that the "true" deflation probability was slightly above zero but that confounding factors—such as differences in the coupon rates, maturity dates, or liquidity of the TIPS issues—prevented the model from detecting it.
Since August 11, however, the deflation probability has had its own "liftoff" of sorts, fluctuating between 0.0 and 1.3 percent over the 16-day period ending August 26 before rising steadily to 4.1 percent on September 2. Of course, this rise off zero could be temporary, as it proved to be in the summer of 2013.
How seriously should we take this recent liftoff? We can look at options prices on Consumer Price Index inflation (inflation caps and floors) to get a full probability distribution for future inflation; see this published article by economists Yuriy Kitsul and Jonathan Wright or a nontechnical summary in this New York Times article. An alternative is simply to ask professional forecasters for their subjective probabilities of inflation falling within various ranges like "1.0 to 1.4 percent," "1.5 to 1.9 percent," and so forth. The Philly Fed's Survey of Professional Forecasters does just this, with the chart below showing probabilities of low inflation for the Consumer Price Index excluding food and energy (core CPI) from each of the August surveys since 2007.
Although the price index, and the horizon for the inflation outcome, differs from the TIPS-based deflation probability, we see that the shape of the curves is broadly similar to the one shown in the first chart. In the most recent survey, the probability that next year's core CPI inflation rate will be low was small and not particularly elevated relative to recent history. However, the deadline date for this survey was August 11, before liftoff in either the TIPS-based deflation probability or the recent volatility in global financial markets. So stay tuned.
November 04, 2014
Data Dependence and Liftoff in the Federal Funds Rate
When asked "at which upcoming meeting do you think the FOMC [Federal Open Market Committee] will FIRST HIKE its target for the federal funds rate," 46 percent of the October Blue Chip Financial Forecasts panelists predicted that "liftoff" would occur at the June 2015 meeting, and 83 percent chose liftoff at one of the four scheduled meetings in the second and third quarters of next year.
Of course, this result does not imply that there is an 83 percent chance of liftoff occurring in the middle two quarters of next year. Respondents to the New York Fed's most recent Primary Dealer Survey put the probability of liftoff in the middle two quarters of 2015 at only 51 percent. This considerably less certain assessment is consistent with the "data-dependence principle" that Chair Yellen mentioned at her September 17 press conference. The idea of data dependence is captured in this excerpt from the statement following the October 28–29 FOMC meeting:
[I]f incoming information indicates faster progress toward the Committee's employment and inflation objectives than the Committee now expects, then increases in the target range for the federal funds rate are likely to occur sooner than currently anticipated. Conversely, if progress proves slower than expected, then increases in the target range are likely to occur later than currently anticipated.
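The gap between the two survey readings is worth spelling out: the Blue Chip figure counts panelists' single most likely meeting, while the Primary Dealer figure is a probability. A stylized example (with invented numbers) shows how every panelist could pick the mid-2015 meetings as the modal window while assigning that window only about half the probability mass:

```python
# One hypothetical panelist's subjective probabilities over liftoff windows.
probs = {
    "first half of 2015": 0.20,
    "middle two quarters of 2015": 0.51,
    "2015Q4 or later": 0.29,
}

# The panelist's single most likely window is mid-2015 ...
modal = max(probs, key=probs.get)
print(modal)  # middle two quarters of 2015

# ... so a panel of identical forecasters would show 100 percent of
# respondents choosing mid-2015 as the likeliest window, even though
# each attaches only a 51 percent probability to it.
print(probs[modal])  # 0.51
```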
If the timing of liftoff is indeed data dependent, a natural extension is to gauge the likely "liftoff reaction function." In the current zero lower bound (ZLB) environment, researchers at the University of North Carolina and the St. Louis Fed have analyzed monetary policy using shadow fed funds rates, estimated by Wu and Xia (2014) and Leo Krippner and shown in figure 1 below.
Unlike the standard fed funds rate, a shadow rate can be negative at the ZLB. The researchers found that the shadow rates, particularly Krippner's, act as fairly good proxies for monetary policy in the post-2008 ZLB period. Krippner also produces an expected time to liftoff, estimated from his model, shown in figure 1 above. His model's liftoff of December 2015 is six months after the most likely liftoff month identified by the aforementioned Blue Chip survey.
I included Krippner's shadow rate (spliced with the standard fed funds rate prior to December 2008) in a monthly Bayesian vector autoregression alongside the six other variables shown in figure 2 below.
The model assumes that the Fed cannot see contemporaneous values of the variables when setting the spliced policy rate—that is, the fed funds/shadow rate. This assumption is plausible given the approximately one-month lag in economic release dates. The baseline path assumes (and mechanically generates) liftoff in June 2015, with outcomes for the other variables, shown by the black lines, that roughly coincide with professional forecasts.
The alternative scenarios span the range of eight possible outcomes for low inflation/baseline inflation/high inflation and low growth/baseline growth/high growth in the figures above. For example, in figure 2 above, the high growth/low inflation scenario coincides with the green lines in the top three charts and the red lines in the bottom three charts. Forecasts for the spliced policy rate are conditional on the various growth/inflation scenarios, and "liftoff" in each scenario occurs when the spliced policy rate rises above the midpoint of the current target range for the funds rate (12.5 basis points).
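Mechanically, identifying liftoff in a scenario just means scanning the projected path for the first month in which the spliced policy rate exceeds the 12.5-basis-point midpoint. A minimal sketch with hypothetical projected values:

```python
def liftoff_month(months, rates, midpoint=0.125):
    """Return the first month whose projected spliced policy rate
    (in percent) exceeds the target-range midpoint, or None if the
    rate never lifts off within the projection horizon.
    """
    for month, rate in zip(months, rates):
        if rate > midpoint:
            return month
    return None

# Hypothetical scenario path for the spliced policy rate (percent).
months = ["2015-01", "2015-02", "2015-03", "2015-04"]
rates = [0.10, 0.12, 0.15, 0.30]
print(liftoff_month(months, rates))  # 2015-03
```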
The outcomes are shown in figure 3 below. At one extreme—high growth/high inflation—liftoff occurs in March 2015. At the other—low growth/low inflation—liftoff occurs beyond December 2015.
One should not interpret these projections too literally; the model uses a much narrower set of variables than the FOMC considers. Nonetheless, these scenarios illustrate that the model's forecasted liftoffs in the spliced policy rate are indeed consistent with the data-dependence principle.
By Pat Higgins, senior economist in the Atlanta Fed's research department