The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.
August 23, 2018
What Does the Current Slope of the Yield Curve Tell Us?
As I make the rounds throughout the Sixth District, one of the most common questions I get these days is how Federal Open Market Committee (FOMC) participants interpret the flattening of the yield curve. I, of course, do not speak for the FOMC, but as the minutes from recent meetings indicate, the Committee has indeed spent some time discussing various views on this topic. In this blog post, I'll share some of my thoughts on the framework I use for interpreting the yield curve and what I'll be watching. Of course, these are my views alone and do not reflect the views of any other Federal Reserve official.
Many observers see a downward-sloping, or "inverted," yield curve as a reliable predictor of a recession. Chart 1 shows the yield curve's slope—specifically, the difference between the interest rates paid on 10-year and 2-year Treasury securities—is currently around 20 basis points. This is the lowest spread since the last recession.
The case for worrying about yield-curve flattening is apparent in the chart. The shaded bars represent recessionary periods. Both of the last two recessions were preceded by a flat (and, for a time, inverted) 10-year/2-year spread.
As we all know, however, correlation does not imply causality. This is a particularly important point to keep in mind when discussing the yield curve. As a set of market-determined interest rates, the yield curve not only reflects market participants' views about the evolution of the economy but also their views about the FOMC's likely reaction to that evolution and uncertainty around these and other relevant factors. In other words, the yield curve represents not one signal, but several. The big question is, can we pull these signals apart to help appropriately inform the calibration of policy?
We can begin to make sense of this question by noting that Treasury yields of any given maturity can be thought of as the sum of two fundamental components:
- An expected policy rate path over that maturity: the market's best guess about the FOMC's rate path over time and in response to the evolution of the economy.
- A term premium: an adjustment (relative to the path of the policy rate) that reflects additional compensation investors receive for bearing risk related to holding longer-term bonds.
Among other things, this premium may be related to two factors: (1) uncertainty about how the economy will evolve over that maturity and how the FOMC might respond to events as they unfold and (2) the influence of supply and demand factors for U.S. Treasuries in a global market.
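The decomposition above can be sketched numerically. Everything in this snippet is a made-up illustration for exposition, not an estimate of actual market expectations or premia:

```python
# Hypothetical decomposition of a Treasury yield into the two components
# described above: an expected average policy rate over the bond's life
# plus a term premium. All numbers are illustrative, not estimates.

def treasury_yield(expected_policy_path, term_premium):
    """Yield as the average expected short rate over the maturity plus a premium."""
    avg_expected_rate = sum(expected_policy_path) / len(expected_policy_path)
    return avg_expected_rate + term_premium

# Suppose the market expects the policy rate to average these levels (percent)
# in each of the next 10 years and demands a 0.2 percent 10-year term premium.
path_10y = [2.0, 2.5, 2.75, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
y10 = treasury_yield(path_10y, term_premium=0.2)

# The 2-year yield reflects only the first two years of the expected path and,
# typically, a smaller premium.
y2 = treasury_yield(path_10y[:2], term_premium=0.05)

print(round(y10, 2), round(y2, 2), round(y10 - y2, 2))
```

In this toy example the 10-year/2-year spread is positive even though the expected policy path flattens, because the longer bond carries a larger term premium; a shrinking premium alone can flatten the curve with no change in rate expectations.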
Let's apply this framework to the current yield curve. As several of my colleagues (including Fed governor Lael Brainard) have noted, the term premium is currently quite low. All else equal, this would result in lower long-term rates and a flatter yield curve. The term premium bears watching, but it is unclear that movements in the premium reflect particular concerns about the course of the economy.
I tend to focus on the other component: the expected path of policy. When we ask whether a flattening yield curve is a cause for concern, what we are really asking is: does the market expect an economic slowdown that will require the FOMC to reverse course and lower rates in the near future?
The eurodollar futures market shows us one measure of the market's expectation for the policy rate path. These derivative contracts are quoted in terms of a three-month rate that closely follows the FOMC's policy rate, which makes them well-suited for this kind of analysis. (Some technical details regarding this market can be found in a 2016 issue of the Atlanta Fed's "Notes from the Vault.")
Chart 2 illustrates the current estimate of the market's expected policy rate path. Read simply, the market appears to be forecasting continuing policy rate increases through 2020, and there is no evidence of a market forecast that the FOMC will need to reverse course in the medium term. However, the level of the policy rate is lower than the median of the FOMC's June Summary of Economic Projections (SEP) for 2019 and 2020.
Once we get past 2020, the market's expected policy path flattens. I read this as evidence that market participants overall expect a very gradual pace of tightening as the most likely outcome over the next two years. Interestingly, the market appears to expect a slower pace of tightening than the pace that at least some members of the FOMC currently view as "appropriate" as represented in their SEP submissions.
For this measure, I find the short-term perspective most informative. As one looks further into the future, the range of possible outcomes widens, as many of the factors that influence the economy can evolve and interact in a wide variety of ways. Thus, the precision of any signal the market is providing about policy expectations—if indeed there is any signal at all—is likely to be quite low.
With this information in mind, I do not interpret the yield curve as indicating that the market believes the evolution of the economy will cause the FOMC to lower rates in the foreseeable future. This interpretation is consistent with my own economic forecast, gleaned from macroeconomic data and a robust set of conversations with businesses both large and small. My modal outlook is for expansion to continue at an above-trend pace for the next several quarters, and I see the risks to that projection as balanced. Yes, there are downside risks, chief among them the effects of (and uncertainty about) trade policy. But those risks are countered by the potential for recent fiscal stimulus to have a much more transformative impact on the economy than I've marked into my baseline outlook.
I believe the yield curve gives us important and useful information about market participants' forecasts. But it is only one signal among many that we use for the complex task of forecasting growth in the U.S. economy. As the economy evolves, I will be assessing the response of the yield curve to incoming data and policy decisions along the lines I've laid out here, incorporating market signals along with a constellation of other information to achieve the FOMC's dual objectives of price stability and maximum employment.
April 02, 2018
Thoughts on a Long-Run Monetary Policy Framework, Part 4: Flexible Price-Level Targeting in the Big Picture
In the second post of this series, I enumerated several alternative monetary policy frameworks. Each is motivated by a recognition that the Federal Open Market Committee (FOMC) is likely to confront future scenarios where the effective lower bound on policy rates comes into play. Given such a possibility, it is important to consider the robustness of the framework.
My previous macroblog posts have focused on one of these frameworks: price-level targeting of a particular sort. As I hinted in the part 3 post, I view the specific framework I have in mind as a complement to, and not a substitute for, many of the other proposals that are likely to be considered. In this final post on the topic, I want to expand on that thought, considering in turn the options listed in part 2.
- Raising the FOMC's longer-run inflation target
The framework I described in part 3 was constructed to be consistent with the FOMC's current long-run objective of 2 percent inflation. But nothing in the structure of the plan I discussed would bind the Committee to the 2 percent objective. Obviously, a price-level target line can be constructed for any path that policymakers choose. The key is to have such a target and coherently manage monetary policy so that it achieves that target. The slope of the price-level path—that is, the underlying long-run inflation rate—is an entirely separate issue.
- Maintaining the 2 percent longer-run inflation target and policy framework more or less as is, relying on unconventional tools when needed
As noted, the flexible price-level targeting example I discussed in part 3 was constructed with a long-run 2 percent inflation rate as the key benchmark. In that regard, it is clearly consistent with the Fed's current inflation goal.
Further, a central question in the current framework is how to interpret a goal of 2 percent inflation in the longer run. One interpretation is that the central bank aims to deliver an inflation rate that averages 2 percent over some period of time. Another interpretation is that the central bank aims to deliver an inflation rate that tends toward 2 percent, letting bygones be bygones in the event that realized inflation rates deviate from 2 percent.
The bounded price-level targets I have presented do not force a particular answer to the question I raise, and both views can be supported within the framework. Hence, the framework is consistent with whichever view the FOMC might adopt. The only caveat is that deviations from 2 percent cannot be so large and persistent that they push the price level outside the target bounds.

As to the problem of the federal funds rate falling to a level that makes further cuts infeasible, nothing in the notion of a price-level target rules out (or demands) any particular policy tool. If anything, bounded price-level targets could expand the existing toolkit. They certainly do not constrain it.
- Targeting nominal gross domestic product (GDP) growth
Targeting nominal GDP growth, which is the sum of real GDP growth and the inflation rate, represents a deviation from the price-level targeting I have described. In this framework, the longer-run rate of inflation depends on the longer-run rate of real GDP growth.
To see how this works, consider the period from 2003 to 2013. In 2003, the Congressional Budget Office projected an average annual potential GDP growth rate of 2.9 percent over the next 10 years. Had there been a nominal GDP growth target of 5 percent at this time, the implicit annualized inflation target would have been just over 2 percent. However, current CBO estimates indicate that actual potential GDP growth over this period averaged just 1.5 percent, which would suggest an inflation target of 3.5 percent. As data came in and policymakers saw this lower level of growth, they would have responded by shifting upward the implicit inflation target.
For advocates of using a nominal GDP target, shifting inflation targets is a key feature and not a bug, as it allows policy to adjust in real time to unforeseen cyclical and structural developments. What nominal GDP targeting doesn't satisfy is the principle of bounded nominal uncertainty. Eventually, price-level bounds that are set with an assumed potential real growth path will be violated if shifts in potential growth are sufficiently large. The appeal of nominal GDP targeting depends on how one weighs the benefits of inflation-target flexibility against the costs of price-level uncertainty inherent in that framework.
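The arithmetic in the 2003–2013 example boils down to one subtraction: under a nominal GDP growth target, the implicit inflation target is the nominal target minus expected real growth. A minimal sketch using the figures from the text:

```python
# Implicit inflation target under a nominal GDP growth target:
#   inflation target = nominal GDP growth target - expected real GDP growth.
# Figures follow the 2003-2013 example in the text; the 5 percent target
# is the hypothetical one discussed there.

NOMINAL_GDP_TARGET = 5.0  # percent per year, hypothetical

projected_growth_2003 = 2.9   # CBO's 2003 projection of potential growth
estimated_growth_actual = 1.5  # CBO's current estimate for that period

implied_inflation_then = NOMINAL_GDP_TARGET - projected_growth_2003
implied_inflation_revised = NOMINAL_GDP_TARGET - estimated_growth_actual

# The implicit inflation target drifts up as growth estimates are revised down.
print(round(implied_inflation_then, 1), round(implied_inflation_revised, 1))
```

The drift from roughly 2.1 percent to 3.5 percent is exactly the flexibility advocates prize and the price-level uncertainty skeptics worry about.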
- Adopting flexible inflation targets that are adjusted based on economic conditions
Recently, my colleague Eric Rosengren, president of the Boston Fed, offered a proposal (here and here) that has some of the flavor of nominal GDP targeting but differs in important respects. Like nominal GDP targeting, President Rosengren's framework would adjust the target inflation rate given structural shifts in the economy. However, if I understand his idea correctly, the FOMC would deliberate specifically on the desired rate of inflation and adjust the target within a predetermined range.
Relying on the target's appropriate range opens the possibility of compatibility between President Rosengren's framework and the one I presented. Policymakers could use price-level targeting concepts in developing a range of policy options given the state of the economy. The breadth of the range of options would depend on the bounds the FOMC felt represented an acceptable degree of price-level uncertainty.
Summing all of this up, then—to me, the important characteristic of a sound monetary policy framework is that it provides a credible nominal anchor while maintaining flexibility to address changing circumstances. I think some form of flexible price-level targeting can be a part of such a framework. I look forward to a robust and constructive debate.
March 28, 2018
Thoughts on a Long-Run Monetary Policy Framework, Part 3: An Example of Flexible Price-Level Targeting
I want to start my discussion in this post with two points I made in the previous two macroblog posts (here and here). First, I think a commitment to delivering a relatively predictable price-level path is a desirable feature of a well-constructed monetary framework. Price stability is in my view achieved if people can have confidence that the purchasing power of the dollars they hold today will fall within a certain range at any date in the future.
My second point was that, as a matter of fact, the Federal Open Market Committee (FOMC) delivered on this definition of price stability during the years 1995–2012. (The FOMC formally adopted its 2 percent long-run inflation target in 2012.)
If you are reading this blog, you're almost certainly aware that since 2012, the actual personal consumption expenditures (PCE) inflation rate has persistently fallen short of the 2 percent goal. That, of course, means that the price level has fallen increasingly short of a reference 2 percent path, as shown in chart 1 below.
Is this deviation from the price-level path a problem? The practical answer to that question will depend on how my proposed definition of price stability is implemented.
By way of example, let's suppose that the FOMC commits to conducting monetary policy in such a way that the price level will always fall within plus-or-minus 5 percent of the long-run target path (which itself we define as the path implied by a constant 2 percent inflation rate). This policy—and how it relates to the actual path of PCE price inflation—is illustrated in chart 2.
So would inflation falling short of the 2 percent longer-run goal be a problem if the Fed was operating within the framework depicted in chart 2? In a sense, the answer is no. The current price level would be within the bounds of a hypothetical commitment made in 1995. If the central bank could perpetually deliver 2 percent annual inflation, that promise would remain intact, as shown in chart 3.
Of course, chart 3 depicts a forward path for prices whose margin for error is quite slim. Continued inflation below 2 percent would, in short order, push the price level below the lower bound, likely requiring a relatively accommodative monetary policy stance—that is, if policymakers sought to satisfy a commitment to this framework's definition of price stability.
Central bankers in risk management mode might opt for policies designed to deliberately move the price level toward the 2 percent average inflation midpoint in cases where the price level moves too close for the Committee's comfort to one of the bounds (as, perhaps, in chart 3). It bears noting that in such cases there are a wide range of options available to policymakers with respect to the timing and pace of that adjustment.
This scenario illustrates the flexibility of the price-level targeting framework I'm describing. I think it's important to think in terms of gradual adjustments that don't risk whipsawing the economy or force the central bank to be overly precise in its short-run influence on inflation and economic activity. A key feature of such a policy framework includes considerable short- and medium-run flexibility in inflation outcomes.
But the other key feature is that the framework limits that same flexibility—that is, it satisfies the principle of bounded nominal uncertainty. Suppose you and another person agree that you will receive a $1 payment in 10 years in exchange for a service provided today. If the inflation rate over this 10-year period is exactly 2 percent per year, then the real value of that dollar in goods and services would be 82 cents.
In my example (the one with a plus-or-minus 5 percent bound on the price level), monetary policymakers have essentially committed that the agreed-upon payment would not result in real purchasing power of less than 78 cents (and the payer could be confident that the real purchasing power relinquished would not be more than 86 cents).
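The 82/78/86-cent figures above follow directly from compounding 2 percent inflation over 10 years and applying the plus-or-minus 5 percent band to the resulting price level:

```python
# Real purchasing power of $1 received 10 years from now under a price-level
# target growing at 2 percent per year with plus-or-minus 5 percent bounds,
# reproducing the example in the text.

YEARS = 10
TARGET_INFLATION = 0.02
BAND = 0.05  # allowed deviation around the target price-level path

target_price_level = (1 + TARGET_INFLATION) ** YEARS  # about 1.219

# Real value of the dollar if the price level lands exactly on the path,
# at the upper bound (most inflation), and at the lower bound (least).
on_path = 1 / target_price_level
worst_case = 1 / (target_price_level * (1 + BAND))
best_case = 1 / (target_price_level * (1 - BAND))

print(round(on_path, 2), round(worst_case, 2), round(best_case, 2))
```

Running this gives 0.82, 0.78, and 0.86: the 82 cents expected on the path, and the 78-to-86-cent range the bounds guarantee.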
The crux of my argument is that a "good" monetary policy framework limits the degree of uncertainty associated with contracts involving transfers of dollars over time. In limiting uncertainty, monetary policy contributes to economic efficiency.
The 5 percent bound I chose for my illustration is obviously arbitrary. The magnitude of the acceptable deviations from the price-level path would be a policy decision. I'm not sure we know a whole lot about what range of deviations from an expected price path contributes most consistently to economic efficiency. A benefit of the framework I am describing is that it would focus research, discussion, and debate squarely on that question.
This series of posts is going on hiatus for a few days. Tomorrow, the Atlanta Fed is going to release its 2017 Annual Report, and I certainly don't want to steal its thunder. And Friday, of course, will begin the Easter weekend for many people.
But I want to conclude this post by emphasizing that the framework I am describing is a refinement of, not a competitor to, many of the framework proposals I discussed in Monday's post. This is an important point and one that I will turn to in the final installment of this series, to be published next Monday.
March 27, 2018
Thoughts on a Long-Run Monetary Policy Framework, Part 2: The Principle of Bounded Nominal Uncertainty
In yesterday's macroblog post, I discussed one of the central monetary policy questions of the day: Is the possibility of hitting the lower bound on policy rates likely to be an issue for the Fed going forward, do we care, and—if we do—what can we do about it?
The answers to the first two questions are, in my opinion, yes and yes. That's the easy part. The last question—what can we do about it?—is the hard part. In the end, this is a question about the framework for conducting monetary policy. The menu of options includes:
- Raising the Federal Open Market Committee's (FOMC) longer-run inflation target;
- Maintaining the current policy framework, including the 2 percent longer-run inflation target, relying on unconventional tools when needed;
- Targeting the growth rate of nominal gross domestic product;
- Adopting an inflation range with flexible inflation targets that are adjusted based on the state of the economy (a relatively recent entry to the list suggested by Boston Fed president Eric Rosengren);
- Price-level targeting.
Chicago Fed president Charles Evans, San Francisco Fed president John Williams, and former Federal Reserve chairman Ben Bernanke, among others, have advocated for some version of the last item on this list of options. I am going to add myself to the list of people sympathetic to a policy framework that has a form of price-level targeting at its center.
I'll explain my sympathies by discussing principles that are central to my thinking.
First, I think the Fed's commitment to the long-run 2 percent inflation objective has served the country well. I recognize that the word “commitment” in that sentence might be more important than the specific 2 percent target value. But credibility and commitment imply objectives that, though not immutable, rarely change—and then only with a clear consensus on a better course. With respect to changing the 2 percent objective as a longer-run goal, my feet are not set in concrete, but they are in pretty thick mud.
Second, former Fed chairman Alan Greenspan offered a well-known definition of what it means for a central bank to succeed on a charge to deliver price stability. Paraphrasing, Chairman Greenspan suggested that the goal of price stability is met when households and businesses ignore inflation when making key economic decisions that affect their financial futures.
I agree with the Greenspan definition, and I believe that the 2 percent inflation objective has helped us meet that criterion. But I don't think we have met the Greenspan definition of price stability solely because 2 percent is a sufficiently low rate of inflation. I think it is also critical that deviations of prices away from a path implied by an average inflation rate of 2 percent have, in the United States, been relatively small.
Here's how I see it: until recently, the 2 percent inflation objective in the United States has essentially functioned as a price-level target centered on a 2 percent growth path. The orange line in the chart below shows what a price-level path of 2 percent growth would have been over the period from 1995 to 2012. I chose to begin with 1995 because it arguably began the Fed's era of inflation targeting. Why does the chart end in 2012? I'll get to that tomorrow, when I lay out a specific hypothetical plan.
The green line in the chart is the actual path of the price level, as measured by the price index for personal consumption expenditures. The chart explains what I mean when I say the FOMC effectively delivered on a 2 percent price-level target. Over the period depicted in this chart, the price level did not deviate much from the 2 percent path.
I believe the inflation outcome apparent in the chart is highly desirable. Why? Because the resulting price-level path satisfies what I will call the “principle of bounded nominal uncertainty.” In essence, the principle of bounded nominal uncertainty means that if you save a dollar today you can be “reasonably confident” about what the real value of that saving will be in the future.
For example, suppose that in January 1995 you had socked away $1 in cash that you intended to spend exactly five years later. If you believed that the Fed was going to deliver an average annual inflation rate of 2 percent over this period, you'd expect that dollar to be worth about 90 cents in real purchasing power by January 2000. (Recall that cash depreciates at the rate of inflation—I didn't say this was the best way to save!)
In fact, because the price level's realized path over that time hewed very closely to the expected 2 percent growth path, the actual value of the dollar you saved would have been very close to the 90 cents you expected. And this, I think, epitomizes a reasonable definition of price stability. If you and I enter into a contract to exchange a dollar at some future date, we can confidently predict within some range that dollar's purchasing power. Good monetary policy, in my view, will satisfy the principle of bounded nominal uncertainty.
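The "about 90 cents" in this example is just five years of compounding at 2 percent:

```python
# The cash-savings example from the text: a dollar socked away in January 1995
# and spent exactly five years later, with inflation averaging 2 percent per year.

years = 5
inflation = 0.02

# Cash depreciates at the rate of inflation, so its real value shrinks
# by the cumulative price-level increase.
real_value = 1 / (1 + inflation) ** years

print(round(real_value, 3))  # about 0.906, the "about 90 cents" in the text
```

Because realized inflation over 1995–2000 hewed closely to 2 percent, the dollar's actual real value landed very near this expected figure, which is the point of the example.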
This is the starting point of my thinking about a useful monetary policy framework—and how I think about price-level targeting generally. Tomorrow, I will expand on this thought and offer a specific example of how a price-level target might be put into operation in a way that is both flexible and respectful of the principle of bounded nominal uncertainty.
March 26, 2018
Thoughts on a Long-Run Monetary Policy Framework: Framing the Question
"Should the Fed stick with the 2 percent inflation target or rethink it?" This was the very good question posed in a special conference hosted by the Brookings Institution this past January. Over the course of roughly two decades prior to the global financial crisis, a consensus had formed among monetary-policy experts and practitioners the world over that something like 2 percent is an appropriate goal—maybe even the optimal goal—for central banks to pursue. So why reconsider that target now?
The answer to that question starts with another consensus that has emerged in the aftermath of the global financial crisis. In particular, there is now a widespread belief that, once monetary policy has fully normalized, the federal funds rate—the Federal Open Market Committee's (FOMC) reference policy rate—will settle significantly below historical norms.
Several of my colleagues have spoken cogently about this phenomenon, which is often cast in terms of concepts like r-star, the natural rate of interest, the equilibrium rate of interest, or (in the case of my colleague Jim Bullard), r-dagger. I like to think in terms of the "neutral" rate of interest; that is, the level of the policy rate consistent with the FOMC meeting its longer-run goals of price stability and maximum sustainable growth. In other words, the level of the federal funds rate should be consistent with 2 percent inflation, the unemployment rate at its sustainable level, and real gross domestic product at its potential.
Estimates of the neutral policy rate are subject to imprecision and debate. But a reasonable notion can be gleaned from the range of projections for the long-run federal funds rate reported in the Summary of Economic Projections (SEP) released just after last week's FOMC meeting. According to the latest SEP, neutral would be in a range of 2.3 to 3.0 percent.
For some historical context, in the latter half of the 1990s, as the 2 percent inflation consensus was solidifying, the neutral federal funds rate would have been pegged in a range of something like 4.0 to 5.0 percent, roughly 2 percentage points higher than the range considered to be neutral today.
The implication for monetary policy is clear. If interest rates settle at levels that are historically low, policymakers will have limited scope for cutting rates in the event of a significant economic downturn (or at least more limited scope than they had in the past). I think it's fair to say that even relatively modest downturns are likely to yield policy reactions that drive the federal funds rate to zero, as happened in the Great Recession.
My view is that the nontraditional tools deployed after December 2008, when the federal funds rate effectively fell to zero, were effective. But it is accurate to say that our experience with these tools is limited, and the effectiveness of those tools remains controversial. I join the opinion that, all else equal, it would be vastly preferable to conduct monetary policy through the time-tested approach of raising and lowering short-term policy rates, if such an approach is available.
This point is where the challenge to the 2 percent inflation target enters the picture. The neutral rate I have been describing is a nominal rate. It is roughly the sum of an inflation-adjusted real rate—determined by fundamental saving and investment decisions in the global economy—and the rate of inflation. The downward drift in the neutral rate I have been describing is attributable to a downward drift in the inflation-adjusted real rate. A great deal of research has documented this phenomenon, such as some influential research by San Francisco Fed president John Williams and Thomas Laubach, the head of the monetary division at the Fed's Board of Governors.
In the long run, a central bank cannot reliably control the real rate of interest. So if we accept the following premises...
- A neutral rate that is too low to give the central bank enough room to fight even run-of-the-mill downturns is problematic;
- Cutting rates is the optimal strategy for addressing downturns; and
- The real interest rate is beyond the control of the central bank in the long run
...then we must necessarily accept that raising the neutral rate, thus affording monetary policymakers the desired rate-cutting scope when needed, would require raising the long-run inflation rate. Hence the argument for rethinking the Fed's 2 percent inflation target.
But is that the only option? And is it the best option?
The answer to the first question is clearly no. The purpose of the Brookings Institution sessions was to address the pros and cons of the different strategies for dealing with the low neutral rate problem, and I commend them to you. But in upcoming macroblog posts, I want to share some of my thoughts on the second question.
Tomorrow, I will review some of the proposed options and explain why I am attracted to one in particular: price-level targeting. On Wednesday, I will propose what I think is a potentially useful model for implementing a price-level targeting scheme in practice. I want to emphasize that these are preliminary thoughts, offered in the spirit of stimulating the conversation and debate. I welcome that conversation and debate and look forward to making my contribution to moving it forward.
September 08, 2017
When Health Insurance and Its Financial Cushion Disappear
Personal health care costs can skyrocket with a new diagnosis or accident, often leading to catastrophic financial costs for people. Health insurance plays an important role in protecting individuals from unexpected large financial shocks as a result of adverse health events. Just as homeowner's insurance helps protect you from financial devastation if your house burns down, health insurance helps protect you from burning through your savings because of a heart attack. This 2008 report from the Commonwealth Fund shows that the uninsured are far more likely to have to use their savings and reduce other types of spending to pay medical bills.
Much research has been done on the impact of health insurance on financial and health outcomes. (This paper, for example, summarizes the history and impact of Medicaid.) However, most of the studies look at the case of individuals who are gaining health insurance. In a recent Atlanta Fed working paper and the related podcast episode, we measure the impact of losing public health insurance on measures of financial well-being such as credit scores, delinquent debt eligible to be sent to debt collectors, and bankruptcies. We performed these measurements by studying the case of Tennessee's Medicaid program, known as TennCare, in the mid-2000s. At that time, a large statewide Medicaid expansion that began in the 1990s ran into financial difficulties and was scaled back. As the following chart shows, some 170,000 individuals were removed from TennCare rolls between 2005 and 2006.
Our analysis of this episode, using data from the New York Fed's Consumer Credit Panel/Equifax, revealed some striking findings. Individuals who lost health insurance experienced lower credit scores, more debt eligible to be sent to collections, and a higher incidence of bankruptcy. Those who were already financially vulnerable suffered the worst. In particular, individuals who already had poor credit, as measured by Fannie Mae's lowest creditworthiness categories, and then lost Medicaid saw their credit scores fall by close to 40 points on average and were almost 17 percent more likely to have their debt sent to collection agencies. Our analysis also finds that gaining or losing health insurance is not symmetric in its impact—losing insurance has larger negative financial effects than the positive financial impacts of gaining insurance.
Our results provide evidence that losing Medicaid coverage not only removes inexpensive access to health care but also eliminates an important layer of financial protection. A cost-benefit analysis of proposed cuts to Medicaid coverage (see here, here, and here for a discussion of recent legislative efforts in the U.S. Congress) would need to consider the negative financial consequences for individuals of the type that we have identified.
September 07, 2017
What Is the "Right" Policy Rate?
What is the right monetary policy rate? The Cleveland Fed, via Michael Derby in the Wall Street Journal, provides one answer—or rather, one set of answers:
The various flavors of monetary policy rules now out there offer formulas that suggest an ideal setting for policy based on economic variables. The best known of these is the Taylor Rule, named for Stanford University's John Taylor, its author. Economists have produced numerous variations on the Taylor Rule that don't always offer a similar story...
There is no agreement in the research literature on a single "best" rule, and different rules can sometimes generate very different values for the federal funds rate, both for the present and for the future, the Cleveland Fed said. Looking across multiple economic forecasts helps to capture some of the uncertainty surrounding the economic outlook and, by extension, monetary policy prospects.
Agreed, and this is the philosophy behind both the Cleveland Fed's calculations based on Seven Simple Monetary Policy Rules and our own Taylor Rule Utility. The two tools complement one another nicely: Cleveland's version emphasizes forecasts of the federal funds rate under different rules, while Atlanta's utility focuses on the current setting of the rate under a (different, but overlapping) set of rules and a variety of the key variables that appear in the Taylor Rule (namely, the resource gap, the inflation gap, and the "neutral" policy rate). We update the Taylor Rule Utility twice a month, after the Consumer Price Index and Personal Income and Outlays releases, and use a variety of survey- and model-based nowcasts to fill in yet-to-be-released source data for the latest quarter.
We're introducing an enhancement to our Taylor Rule Utility page: a "heatmap" that provides a color-coded view of Taylor Rule prescriptions (relative to a selected benchmark) for five measures of the resource gap and five measures of the neutral policy rate. We find the heatmap a useful way to quickly compare the actual fed funds rate with current prescriptions from a relatively large number of rules.
In constructing the heatmap, users have options for measuring the inflation gap and setting the value of the "smoothing parameter" in the policy rule, as well as for establishing the weight placed on the resource gap and the benchmark against which the policy rule is compared. (The inflation gap is the difference between actual inflation and the Federal Open Market Committee's 2 percent longer-term objective. The smoothing parameter is the degree to which the rule is inertial, meaning that it puts weight on maintaining the fed funds rate at its previous value.)
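To make these ingredients concrete, here is a minimal sketch of an inertial Taylor-type rule, assuming the common specification in which the prescribed rate is a smoothing-weighted average of last period's rate and a non-inertial Taylor prescription. The function name and parameter values are illustrative assumptions, not the utility's actual code:

```python
# A minimal sketch of an inertial Taylor-type rule, assuming the common
# specification i_t = rho*i_{t-1} + (1 - rho)*(r* + pi + 0.5*(pi - pi*) + w*gap).
def taylor_rule_prescription(r_star, inflation, resource_gap,
                             inflation_target=2.0, gap_weight=1.0,
                             smoothing=0.0, prev_rate=0.0):
    """Return a prescribed nominal fed funds rate, in percent."""
    non_inertial = (r_star + inflation
                    + 0.5 * (inflation - inflation_target)  # inflation gap term
                    + gap_weight * resource_gap)            # resource gap term
    # The smoothing parameter blends in last period's rate (inertia).
    return smoothing * prev_rate + (1.0 - smoothing) * non_inertial

# Non-inertial rule, unit weight on the resource gap, a 2 percent neutral rate,
# core inflation of 1.4 percent, and a resource gap of -0.3 percent:
rate = taylor_rule_prescription(r_star=2.0, inflation=1.4, resource_gap=-0.3)
```

With `smoothing` set to zero the rule ignores the previous rate entirely; values near one keep the prescription close to where the rate already is.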
For example, assume we (a) measure inflation using the four-quarter change in the core personal consumption expenditures price index; (b) put a weight of 1 on the resource gap (that is, specify the rule so that a percentage point change in the resource gap implies a 1 percentage point change in the rule's prescribed rate); and (c) specify that the policy rule is not inertial (that is, it places no weight on last period's policy rate). Below is the heatmap corresponding to this policy rule specification, comparing the rule's prescriptions to the current midpoint of the fed funds rate target range:
We should note that all of the terms in the heatmap are described in detail in the "Overview of Data" and "Detailed Description of Data" tabs on the Taylor Rule Utility page. In short, U-3 (the standard unemployment rate) and U-6 are measures of labor underutilization defined here. We introduced ZPOP, the utilization-to-population ratio, in this macroblog post. "Emp-Pop" is the employment-population ratio. The natural (real) interest rate is denoted by r*. The abbreviations for the last three row labels denote estimates of r* from Kathryn Holston, Thomas Laubach, and John C. Williams; from Thomas Laubach and John C. Williams; and from Thomas Lubik and Christian Matthes.
The color coding (described on the webpage) should be somewhat intuitive. Shades of red mean the midpoint of the current policy rate range is at least 25 basis points above the rule prescription, shades of green mean that the midpoint is more than 25 basis points below the prescription, and shades of white mean the midpoint is within 25 basis points of the rule.
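The thresholds just described can be sketched as a small classification function. This is only a sketch of the logic stated above; the webpage's actual shading gradations within each color are omitted:

```python
# A sketch of the color-coding thresholds described above (the webpage's
# finer shading gradations within each color are not reproduced here).
def heatmap_color(midpoint, prescription, band=0.25):
    """Classify the policy-rate midpoint against a rule prescription (in percent)."""
    diff = midpoint - prescription
    if diff >= band:
        return "red"    # midpoint at least 25 basis points above the prescription
    if diff < -band:
        return "green"  # midpoint more than 25 basis points below the prescription
    return "white"      # midpoint within 25 basis points of the prescription
```

For instance, a 1.125 percent midpoint against a 2.0 percent prescription would classify as green, signaling the rule prescribes a higher rate than the current setting.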
The heatmap above illustrates "variations on the Taylor Rule that don't always offer a similar story": the colors range from a shade of red to shades of green. But certain themes do emerge. If, for example, you believe that the neutral real rate of interest is quite low (the Laubach-Williams and Lubik-Matthes estimates in the bottom two rows are −0.22 and −0.06), your belief about the magnitude of the resource gap is critical to determining whether this particular rule suggests that the policy rate is already too high, has a bit more room to increase, or is just about right. On the other hand, if you adhere to the original Taylor Rule and its assumption of a 2 percent long-run neutral rate (the top row of the chart), there isn't much ambiguity: the current rate is well below what the rule indicates.
"[D]ifferent rules can sometimes generate very different values for the federal funds rate, both for the present and for the future." Indeed.
April 19, 2017
The Fed’s Inflation Goal: What Does the Public Know?
The Federal Open Market Committee (FOMC) has had an explicit inflation target of 2 percent since January 25, 2012. In its statement announcing the target, the FOMC said, "Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee's ability to promote maximum employment in the face of significant economic disturbances."
If communicating this goal to the public enhances the effectiveness of monetary policy, one natural question is whether the public is aware of this 2 percent target. We've posed this question a few times to our Business Inflation Expectations Panel, a set of roughly 450 private, nonfarm firms in the Southeast. These firms range in size from large corporations to owner-operators.
Last week, we asked them again. Specifically, the question is:
What annual rate of inflation do you think the Federal Reserve is aiming for over the long run?
Unsurprisingly, to us at least—and maybe to you if you're a regular macroblog reader—the typical respondent answered 2 percent (the same answer our panel gave us in 2015 and back in 2011). At a minimum, southeastern firms appear to have gotten and retained the message.
So, why the blog post? Careful Fed watchers noticed the inclusion of a modifier to describe the 2 percent objective in the March 2017 FOMC statement (emphasis added): "The Committee will carefully monitor actual and expected inflation developments relative to its symmetric inflation goal." And especially eagle-eyed Fed watchers will remember that the Committee amended its statement of longer-run goals in January 2016, clarifying that its inflation objective is indeed symmetric.
The idea behind a symmetric inflation target is that the central bank views both overshooting and falling short of the 2 percent target as equally bad. As then Minneapolis Fed President Kocherlakota stated in 2014, "Without symmetry, inflation might spend considerably more time below 2 percent than above 2 percent. Inflation persistently below the 2 percent target could create doubts in households and businesses about whether the FOMC is truly aiming for 2 percent inflation, or some lower number."
Do such doubts actually exist? In the latest survey, as a follow-up to our question about the numerical target, we asked our panel whether they thought the Fed was more likely to accept inflation above its target, more likely to accept inflation below it, or equally likely to accept either. The following chart depicts the responses.
One in five respondents believes the Federal Reserve is more likely to accept inflation above its target, while nearly 40 percent believe it is more likely to accept inflation below its target. Twenty-five percent of firms believe the Federal Reserve is equally likely to accept inflation above or below its target. The remainder of respondents were unsure. This pattern was similar across firm sizes and industries.
In other words, more firms see the inflation target as a threshold (or ceiling) that the Fed is averse to crossing than see it as a symmetric target.
Lately, various Committee members (here, here, and in Chair Yellen's latest press conference at the 42-minute mark) have discussed the symmetry of the Committee's inflation target. Our evidence suggests that the message may not have quite sunk in yet.
March 30, 2017
Bad Debt Is Bad for Your Health
The amount of debt held by U.S. households grew steadily during the 2000s, with some leveling off after the recession. However, the level of debt remains elevated relative to the turn of the century, a fact easily seen by examining changes in debt held by individuals from 2000 to 2015 (the blue line in the chart below).
Not only is the amount of debt elevated for U.S. households, but the proportion of delinquent household debt has also fluctuated significantly, as the red line in the above chart depicts.
The amount of debt that is severely delinquent (90 days or more past due) peaked during the last recession and remains above prerecession levels. The Federal Reserve Bank of New York reports these measures of financial health quarterly.
In a recent working paper, we demonstrate a potential causal link between these fluctuations in delinquency and mortality. (A recent Atlanta Fed podcast episode also discussed our findings.) By isolating unanticipated variations in debt and delinquency not caused by worsening health, we show that carrying debt—and delinquent debt in particular—has an adverse effect on mortality rates.
Our results suggest that the decline in the quality of debt portfolios during the Great Recession was associated with an additional 5.7 deaths per 100,000 people, or just over 12,000 additional deaths each year during the worst part of the recession (a calculation based on census population estimates found here). To put this rate in perspective, in 2014 the death rate from homicides was 5.0 per 100,000 people, and motor vehicle accidents caused 10.7 deaths per 100,000 people.
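As a back-of-envelope check on the arithmetic above, the rate and the annual death count are consistent with a population base of roughly 211 million; that base is an inference from the cited figures, not a number taken from the paper:

```python
# Back-of-envelope check of the figures cited above. The population base of
# roughly 211 million (implied by the cited rate and death count) is an
# assumption here, not data from the paper.
population = 211_000_000
additional_deaths_per_100k = 5.7
additional_deaths_per_year = additional_deaths_per_100k * population / 100_000
# about 12,000 additional deaths per year
```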
It is well understood that an individual experiencing a large and unexpected decline in health can encounter financial difficulties, and that this sort of event is a major cause of personal bankruptcy. Our findings suggest that significant unexpected financial problems can themselves lead to worse health outcomes. This link between delinquent debt and health outcomes provides more reason for public policy discussions to take seriously the nexus between financial well-being and public health.
December 16, 2016
The Impact of Extraordinary Policy on Interest and Foreign Exchange Rates
Central banks in the developed countries have adopted a variety of extraordinary measures since the financial crisis, including large-scale asset purchases and very low (and in some cases negative) policy rates in an effort to boost economic activity. The Atlanta Fed recently hosted a workshop titled "The Impact of Extraordinary Monetary Policy on the Financial Sector," which discussed these measures. This macroblog post discusses the highlights of three papers related to the impact of such policy on interest rates and foreign exchange rates. A companion Notes from the Vault reviews papers that examined how those policies may have affected financial institutions, including their lending.
Prior to the crisis, central banks targeted short-term interest rates as a way of influencing the rest of the yield curve, which in turn affected aggregate demand. However, as short-term rates approached zero, central banks' ability to further cut their target rate diminished. As a substitute, the central banks of many developed countries (including the Federal Reserve, the European Central Bank, and the Bank of Japan) began to undertake large-scale purchases of bonds in an attempt to influence longer-term rates.
Central bank asset purchases appear to have had some beneficial effect, but exactly how these purchases influenced rates has remained an open question. One of the leading hypotheses is that the purchases did not have any direct effect, but rather served as a signal that the central bank was committed to maintaining very low short-term rates for an extended period. A second hypothesis is that central bank purchases of longer-dated obligations resulted in long-term investors bidding up the price of remaining longer-maturity government and private debt.
The second hypothesis was tested in a paper by Federal Reserve Board economists Jeffrey Huther, Jane Ihrig, Elizabeth Klee, Alexander Boote, and Richard Sambasivam. Their starting point was the view that a "neutral" policy would have the Fed's System Open Market Account (SOMA) closely match the maturity distribution of the stock of outstanding Treasury securities. In their statistical tests, they find support for the hypothesis that deviations from this neutrality influence market rates. In particular, they find that the term premium in longer-term rates declines significantly as the duration of the SOMA portfolio grows relative to that of the stock of outstanding Treasury debt.
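The duration comparison at the heart of this test can be illustrated with a value-weighted average. The holdings and durations below are made-up numbers for illustration, not the authors' data or method:

```python
# An illustrative sketch of the duration comparison underlying this test:
# the value-weighted average duration of SOMA holdings versus the outstanding
# Treasury stock. All holdings and durations below are made-up numbers.
def weighted_duration(holdings):
    """holdings: list of (market_value, duration_in_years) pairs."""
    total_value = sum(value for value, _ in holdings)
    return sum(value * duration for value, duration in holdings) / total_value

soma = [(500, 2.0), (1000, 7.0), (500, 15.0)]           # $ billions, years
outstanding = [(5000, 2.0), (4000, 7.0), (3000, 15.0)]  # $ billions, years

# A positive gap means the SOMA portfolio tilts toward longer maturities than
# the outstanding stock -- the kind of deviation the paper links to term premia.
duration_gap = weighted_duration(soma) - weighted_duration(outstanding)
```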
The central banks' large-scale asset purchases not only took longer-dated assets out of the economy, but they also forced banks to increase their holdings of reserves. Large central banks now pay interest on reserves (or in some cases charge interest on reserve holdings) at an overnight rate that the central bank can change at any time. As a result, these purchases can significantly reduce the average duration (or maturity) of a bank's portfolio below what the banks found optimal given the term structure that existed prior to the purchases. Jens H. E. Christensen from the Federal Reserve Bank of San Francisco and Signe Krogstrup from the International Monetary Fund have a paper in which they hypothesize that banks respond to this shortening of duration by bidding up the price of longer-dated securities (thereby reducing their yield) to restore optimality.
The difficulty with testing Christensen and Krogstrup's hypothesis is that in most cases central banks were expanding bank reserves by buying longer-dated securities, thus making it difficult to disentangle their respective effects. However, in 2011 the Swiss National Bank undertook a series of three policy moves designed to produce a large, rapid increase in bank reserves. Importantly, these moves were an attempt to counter perceived overvaluation of the Swiss franc and did not involve the purchase of longer-dated bonds. In a follow-up empirical paper, Christensen and Krogstrup exploit this unique policy setting to test whether Swiss bond rates declined in response to the increase in reserves. They find that the third and largest of these increases in reserves was associated with a statistically and economically significant fall in term premia, implying that the increase did lower longer-term rates.
Although developed countries' monetary policy has focused on their domestic economies, these policies can have significant spillovers into emerging countries. Large changes in the rates of return available in developed countries can lead investors to shift funds into and out of emerging countries, causing potentially undesirable large swings in the foreign exchange rate of these emerging countries. Developing countries' central banks may try to counteract these swings via intervention in the foreign exchange market, but the effectiveness of sterilized intervention is the subject of some debate. (Sterilized intervention occurs when the central bank buys or sells foreign currency, but then takes offsetting measures to prevent these from changing bank reserves.)
Once again, determining whether exchange rates are influenced and, if so, by what mechanism can be econometrically difficult. Marcos Chamon from the International Monetary Fund, Márcio Garcia from PUC-Rio, and Laura Souza from Itaú Unibanco examine the efforts of the Brazilian Central Bank to stabilize the Brazilian real in the aftermath of the so-called "taper tantrum," the name given to the sharp jump in U.S. bond yields and the foreign exchange value of the U.S. dollar after the May 23, 2013, statement by then-Federal Reserve Chairman Ben Bernanke that the Federal Reserve would slow (or taper) the rate at which it was purchasing Treasury bonds (see a brief essay by Christopher J. Neely). Chamon, Garcia, and Souza take advantage of the fact that Brazil preannounced its intervention policy, which allows them to separate the impact of the announcement to intervene from the impact of the intervention itself. They find that the Brazilian Central Bank's intervention was effective in strengthening the value of the real relative to a basket of comparable currencies.
All three studies faced the difficult challenge of linking specific central bank actions to policy outcomes, and each tackled it in innovative ways. The evidence they provide suggests that central banks can use extraordinary policies to influence interest and foreign exchange rates.