About


The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.


September 08, 2017


When Health Insurance and Its Financial Cushion Disappear

Personal health care costs can skyrocket after a new diagnosis or an accident, often with catastrophic financial consequences. Health insurance plays an important role in protecting individuals from large, unexpected financial shocks that result from adverse health events. Just as homeowner's insurance helps protect you from financial devastation if your house burns down, health insurance helps protect you from burning through your savings because of a heart attack. This 2008 report from the Commonwealth Fund shows that the uninsured are far more likely to have to use their savings and reduce other types of spending to pay medical bills.

Much research has been done on the impact of health insurance on financial and health outcomes. (This paper, for example, summarizes the history and impact of Medicaid.) However, most studies look at individuals who are gaining health insurance. In a recent Atlanta Fed working paper and the related podcast episode, we measure the impact of losing public health insurance on measures of financial well-being such as credit scores, delinquent debt eligible to be sent to debt collectors, and bankruptcies. We did so by studying the case of Tennessee's Medicaid program, known as TennCare, in the mid-2000s. At that time, a large statewide Medicaid expansion that began in the 1990s ran into financial difficulties and was scaled back. As the following chart shows, some 170,000 individuals were removed from the TennCare rolls between 2005 and 2006.

Our analysis of this episode, using data from the New York Fed's Consumer Credit Panel/Equifax, revealed some striking findings. Individuals who lost health insurance experienced lower credit scores, more debt eligible to be sent to collections, and a higher incidence of bankruptcy. Those who were already financially vulnerable suffered the worst. In particular, individuals who already had poor credit, as measured by Fannie Mae's lowest creditworthiness categories, and then lost Medicaid saw their credit scores fall by close to 40 points on average and were almost 17 percent more likely to have their debt sent to collection agencies. Our analysis also finds that gaining and losing health insurance are not symmetric in their impact: losing insurance has larger negative financial effects than the positive financial effects of gaining insurance.

Our results provide evidence that losing Medicaid coverage not only removes inexpensive access to health care but also eliminates an important layer of financial protection. A cost-benefit analysis of proposed cuts to Medicaid coverage (see here, here, and here for a discussion of recent legislative efforts in the U.S. Congress) would need to consider the negative financial consequences for individuals of the type that we have identified.

September 8, 2017 in Economic conditions, Monetary Policy | Permalink

Comments

Post a comment

Comments are moderated and will not appear until the moderator has approved them.

If you have a TypeKey or TypePad account, please Sign in

September 07, 2017


What Is the "Right" Policy Rate?

What is the right monetary policy rate? The Cleveland Fed, via Michael Derby in the Wall Street Journal, provides one answer—or rather, one set of answers:

The various flavors of monetary policy rules now out there offer formulas that suggest an ideal setting for policy based on economic variables. The best known of these is the Taylor Rule, named for Stanford University's John Taylor, its author. Economists have produced numerous variations on the Taylor Rule that don't always offer a similar story...

There is no agreement in the research literature on a single "best" rule, and different rules can sometimes generate very different values for the federal funds rate, both for the present and for the future, the Cleveland Fed said. Looking across multiple economic forecasts helps to capture some of the uncertainty surrounding the economic outlook and, by extension, monetary policy prospects.

Agreed, and this is the philosophy behind both the Cleveland Fed's calculations based on Seven Simple Monetary Policy Rules and our own Taylor Rule Utility. The two tools complement one another nicely: Cleveland's version emphasizes forecasts for the federal funds rate over different rules, while Atlanta's utility focuses on the current setting of the rate over a (different, but overlapping) set of rules for a variety of the key variables that appear in the Taylor Rule (namely, the resource gap, the inflation gap, and the "neutral" policy rate). We update the Taylor Rule Utility twice a month, after the Consumer Price Index and Personal Income and Outlays reports, and use a variety of survey- and model-based nowcasts to fill in yet-to-be-released source data for the latest quarter.

We're introducing an enhancement to our Taylor Rule utility page, a "heatmap" that allows the construction of a color-coded view of Taylor Rule prescriptions (relative to a selected benchmark) for five different measures of the resource gap and five different measures of the neutral policy rate. We find the heatmap is a useful way to quickly compare the actual fed funds rate with current prescriptions for the rate from a relatively large number of rules.

In constructing the heatmap, users have options for measuring the inflation gap and setting the value of the "smoothing parameter" in the policy rule, as well as establishing the weight placed on the resource gap and the benchmark against which the policy rule is compared. (The inflation gap is the difference between actual inflation and the Federal Open Market Committee's 2 percent longer-term objective. The smoothing parameter is the degree to which the rule is inertial, meaning that it puts weight on maintaining the fed funds rate at its previous value.)
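The role of the smoothing parameter can be sketched in a few lines. This is the standard inertial-rule form; the utility's exact specification is documented on its page:

```python
def inertial_rule(previous_rate, rule_prescription, smoothing=0.0):
    # With smoothing parameter rho, the prescribed rate is a weighted
    # average of last period's rate and the non-inertial prescription:
    #   i_t = rho * i_{t-1} + (1 - rho) * rule_t
    # rho = 0 reproduces the non-inertial rule exactly.
    return smoothing * previous_rate + (1.0 - smoothing) * rule_prescription
```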

For example, assume we (a) measure inflation using the four-quarter change in the core personal consumption expenditures price index; (b) put a weight of 1 on the resource gap (that is, specify the rule so that a percentage point change in the resource gap implies a 1 percentage point change in the rule's prescribed rate); and (c) specify that the policy rule is not inertial (that is, it places no weight on last period's policy rate). Below is the heatmap corresponding to this policy rule specification, comparing the rule's prescription to the current midpoint of the fed funds rate target range:
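A rough sketch of this specification in code, using hypothetical input values (the inputs below are illustrative only; the utility's actual data sources are described on its page):

```python
def taylor_prescription(r_star, inflation, resource_gap,
                        inflation_target=2.0, gap_weight=1.0,
                        inflation_gap_weight=0.5):
    # Non-inertial Taylor-type rule: prescribed nominal funds rate is
    # r* plus inflation, plus weighted inflation and resource gaps.
    return (r_star + inflation
            + inflation_gap_weight * (inflation - inflation_target)
            + gap_weight * resource_gap)

# Hypothetical inputs: core PCE inflation of 1.4 percent, a resource
# gap of 0.5 percentage points, and a Laubach-Williams-style r* of -0.22.
prescribed = taylor_prescription(r_star=-0.22, inflation=1.4, resource_gap=0.5)
```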

We should note that all of the terms in the heatmap are described in detail in the "Overview of Data" and "Detailed Description of Data" tabs on the Taylor Rule Utility page. In short, U-3 (the standard unemployment rate) and U-6 are measures of labor underutilization defined here. We introduced ZPOP, the utilization-to-population ratio, in this macroblog post. "Emp-Pop" is the employment-population ratio. The natural (real) interest rate is denoted by r*. The abbreviations for the last three row labels denote estimates of r* from Kathryn Holston, Thomas Laubach, and John C. Williams, Thomas Laubach and John C. Williams, and Thomas Lubik and Christian Matthes.

The color coding (described on the webpage) should be somewhat intuitive. Shades of red mean the midpoint of the current policy rate range is at least 25 basis points above the rule prescription, shades of green mean that the midpoint is more than 25 basis points below the prescription, and shades of white mean the midpoint is within 25 basis points of the rule.
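In code, the banding logic amounts to a simple classification. This is a sketch; the exact boundary conventions are described on the webpage:

```python
def heatmap_color(midpoint, prescription, band=0.25):
    # Compare the target-range midpoint with a rule prescription,
    # both in percentage points (band=0.25 means 25 basis points).
    diff = midpoint - prescription
    if diff >= band:
        return "red"    # midpoint at least 25 bp above the prescription
    if diff < -band:
        return "green"  # midpoint more than 25 bp below the prescription
    return "white"      # midpoint within 25 bp of the prescription
```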

The heatmap above has "variations on the Taylor Rule that don't always offer a similar story" because the colors range from a shade of red to shades of green. But certain themes do emerge. If, for example, you believe that the neutral real rate of interest is quite low (the Laubach-Williams and Lubik-Matthes estimates in the bottom two rows are −0.22 and −0.06), then your belief about the magnitude of the resource gap would be critical to determining whether this particular rule suggests that the policy rate is already too high, has a bit more room to increase, or is just about right. On the other hand, if you are an adherent of the original Taylor Rule and its assumption that a long-run neutral rate of 2 percent (the top row of the chart) is the right way to think about policy, there isn't much ambiguity in the conclusion that the current rate is well below what the rule indicates.

"[D]ifferent rules can sometimes generate very different values for the federal funds rate, both for the present and for the future." Indeed.

September 7, 2017 in Business Cycles, Data Releases, Economics, Monetary Policy | Permalink


April 19, 2017


The Fed’s Inflation Goal: What Does the Public Know?

The Federal Open Market Committee (FOMC) has had an explicit inflation target of 2 percent since January 25, 2012. In its statement announcing the target, the FOMC said, "Communicating this inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability and moderate long-term interest rates and enhancing the Committee's ability to promote maximum employment in the face of significant economic disturbances."

If communicating this goal to the public enhances the effectiveness of monetary policy, one natural question is whether the public is aware of this 2 percent target. We've posed this question a few times to our Business Inflation Expectations Panel, a set of roughly 450 private, nonfarm firms in the Southeast. These firms range in size from large corporations to owner-operators.

Last week, we asked them again. Specifically, the question is:

What annual rate of inflation do you think the Federal Reserve is aiming for over the long run?

Unsurprisingly, to us at least—and maybe to you if you're a regular macroblog reader—the typical respondent answered 2 percent (the same answer our panel gave us in 2015 and back in 2011). At a minimum, southeastern firms appear to have gotten and retained the message.

So, why the blog post? Careful Fed watchers noticed the inclusion of a modifier to describe the 2 percent objective in the March 2017 FOMC statement (emphasis added): "The Committee will carefully monitor actual and expected inflation developments relative to its symmetric inflation goal." And especially eagle-eyed Fed watchers will remember that the Committee amended its statement of longer-run goals in January 2016, clarifying that its inflation objective is indeed symmetric.

The idea behind a symmetric inflation target is that the central bank views both overshooting and falling short of the 2 percent target as equally bad. As then Minneapolis Fed President Kocherlakota stated in 2014, "Without symmetry, inflation might spend considerably more time below 2 percent than above 2 percent. Inflation persistently below the 2 percent target could create doubts in households and businesses about whether the FOMC is truly aiming for 2 percent inflation, or some lower number."

Do such doubts actually exist? In a follow-up to our question about the numerical target, in the latest survey we asked our panel whether they thought the Fed was more, less, or equally likely to tolerate inflation below or above its target. The following chart depicts the responses.

One in five respondents believes the Federal Reserve is more likely to accept inflation above its target, while nearly 40 percent believe it is more likely to accept inflation below its target. Twenty-five percent of firms believe the Federal Reserve is equally likely to accept inflation above or below its target. The remainder of respondents were unsure. This pattern was similar across firm sizes and industries.

In other words, more firms see the inflation target as a threshold (or ceiling) that the Fed is averse to crossing than see it as a symmetric target.

Lately, various Committee members (here, here, and in Chair Yellen's latest press conference at the 42-minute mark) have discussed the symmetry about the Committee's inflation target. Our evidence suggests that the message may not have quite sunk in yet.



April 19, 2017 in Business Inflation Expectations, Federal Reserve and Monetary Policy, Inflation, Monetary Policy | Permalink

Comments

Maybe the message hasn't sunk in because actions speak louder than words, and the Fed seems to act like 2% is a ceiling?

Posted by: Mark Witte | April 22, 2017 at 11:33 PM


March 30, 2017


Bad Debt Is Bad for Your Health

The amount of debt held by U.S. households grew steadily during the 2000s, with some leveling off after the recession. However, the level of debt remains elevated relative to the turn of the century, a fact easily seen by examining changes in debt held by individuals from 2000 to 2015 (the blue line in the chart below).

Not only is the amount of debt elevated for U.S. households, but the proportion of delinquent household debt has also fluctuated significantly, as the red line in the above chart depicts.

The amount of debt that is severely delinquent (90 days or more past due) peaked during the last recession and remains above prerecession levels. The Federal Reserve Bank of New York reports these measures of financial health quarterly.

In a recent working paper, we demonstrate a potential causal link between these fluctuations in delinquency and mortality. (A recent Atlanta Fed podcast episode also discussed our findings.) By isolating unanticipated variations in debt and delinquency not caused by worsening health, we show that carrying debt—and delinquent debt in particular—has an adverse effect on mortality rates.

Our results suggest that the decline in the quality of debt portfolios during the Great Recession was associated with an additional 5.7 deaths per 100,000 people, or just over 12,000 additional deaths each year during the worst part of the recession (a calculation based on census population estimates found here). To put this rate in perspective, in 2014 the death rate from homicides was 5.0 per 100,000 people, and motor vehicle accidents caused 10.7 deaths per 100,000 people.
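A quick back-of-the-envelope check ties the two figures above together (the population figure is implied by the arithmetic, not taken from the paper):

```python
excess_rate = 5.7          # additional deaths per 100,000 people
deaths_per_year = 12_000   # approximate additional deaths per year

# Population size consistent with those two numbers:
implied_population = deaths_per_year / excess_rate * 100_000
# roughly 211 million, on the order of the U.S. adult population
# in the census estimates cited above
```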

It is well understood that an individual experiencing a large and unexpected decline in health can encounter financial difficulties, and that this sort of event is a major cause of personal bankruptcy. Our findings suggest that significant unexpected financial problems can themselves lead to worse health outcomes. This link between delinquent debt and health outcomes provides more reason for public policy discussions to take seriously the nexus between financial well-being and public health.

March 30, 2017 in Economic conditions, Monetary Policy | Permalink


December 16, 2016


The Impact of Extraordinary Policy on Interest and Foreign Exchange Rates

Central banks in the developed countries have adopted a variety of extraordinary measures since the financial crisis, including large-scale asset purchases and very low (and in some cases negative) policy rates in an effort to boost economic activity. The Atlanta Fed recently hosted a workshop titled "The Impact of Extraordinary Monetary Policy on the Financial Sector," which discussed these measures. This macroblog post discusses the highlights of three papers related to the impact of such policy on interest rates and foreign exchange rates. A companion Notes from the Vault reviews papers that examined how those policies may have affected financial institutions, including their lending.

Prior to the crisis, central banks targeted short-term interest rates as a way of influencing the rest of the yield curve, which in turn affected aggregate demand. However, as short-term rates approached zero, central banks' ability to further cut their target rate diminished. As a substitute, the central banks of many developed countries (including the Federal Reserve, the European Central Bank, and the Bank of Japan) began to undertake large-scale purchases of bonds in an attempt to influence longer-term rates.

Central bank asset purchases appear to have had some beneficial effect, but exactly how these purchases influenced rates has remained an open question. One of the leading hypotheses is that the purchases did not have any direct effect, but rather served as a signal that the central bank was committed to maintaining very low short-term rates for an extended period. A second hypothesis is that central bank purchases of longer-dated obligations resulted in long-term investors bidding up the price of remaining longer-maturity government and private debt.

The second hypothesis was tested in a paper by Federal Reserve Board economists Jeffrey Huther, Jane Ihrig, Elizabeth Klee, Alexander Boote, and Richard Sambasivam. Their starting point was the view that a "neutral" policy would have the Fed's System Open Market Account (SOMA) closely match the distribution of the stock of outstanding Treasury securities. In their statistical tests, they find support for the hypothesis that deviations from this neutral benchmark influence market rates. In particular, they find that the term premium in longer-term rates declines significantly as the duration of the SOMA portfolio grows relative to that of the stock of outstanding Treasury debt.

The central banks' large-scale asset purchases not only took longer-dated assets out of the economy, but they also forced banks to increase their holdings of reserves. Large central banks now pay interest on reserves (or in some cases charge interest on reserve holdings) at an overnight rate that the central bank can change at any time. As a result, these purchases can significantly reduce the average duration (or maturity) of a bank's portfolio below what the banks found optimal given the term structure that existed prior to the purchases. Jens H. E. Christensen from the Federal Reserve Bank of San Francisco and Signe Krogstrup from the International Monetary Fund have a paper in which they hypothesize that banks respond to this shortening of duration by bidding up the price of longer-dated securities (thereby reducing their yield) to restore optimality.

The difficulty with testing Christensen and Krogstrup's hypothesis is that in most cases central banks were expanding bank reserves by buying longer-dated securities, thus making it difficult to disentangle their respective effects. However, in 2011 the Swiss National Bank undertook a series of three policy moves designed to produce a large, rapid increase in bank reserves. Importantly, these moves were an attempt to counter perceived overvaluation of the Swiss franc and did not involve the purchase of longer-dated bonds. In a follow-up empirical paper, Christensen and Krogstrup exploit this unique policy setting to test whether Swiss bond rates declined in response to the increase in reserves. They find that the third and largest of these increases in reserves was associated with a statistically and economically significant fall in term premia, implying that the increase did lower longer-term rates.

Although developed countries' monetary policy has focused on their domestic economies, these policies can have significant spillovers into emerging countries. Large changes in the rates of return available in developed countries can lead investors to shift funds into and out of emerging countries, causing potentially undesirable large swings in those countries' exchange rates. Emerging countries' central banks may try to counteract these swings via intervention in the foreign exchange market, but the effectiveness of sterilized intervention is the subject of some debate. (Sterilized intervention occurs when the central bank buys or sells foreign currency but then takes offsetting measures to prevent these transactions from changing bank reserves.)

Once again, determining whether exchange rates are influenced and, if so, by what mechanism can be econometrically difficult. Marcos Chamon from the International Monetary Fund, Márcio Garcia from PUC-Rio, and Laura Souza from Itaú Unibanco examine the efforts of the Brazilian Central Bank to stabilize the Brazilian real in the aftermath of the so-called "taper tantrum." The taper tantrum is the name given to the sharp jump in U.S. bond yields and the foreign exchange value of the U.S. dollar after the May 23, 2013, statement by Board Chair Ben Bernanke that the Federal Reserve would slow (or taper) the rate at which it was purchasing Treasury bonds (see a brief essay by Christopher J. Neely). Chamon, Garcia, and Souza's paper takes advantage of the fact that Brazil preannounced its intervention policy, which allows them to separate the impact of the announcement to intervene from the intervention itself. They find that the Brazilian Central Bank's intervention was effective in strengthening the value of the real relative to a basket of comparable currencies.

All three of the studies faced the difficult challenge of linking specific central bank actions to policy outcomes, and each tackled that challenge in innovative ways. The evidence they provide suggests that central banks can use extraordinary policies to influence interest and foreign exchange rates.

December 16, 2016 in Exchange Rates and the Dollar, Interest Rates, Monetary Policy | Permalink

Comments

The assumption of your analysis ignores the excess bank reserves globally. For the first time in modern history since the Great Depression, the supply of excess bank reserves is greater than 0, currently at $2 trillion in the US. When excess bank reserves are greater 0 the interest rate paid on cash naturally goes to 0%. During the Great Depression and after 2008, the lack of demand for capital causes monetary policy to find new tools. Quantitative easing was advertised as stimulative but in reality was meant to act as a buyer last resort protecting the banking system from illiquidity. The unintended consequence is the weakening of the currency as the monetary base increases.

In the past, monetary tools were used to influence interest rates when the supply and demand of excess bank reserves were in an equilibrium at 0. A central bank could easily disrupt the equilibrium to affect interest rates with a very small balance sheet. Currently, the Fed must use its historically large $4 trillion balance sheet to pay interest on excess bank reserves to raise interest rates. No longer can a central bank influence interest rates without dealing with the excess bank reserves and lack of demand for capital.

Monetary policy must find the path for the greatest economic growth with low inflation and moderate long-term interest rates. When viewed through the lenses of excess bank reserves; the behavior of interest rates, the creation of capital and the mathematics, all fall into place. When the supply of cash is greater than demand the interest rate paid for cash naturally trades at 0% without central bank intervention. When cash trades near 0%, a small move in interest rates has a large nonlinear impact due to the effect of leverage on capital. During this period, global competition for good jobs and the lack of demand for capital keeps inflationary pressures at a minimum. Any interest rate paid or charged on the excess cash by the central banks is artificial and runs the risk of harming the economy or even worse creating asset bubbles. When central banks allow interest rates on cash to naturally be at 0%, this stimulates the economy until demand picks up again and interest rates naturally rise.

http://www.unicornfunds.com/macro/whitepaper_monetarypolicyforaglobaleconomy.html

Posted by: Peter del Rio | December 22, 2016 at 01:41 AM



September 08, 2016


Introducing the Atlanta Fed's Taylor Rule Utility

Simplicity isn't always a virtue, but when it comes to complex decision-making processes—for example, a central bank setting a policy rate—having simple benchmarks is often helpful. As students and observers of monetary policy well know, the common currency in the central banking world is the so-called "Taylor rule."

The Taylor rule is an equation introduced by John Taylor in a seminal 1993 paper that prescribes a value for the federal funds rate—the interest rate targeted by the Federal Open Market Committee (FOMC)—based on readings of inflation and the output gap. The output gap measures the percentage point difference between real gross domestic product (GDP) and an estimate of its trend or potential.
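Taylor's original specification can be written down in a few lines. This sketch uses his 1993 coefficients: an equilibrium real rate of 2 percent, an inflation target of 2 percent, and weights of 0.5 on each gap:

```python
def taylor_1993(inflation, output_gap):
    # i = 2 + pi + 0.5*(pi - 2) + 0.5*output_gap
    # where pi is inflation and both arguments are in percent.
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 0.5 * output_gap

# With inflation at target and a closed output gap, the rule
# prescribes a 4 percent nominal federal funds rate.
```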

Since 1993, academics and policymakers have introduced and used many alternative versions of the rule. The alternative forms of the rule can supply policy prescriptions that differ significantly from Taylor's original rule, as the following chart illustrates.

Effective federal funds rate and prescriptions from alternative versions of the Taylor rule

The green line shows the policy prescription from a rule identical to the one in Taylor's paper, apart from some minor changes in the inflation and output gap measures. The red line uses an alternative and commonly used rule that gives the output gap twice the weight used for the Taylor (1993) rule, derived from a 1999 paper by John Taylor. The red line also replaces the 2 percent value used in Taylor's 1993 paper with an estimate of the natural real interest rate, called r*, from a paper by Thomas Laubach, the Federal Reserve Board's director of monetary affairs, and John Williams, president of the San Francisco Fed. Federal Reserve Chair Janet Yellen also considered this alternative estimate of r* in a 2015 speech.

Both rules use real-time data. The Taylor (1993) rule prescribed a federal funds rate materially above the FOMC's 0 to 0.25 percent target range (in place from December 2008 to December 2015) as early as 2012. The alternative rule, by contrast, did not prescribe a positive fed funds rate at any point between the end of the 2007–09 recession and this quarter. The third-quarter prescriptions incorporate nowcasts constructed as described here. Neither the nowcasts nor the Taylor rule prescriptions themselves necessarily reflect the outlook or views of the Federal Reserve Bank of Atlanta or its president.

Additional variables that get plugged into this simple policy rule can influence the rate prescription. To help you sort through the most common variations, we at the Atlanta Fed have created a Taylor Rule Utility. Our Taylor Rule Utility gives you a number of choices for the inflation measure, inflation target, the natural real interest rate, and the resource gap. Besides the Congressional Budget Office–based output gap, alternative resource gap choices include those based on a U-6 labor underutilization gap and the ZPOP ratio. The latter ratio, which Atlanta Fed President Dennis Lockhart mentioned in a November 2015 speech while addressing the Taylor rule, gauges underemployment by measuring the share of the civilian population working their desired number of hours.

Many of the indicator choices use real-time data. The utility also allows you to establish your own weight for the resource gap and whether you want the prescription to put any weight on the previous quarter's federal funds rate. The default choices of the Taylor Rule Utility coincide with the Taylor (1993) rule shown in the above chart. Other organizations have their own versions of the Taylor Rule Utility (one of the nicer ones is available on the Cleveland Fed's Simple Monetary Policy Rules web page). You can find more information about the Cleveland Fed's web page on the Frequently Asked Questions page.

Although the Taylor rule and its alternative versions are only simple benchmarks, they can be useful tools for evaluating the importance of particular indicators. For example, we see that the difference in the prescriptions of the two rules plotted above has narrowed in recent years as slack has diminished. Even if the output gap were completely closed, however, the current prescriptions of the rules would differ by nearly 2 percentage points because of the use of different measures of r*. We hope you find the Taylor Rule Utility a useful tool to provide insight into issues like these. We plan on adding further enhancements to the utility in the near future and welcome any comments or suggestions for improvements.

September 8, 2016 in Banking, Federal Reserve and Monetary Policy, Monetary Policy | Permalink


July 18, 2016


What’s Moving the Market’s Views on the Path of Short-Term Rates?

As today's previous macroblog post highlighted, it seems that the United Kingdom's vote to leave the European Union—commonly known as Brexit—got the attention of business decision makers and made their business outlook more uncertain.

How might this uncertainty be weighing on financial market assessments of the future path for Fed policy? Several recent articles have opined, often citing the CME Group's popular FedWatch tool, that the Brexit vote increased the probability that the Federal Open Market Committee (FOMC) might reverse course and lower its target for the fed funds rate. For instance, the Wall Street Journal reported on June 28 that fed funds futures contracts implied a 15 percent probability that rates would increase 25 basis points and an 8 percent probability of a 25 basis point decrease by December's meeting. Prior to the Brexit vote, the probabilities of a 25 basis point increase and decrease by December's meeting were roughly 50 percent and 0 percent, respectively.
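The arithmetic behind such futures-based probabilities is straightforward in the simplest two-outcome case. This is a stylized sketch of the FedWatch-style calculation, not the CME's exact methodology:

```python
def hike_probability(futures_implied_rate, current_rate, step=0.25):
    # The futures-implied rate is 100 minus the futures price. If the
    # only outcomes are "no change" and "one 25 bp hike," the implied
    # probability of a hike is the fraction of the step priced in.
    p = (futures_implied_rate - current_rate) / step
    return min(max(p, 0.0), 1.0)

# Hypothetical: futures imply an average rate of 1.10 percent against
# a current effective rate of 1.04 percent.
```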

One limitation of using fed funds futures to assess market participant views is that this method is restricted to calculating the probability of a rate change by a fixed number of basis points. But what if we want to consider a broader set of possibilities for FOMC rate decisions? We could look at options on fed funds futures contracts to infer these probabilities. However, since the financial crisis their availability has been quite limited. Instead, we use options on Eurodollar futures contracts.

Eurodollars are deposits denominated in U.S. dollars but held in foreign banks or in the foreign branches of U.S. banks. The rate on these deposits is the (U.S. dollar) London Interbank Offered Rate (LIBOR). Because Eurodollar deposits are regulated similarly to fed funds and can be used to meet reserve requirements, financial institutions often view Eurodollars as close substitutes for fed funds. Although a number of factors can drive a wedge between otherwise identical fed funds and Eurodollar transactions, arbitrage and competitive forces tend to keep these differences relatively small.

However, using options on Eurodollar futures is not without its own challenges. Three-month Eurodollar futures can be thought of as the sum of an average three-month expected overnight rate (the item of specific interest) plus a term premium. Each possible target range for fed funds is associated with its own average expected overnight rate, and there may be some slippage between these two. Additionally, although we can use swaps market data to estimate the expected term premium, uncertainty around this expectation can blur the picture somewhat and make it difficult to identify specific target ranges, especially as we look farther out into the future.

Despite these challenges, we feel that options on Eurodollar futures can provide a complementary and more detailed view on market expectations than is provided by fed funds futures data alone.

Our approach is to use the Eurodollar futures option data to construct an entire probability distribution of the market's assessment of future LIBOR rates. The details of our approach can be found here. Importantly, our approach does not assume that the distribution will have a typical bell shape. Using a flexible approach allows multiple peaks with different heights that can change dynamically in response to market news.
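To give a concrete sense of how a full probability distribution can be backed out of option prices, here is a sketch of one standard textbook method: the Breeden-Litzenberger result, under which the risk-neutral density is the second derivative of the call price with respect to the strike (scaled by the discount factor). This is not the authors' simplex-regression approach, just a common alternative that likewise imposes no bell shape and can produce multiple peaks.

```python
import numpy as np

def risk_neutral_density(strikes, call_prices, discount=1.0):
    """Finite-difference estimate of the risk-neutral pdf on a strike grid.

    Uses the central second difference of call prices with respect to the
    strike, which approximates d^2 C / dK^2. Assumes an evenly spaced grid.
    Returns the interior strikes and the density estimated at them.
    """
    strikes = np.asarray(strikes, dtype=float)
    calls = np.asarray(call_prices, dtype=float)
    h = strikes[1] - strikes[0]
    # C(K-h) - 2*C(K) + C(K+h), divided by h^2
    second_diff = (calls[:-2] - 2 * calls[1:-1] + calls[2:]) / h**2
    return strikes[1:-1], second_diff / discount
```

Because nothing constrains the recovered density to be unimodal, a split market view shows up directly as two peaks, exactly the kind of shape discussed in the charts below.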

The results of this approach are illustrated in the following two charts for contracts expiring in September (left-hand chart) and December (right-hand chart) of this year for the day before and the day after Brexit. With these distributions in hand, we can calculate the implied probabilities of a rate change consistent with what you would get if you simply used fed funds futures. However, we think that specific features of the distributions help provide a richer story about how the market is processing incoming information.

Prior to the Brexit vote (depicted by the green curve), market participants were largely split in their assessment on a rate increase through September's FOMC meeting, as indicated by the two similarly sized modes, or peaks, of the distribution. Post-Brexit (depicted by the blue curve), most weight was given to no change, but with a non-negligible probability of a rate cut (the mode on the left between 0 and 25 basis points). For December's FOMC meeting, market participants shifted their views away from the likelihood of one additional increase in the fed funds target toward the possibility that the FOMC leaves rates where they are currently.

The market turmoil immediately following the vote subsided somewhat over the subsequent days. The next two charts indicate that by July 7, market participants seem to have backed away from the assessment that a rate cut may occur this year, evidenced by the disappearance of the mode between 0 and 25 basis points (shown by the green curve). And following the release of the June jobs report from the U.S. Bureau of Labor Statistics on July 8, market participants increased their assessment of the likelihood of a rate hike by year end, though not by much (see the blue curve). However, the labor report was, by itself, not enough to shift the market view that the fed funds target is unlikely to change over the near future.

One other feature of our approach is that comparing the heights of the modes across contracts allows us to assess the market's relative certainty of particular outcomes. For instance, though the market continues to put the highest weight on "no move" for both September and December, we can see that the market is much less certain regarding what will happen by December relative to September.

The greater range of possible rates for December suggests that there is still considerable market uncertainty about the path of rates six months out and farther. And, as we saw with the labor report release, incoming data can move these distributions around as market participants assess the impact on future FOMC deliberations.



July 18, 2016 in Europe, Interest Rates, Monetary Policy | Permalink

Comments

Could you share more details about the methodology. The Simplex Regression paper linked under "The details of our approach can be found here" above is too general. Thank you.

Posted by: Hong Le | July 21, 2016 at 11:31 PM


June 16, 2016


Experts Debate Policy Options for China's Transition

After nearly three decades of rapid economic growth, China today faces the challenge of economic rebalancing against the backdrop of slow and uncertain global growth. Although investment and exports have been a motor for growth, China is increasingly experiencing structural issues: widening inequality, overcapacity as a consequence of policy distortions, unsustainable environmental costs, volatile financial markets, and rising systemic risk.

On April 28–29, I attended the First Research Workshop on China's Economy, organized jointly by the International Monetary Fund (IMF) and the Atlanta Fed. The workshop, held at the IMF's headquarters in Washington DC, explored a series of questions that have emerged as China shifts toward a new growth model. Is this the end of the growth miracle? Will the Chinese renminbi one day be as important as the U.S. dollar? Should the rapidly increasing shadow banking activity in China be a source of concern? How worrisome is the rapid rise in China's housing prices?

Panelists shared their views on these and other issues facing the world's second-largest economy (or largest, if measured on a purchasing-power-parity basis). Plans are under way for a second workshop to be held in 2017.

The following is a nice summary of the research discussed at the workshop. It was originally published in the IMF Survey Magazine, and was written by Hui He, IMF Institute for Capacity Development, and Nan Li, IMF Research Department. Thanks to the IMF for allowing me to repost it here.

Is China's economic growth sustainable?
Understanding the source of China's tremendous growth was a recurring theme at the workshop. "China's economy combines enormous dynamism with huge distortions," observed Loren Brandt (University of Toronto). Brandt described his research based on China's firm-level data and emphasized that firm dynamics (entry and exit), especially firm entry, have been the main source of the productivity growth in the manufacturing sector.

Echoing Brandt's message, Kjetil Storesletten (University of Oslo) discussed regional growth disparities and showed that barriers preventing firms from entering an industry account for most of the disparities. Such barriers are more severe for privately owned firms in regions in which state-owned enterprises (SOE) dominate, he said.

In his keynote speech, Nicholas Lardy (Peterson Institute for International Economics) offered an upbeat view on China's transition to a new growth model, one in which the service sector plays a larger role than manufacturing. The bright side of the service sector, he noted, is its continued strong productivity growth. The development of financial deepening and the stronger social safety net are contributing to increased consumption, which helps to rebalance the economy.

However, he emphasized, SOE reforms remain critical as the service sector cannot provide a silver bullet for a successful transition.

Central bank's policy decisions
Several participants tried to discern how the People's Bank of China (PBC) conducts monetary policy. Tao Zha (of the Atlanta Fed's Center for Quantitative Economic Research and Emory University) found that the PBC reacts sharply when the gross domestic product's growth rate falls below its target, increasing the money supply by 11.5 percentage points for every 1 percentage point shortfall.
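That estimated reaction can be written as a simple rule. The sketch below is a stylized illustration of the mechanism described above, not Zha's actual model specification; the baseline money growth rate and GDP target are made-up numbers for the example.

```python
# Stylized PBC reaction rule: money growth responds to shortfalls of GDP
# growth below target, with the 11.5 response coefficient quoted above.

def pbc_money_growth(baseline_money_growth, gdp_growth, gdp_target,
                     response=11.5):
    """Money supply growth (percent): add `response` percentage points for
    each percentage point that GDP growth falls short of its target."""
    shortfall = max(gdp_target - gdp_growth, 0.0)
    return baseline_money_growth + response * shortfall

# A 1-point shortfall (6% growth against a 7% target) lifts money growth
# by 11.5 percentage points.
print(pbc_money_growth(13.0, 6.0, 7.0))  # 24.5
```

The asymmetry in the rule (no response when growth exceeds target) matches the finding that the PBC "reacts sharply when the growth rate falls below its target."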

Mark Spiegel (Center for Pacific Basin Studies) discussed the trade-offs involved in Chinese monetary policy—for example, controlling the exchange rate versus maintaining inflation stability. He also argued that the heavy use of reserve requirements on banks as a monetary policy tool might have the unintended consequence of reallocating capital from SOEs to more efficient privately owned firms, which could offset the resource misallocation caused by the easy credit banks granted to SOEs in the high-growth years.

Renminbi versus the dollar
Eswar Prasad (Cornell University and Brookings Institution) argued that China's capital account will become more open and the renminbi will be used more widely to denominate and settle cross-border transactions. But he also noted that legal and institutional constraints in China were likely to prevent the renminbi from serving as a safe-haven currency as the U.S. dollar does today.

Moreover, he said, the current sequencing of liberalization initiatives—that is, removal of capital account restrictions before appropriate financial market supervision and regulation and exchange rate reform—poses financial stability risks.

Shadow banking and the housing market
Recently, volatile Chinese financial markets and continued housing price appreciation have raised serious financial stability concerns.

Michael Song (Chinese University of Hong Kong) argued that rapidly rising shadow banking activity is an unintended consequence of financial regulation. Restrictions on deposit rates and loan-to-deposit ratios have led to the issuance by banks of "wealth management products" to attract savers with higher returns. Because these restrictions had a greater impact on small banks, the big state banks had more room to undercut the smaller banks by offering wealth management products with higher returns and then restricting liquidity to them in interbank markets, ultimately making the banking system more prone to liquidity distress and runs.

Hanming Fang (University of Pennsylvania) found that, except in big cities such as Beijing and Shanghai, housing prices in China's urban areas between 2003 and 2013 more or less tracked rising household incomes. In his view, the Chinese housing boom is thus unlikely to trigger an imminent financial crisis. He warned, however, that housing prices may fall rapidly if economic growth slows dramatically, and that such a development could, in turn, amplify the economic downturn.

Rising wage inequality
China's rapid growth over the past two decades has been accompanied by rising wage inequality, an issue highlighted by two conference participants. Dennis Yang (University of Virginia) explored the distributional effects of trade openness in China and found a significant impact on wage inequality of China's accession to the World Trade Organization in 2001.

Chong-En Bai (Tsinghua University) argued that the decline after 2008 of the skill premium—that is, the ratio of the skilled labor wage to the unskilled labor wage—can be explained by the Chinese government's targeted credit extension to the unskilled labor-intensive infrastructure sector (as part of the fiscal stimulus following the global financial crisis). Such distortionary policies might have short-run growth benefits but could lead to long-run welfare losses, he said, especially when rural-to-urban migration has run its course.

June 16, 2016 in Asia, Economic Growth and Development, Labor Markets, Monetary Policy, Real Estate | Permalink


June 06, 2016


After the Conference, Another Look at Liquidity

When it comes to assessing the impact of central bank asset purchase programs (often called quantitative easing or QE), economists tend to focus their attention on the potential effects on the real economy and inflation. After all, the Federal Reserve's dual mandate for monetary policy is price stability and full employment. But there is another aspect of QE that may also be quite important in assessing its usefulness as a policy tool: the potential effect of asset purchases on financial markets through the collateral channel.

Asset purchase programs involve central bank purchases of large quantities of high-quality, highly liquid assets. Postcrisis, the Fed has purchased more than $3 trillion of U.S. Treasury securities and agency mortgage-backed securities, the European Central Bank (ECB) has purchased roughly 727 billion euros' worth of public-sector bonds (issued by central governments and agencies), and the Bank of Japan is maintaining an annual purchase target of 80 trillion yen. These bonds are not merely assets held by investors to realize a return; they are also securities highly valued for their use as collateral in financial transactions. The Atlanta Fed's 21st annual Financial Markets Conference explored the potential consequences of these asset purchase programs in the context of financial market liquidity.

The collateral channel effect focuses on the role that these low-risk securities play in the plumbing of U.S. financial markets. Financial firms fund a large fraction of their securities holdings in the repurchase (or repo) markets. Repurchase agreements are legally structured as the sale of a security with a promise to repurchase the security at a fixed price at a given point in the future. The economics of this transaction are essentially similar to those of a collateralized loan.

The sold and repurchased securities are often termed "pledged collateral." In these transactions, which are typically overnight, the lender will ordinarily lend cash equal to only a fraction of the securities value, with the remaining unfunded part called the "haircut." The size of the haircut is inversely related to the safety and liquidity of the security, with Treasury securities requiring the smallest haircuts. When the securities are repurchased the following day, the borrower will pay back the initial cash plus an additional amount known as the repo rate. The repo rate is essentially an overnight interest rate paid on a collateralized loan.
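The cash flows just described reduce to two lines of arithmetic. The numbers below are illustrative assumptions, not market quotes.

```python
# Minimal sketch of overnight repo cash flows: the lender advances the
# collateral value less the haircut, and the borrower repays that principal
# plus interest at the repo rate (actual/360, the usual money-market
# convention).

def repo_cash_flows(collateral_value, haircut, repo_rate_annual, days=1):
    """Return (cash lent today, cash repaid at maturity) for a repo."""
    cash_lent = collateral_value * (1 - haircut)
    interest = cash_lent * repo_rate_annual * days / 360
    return cash_lent, cash_lent + interest

# $10 million of Treasuries, a 2% haircut, a 0.5% repo rate, overnight:
lent, repaid = repo_cash_flows(10_000_000, 0.02, 0.005)
print(round(lent), round(repaid))  # 9800000 9800136
```

Note how small the overnight interest is relative to the haircut: the haircut, not the repo rate, is the lender's main protection against a fall in the collateral's value.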

Central bank purchases of Treasury securities may have a multiplicative effect on the potential efficiency of the repo market because these securities are often used in a chain of transactions before reaching a final holder for the evening. Here's a great diagram presented by Phil Prince of Pine River Capital Management illustrating the role that bonds and U.S. Treasuries play in facilitating a variety of transactions. In this example, the UST (U.S. Treasury) securities are first used as collateral in an exchange between the UST securities lender and the globally systemically important financial institution (GSIFI bank/broker dealer), then between the GSIFI bank and the cash provider, a money market mutual fund (MMMF), corporation, or sovereign wealth fund (SWF). The reuse of the UST collateral reduces the funding cost of the GSIFI bank and, hence, the cost to the levered investor/hedge fund who is trying to exploit discrepancies in the pricing of a corporate bond and stock.
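The multiplicative effect of such a chain can be made concrete with a stylized calculation (an illustrative assumption, not a figure from the presentation): if each intermediary re-pledges the collateral it receives after applying a haircut h, the cash raised at each link shrinks geometrically, and the total transaction volume one bond can support is bounded by 1/h times its value.

```python
# Stylized collateral reuse chain: each link re-pledges the collateral it
# received, shrunk by the haircut, so reuse amounts form a geometric series.

def reuse_chain(value, haircut, links):
    """Cash raised at each link of a re-pledging chain."""
    amounts = []
    for _ in range(links):
        value *= (1 - haircut)  # each reuse is reduced by the haircut
        amounts.append(value)
    return amounts

# $100 of Treasuries with a 2% haircut, re-pledged through three links:
chain = reuse_chain(100.0, 0.02, 3)
print([round(x, 2) for x in chain])  # [98.0, 96.04, 94.12]
```

This is why central bank purchases can matter beyond their face value: removing one bond from the market removes the entire chain of transactions it would otherwise have supported.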

Just how important or large is this pool of reusable collateral? Manmohan Singh of the International Monetary Fund presented the following charts, depicting the pledged collateral at major U.S. and European financial institutions that can be reused in other transactions.

So how do central bank purchases of high-quality, liquid assets affect the repo market—and why should macroeconomists care? In his presentation, Marvin Goodfriend of Carnegie Mellon University concluded that central bank asset purchases, which he terms "pure monetary policy," lower short-term interest rates (especially bank-to-bank lending) but increase the cost of funding illiquid assets through the repo market. And Singh noted that repo rates are an important part of the constellation of short-term interest rates and directly link overnight markets with the longer-term collateral being pledged. Thus, the interaction between a central bank's interest-rate policy and its balance sheet policy is an important aspect of the transmission of monetary policy to longer-term interest rates and real economic activity.

Ulrich Bindseil, director of general market operations at the ECB, discussed a variety of ways in which central bank actions may affect, or be affected by, bond market liquidity. One way that central banks may mitigate any adverse impact on market liquidity is through their securities lending programs, according to Bindseil. Central banks use such programs to lend particular bonds back out to the market to "provide a secondary and temporary source of securities to the financing market...to promote smooth clearing of Treasury and Agency securities."

On June 2, for example, the New York Fed lent $17.8 billion of UST securities from the Fed's portfolio. These operations are structured as collateral swaps—dealers pledge other U.S. Treasury bonds as collateral with the Fed. During the financial crisis, the Federal Reserve used an expanded version of its securities lending program called the Term Securities Lending Facility to allow firms to replace lower-quality collateral that was difficult to use in repo transactions with Treasury securities.

Finally, the Fed currently releases some bonds to the market each day in return for cash, through its overnight reverse repo operations, a supplementary facility used to support control of the federal funds rate as the Federal Open Market Committee proceeds with normalization. However, this release has an important limitation: these operations are conducted in the triparty repo market, and the bonds released through them can be reused only within that market. In contrast, if the Fed were to sell its U.S. Treasuries, the securities could be used not only in the triparty repo market but also as collateral in other transactions, including those in the bilateral repo market (you can read more on these markets here). As long as central bank portfolios remain large and continue to grow, as in Europe and Japan, policymakers are integrally linked to the financial plumbing at its most basic level.

To see a video of the full discussion of these issues as well as other conference presentations on bond market liquidity, market infrastructure, and the management of liquidity within financial institutions, please visit Getting a Grip on Liquidity: Markets, Institutions, and Central Banks. My colleague Larry Wall's conference takeaways on the elusive definition of liquidity, along with the impact of innovation and regulation on liquidity, are here.

June 6, 2016 in Banking, Financial System, Interest Rates, Monetary Policy | Permalink


November 04, 2014


Data Dependence and Liftoff in the Federal Funds Rate

When asked "at which upcoming meeting do you think the FOMC [Federal Open Market Committee] will FIRST HIKE its target for the federal funds rate," 46 percent of the October Blue Chip Financial Forecasts panelists predicted that "liftoff" would occur at the June 2015 meeting, and 83 percent chose liftoff at one of the four scheduled meetings in the second and third quarters of next year.

Of course, this result does not imply that there is an 83 percent chance of liftoff occurring in the middle two quarters of next year. Respondents to the New York Fed's most recent Primary Dealer Survey put the probability of liftoff in the middle two quarters of 2015 at only 51 percent. This considerable uncertainty about the timing of liftoff is consistent with the "data-dependence principle" that Chair Yellen mentioned at her September 17 press conference. The idea of data dependence is captured in this excerpt from the statement following the October 28–29 FOMC meeting:

[I]f incoming information indicates faster progress toward the Committee's employment and inflation objectives than the Committee now expects, then increases in the target range for the federal funds rate are likely to occur sooner than currently anticipated. Conversely, if progress proves slower than expected, then increases in the target range are likely to occur later than currently anticipated.

If the timing of liftoff is indeed data dependent, a natural extension is to gauge the likely "liftoff reaction function." In the current zero lower bound (ZLB) environment, researchers at the University of North Carolina and the St. Louis Fed have analyzed monetary policy using shadow fed funds rates, shown in figure 1 below, estimated by Wu and Xia (2014) and Leo Krippner.

Unlike the standard fed funds rate, a shadow rate can be negative at the ZLB. The researchers found that the shadow rates, particularly Krippner's, act as fairly good proxies for monetary policy in the post-2008 ZLB period. Krippner also produces an expected time to liftoff, estimated from his model, shown in figure 1 above. His model's liftoff of December 2015 is six months after the most likely liftoff month identified by the aforementioned Blue Chip survey.

I included Krippner's shadow rate (spliced with the standard fed funds rate prior to December 2008) in a monthly Bayesian vector autoregression alongside the six other variables shown in figure 2 below.

The model assumes that the Fed cannot see contemporaneous values of the variables when setting the spliced policy—that is, the fed funds/shadow rate. This assumption is plausible given the approximately one-month lag in economic release dates. The baseline path assumes (and mechanically generates) liftoff in June 2015 with outcomes for the other variables, shown by the black lines, that roughly coincide with professional forecasts.

The alternative scenarios span the range of eight possible outcomes for low inflation/baseline inflation/high inflation and low growth/baseline growth/high growth in the figures above. For example, in figure 2 above, the high growth/low inflation scenario coincides with the green lines in the top three charts and the red lines in the bottom three charts. Forecasts for the spliced policy rate are conditional on the various growth/inflation scenarios, and "liftoff" in each scenario occurs when the spliced policy rate rises above the midpoint of the current target range for the funds rate (12.5 basis points).
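The liftoff definition used above reduces to a simple threshold search over the forecast path. The sketch below uses made-up illustrative numbers, not the model's actual conditional forecasts.

```python
# Liftoff as defined above: the first month in which the spliced policy
# rate rises above the midpoint of the current target range (12.5 bp).

LIFTOFF_THRESHOLD = 0.125  # percent

def liftoff_month(months, spliced_rate):
    """First month whose forecast spliced policy rate exceeds the threshold."""
    for month, rate in zip(months, spliced_rate):
        if rate > LIFTOFF_THRESHOLD:
            return month
    return None  # liftoff lies beyond the forecast horizon

# Illustrative path: shadow rate is negative at the ZLB, then rises.
months = ["2015-01", "2015-02", "2015-03", "2015-04", "2015-05", "2015-06"]
path   = [-0.10,     -0.02,      0.05,      0.10,      0.12,      0.30]
print(liftoff_month(months, path))  # 2015-06
```

Returning `None` when the path never crosses the threshold corresponds to the "liftoff occurs beyond December 2015" outcome in the low growth/low inflation scenario.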

The outcomes are shown in figure 3 below. At one extreme—high growth/high inflation—liftoff occurs in March 2015. At the other—low growth/low inflation—liftoff occurs beyond December 2015.

One should not interpret these projections too literally; the model uses a much narrower set of variables than the FOMC considers. Nonetheless, these scenarios illustrate that the model's forecasted liftoffs in the spliced policy rate are indeed consistent with the data-dependence principle.

By Pat Higgins, senior economist in the Atlanta Fed's research department

November 4, 2014 in Economics, Employment, Federal Reserve and Monetary Policy, Forecasts, Inflation, Monetary Policy | Permalink

