August 28, 2008
Monitoring the inflation bees, striving not to get stung
As pure blogging fodder, Willem Buiter’s Jackson Hole “advice” to the Fed is a gift that keeps on giving. From Buiter’s paper:
“It has not always been clear whether the Fed actually targets core inflation or whether it targets headline inflation in the medium term and treats core inflation as the best predictor of medium-term inflation.”
As luck would have it, our boss (Federal Reserve Bank of Atlanta President Dennis Lockhart) had a few words to say on exactly that topic yesterday:
“Attempts to measure the aggregate rate of price change—no matter how sophisticated—remain imperfect. As a result, when it comes to measuring inflation, judgment is needed to distinguish persistent price movements that underlie overall inflation from the relative price adjustments. Separating the inflation signal from noise involves much uncertainty—especially when making decisions in real time. Discerning accurately the underlying trend is difficult.”
The difficulty of precisely “separating the inflation signal from the noise” is not a new problem. In fact, it can be traced at least as far back as the development of index numbers to measure economic aggregates—and aggregate inflation in particular—by the famous economist Irving Fisher. Fisher wrestled with the problem of separating the general movement in prices from relative price disturbances:
“It would be as idle to expect a uniform movement in prices as to expect a uniform movement for bees in a swarm. On the other hand, it would be as idle to deny the existence of a general movement of prices ... as to deny a general movement of a swarm of bees because the individual bees have different movements.”
The distinction between the direction of the swarm and the movements of the individual bees is an important one, and it leads directly to discussions of measures of “core” inflation. The conceptual issues were nicely articulated in a recent article by Dallas Fed economist Mark Wynne, and they go something like this: Suppose we thought of the percent changes in the prices of individual goods and services between two periods as containing a common component (core) and price changes that are unique to the supply and demand conditions in a particular product’s market. The object of our desire (the honey, if you like) is the level and direction of the common component. The problem is how to measure it.
In his commentary on the “will-o’-the-wisp of ‘core’ inflation,” Professor Buiter decides on a selective concept of core:
“The only measure of core inflation I shall discuss is the one used by the Fed, that is the inflation rate of the standard headline CPI or PCE deflator excluding food and energy prices. Other approaches to measuring core inflation… will not be considered.”
We added the emphasis because we want to contrast that comment with these words from President Lockhart’s speech:
“It is essential for those of us who have responsibility for responding to these trends to use a wide variety of core measures and inflation projections to make the most informed judgment we can.”
The variety of core measures is in fact wide. Some are familiar—the traditional statistics that exclude food and energy prices, the Cleveland Fed median CPI, and the Dallas Fed trimmed-mean PCE are examples. Some important, but less familiar, measures focus on persistence over time in individual price changes and exploit correlation over time in the common and product-specific components; work by Michael Bryan and Stephen Cecchetti and by Domenico Giannone and Troy Matheson provides examples. An alternative approach defines core inflation by decomposing headline inflation measures into permanent and transitory components, identifying core inflation with the permanent component; examples include research by Jim Nason and by James Stock and Mark Watson.
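For the curious, the mechanics of a few of these measures can be sketched in a handful of lines. The component weights and price changes below are made up purely for illustration (they are not actual CPI data), and the 16 percent trim is just one plausible choice:

```python
# Sketch: three "core" measures applied to hypothetical component price changes.
# All weights and price changes below are illustrative, not actual CPI data.

components = [
    # (name, expenditure weight, monthly % price change)
    ("food",      0.15,  1.2),
    ("energy",    0.10,  4.0),
    ("shelter",   0.30,  0.3),
    ("apparel",   0.05, -0.2),
    ("medical",   0.10,  0.4),
    ("transport", 0.15,  0.6),
    ("other",     0.15,  0.2),
]

def headline(comps):
    # Weighted average price change across all components.
    return sum(w * p for _, w, p in comps) / sum(w for _, w, _ in comps)

def ex_food_energy(comps):
    # The traditional "core": simply drop food and energy.
    return headline([c for c in comps if c[0] not in ("food", "energy")])

def weighted_median(comps):
    # Price change of the component sitting at the 50th percentile of the
    # weight distribution (the Cleveland Fed median-CPI idea).
    total, cum = sum(w for _, w, _ in comps), 0.0
    for _, w, p in sorted(comps, key=lambda c: c[2]):
        cum += w
        if cum >= total / 2 - 1e-9:  # small tolerance for float rounding
            return p

def trimmed_mean(comps, trim=0.16):
    # Drop the top and bottom `trim` share of the weight distribution,
    # then take the weighted mean of what remains (Dallas Fed style).
    total = sum(w for _, w, _ in comps)
    lo, hi, cum, kept = trim * total, (1 - trim) * total, 0.0, []
    for name, w, p in sorted(comps, key=lambda c: c[2]):
        w_in = max(0.0, min(cum + w, hi) - max(cum, lo))
        if w_in > 0:
            kept.append((name, w_in, p))
        cum += w
    return headline(kept)

print(f"headline:        {headline(components):.2f}%")
print(f"ex food/energy:  {ex_food_energy(components):.2f}%")
print(f"weighted median: {weighted_median(components):.2f}%")
print(f"16% trimmed:     {trimmed_mean(components):.2f}%")
```

With these invented numbers the exclusion, median, and trimmed-mean measures all strip out most of the energy spike but disagree with one another—which is exactly why looking at a variety of them is useful.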
Michael Kiley recently extended the Stock and Watson approach and found that the common trend in inflation during the 1970s and early 1980s was attributable to persistent movements in both energy/food and nonenergy/food prices. More recently, that trend has been less influenced by food and energy inflation.
Maybe that isn’t all good news. It is noteworthy that the traditional measures of underlying trend inflation have moved higher over the past year or so, some more than others.
As President Lockhart noted at several points in his remarks yesterday:
“No matter how you measure it, the aggregate inflation we are experiencing in the United States at the moment is uncomfortably high…
“Measures of core inflation in the United States suggest that overall price pressures have been on the rise, perhaps because higher commodities costs have begun to affect prices paid by consumers and businesses across a broader range of other goods and services…
“I'm acutely aware that the current FOMC has inherited the inflation policy credibility that was hard won by our predecessors. One thing that has impressed me since taking my position last year is the seriousness with which my colleagues approach the duty to protect that legacy. I am confident that the Federal Reserve's institutional commitment to maintaining low and stable inflation will prevail.”
Professor Buiter raised several theoretical challenges to the core inflation concept that deserve discussion. It really is time, however, to lay to rest the straw-man assertion that central bankers are diverted by a pursuit of single and overly simplistic notions of core inflation.
August 26, 2008
Deep questions from Jackson Hole
If one were to judge importance by press attention, the key events of this past weekend’s annual Jackson Hole economic symposium hosted by the Federal Reserve Bank of Kansas City would be the bookend contributions of Fed Chairman Ben Bernanke’s opening address and Willem Buiter’s 141 pages’ worth of Fed criticism. Understandable, in the former case for obvious reasons and in the latter for the grand theoretical pleasure of Buiter’s (shall we say) forthright critique and discussant Alan Blinder’s equally forthright (and witty) defense of the Fed. (The session’s tone is nicely captured in the reports of Sudeep Reddy from the Wall Street Journal and Bloomberg’s John Fraher and Scott Lanman.) [Broken link to the Bloomberg article fixed.]
Though the assertions of Professor Buiter (of the London School of Economics and Political Science) were certainly provocative, for me the truly thought-provoking aspect of the symposium was the collective effort to address some deep questions that still seek answers. There are a lot of them, but here are a few at the top of my list.
How do you know “loose” monetary policy when you see it?
Charles Calomiris—whose paper received attention at Free exchange, and is summarized by the author himself at Vox—says it is the real (or inflation-expectations adjusted) federal funds rate. I suspect that conforms to the definition favored by many, but the significance of defining the stance of monetary policy in this way was hammered home by Tobias Adrian and Hyun Song Shin:
…some key tenets of current central bank thinking [have] emphasized the importance of managing expectations of future short rates, rather than the current level of the target rate per se. In contrast, our results suggest that the target rate itself matters for the real economy through its role in the supply of credit and funding conditions in the capital market.
There is a fair amount at stake in understanding which of these views is correct:
Are longer-term interest rates relatively high while the real federal funds rate is so low because policy is quite loose and inflation expectations are rising? Or are the elevated long-term rates a sign that policy is really restrictive? Or is the picture just a symptom of a combination of currently weak returns to capital (keeping short-term rates low), the prospect of better times ahead (and higher long-run returns to capital), and a return to more realistic pricing of risk, all of which would be consistent with the proposition that current policy is neither too easy nor too tight? The question is not academic.
Is there crisis after subprime?
A primary component of Calomiris’ thesis is (in the words of his Vox piece) “loose monetary policy, which generated a global saving glut.” That global saving glut connection would come up again, most prominently in MIT professor Bengt Holmstrom’s discussion of the contribution by Gary Gorton (itself an essential read if you have any questions at all about the way subprime markets work, or what they have to do with SIVs, CDOs, and ABXs). The starting point of Holmstrom’s argument—a variation on earlier themes from Ben Bernanke (for the general audience) and Ricardo Caballero, Emmanuel Farhi, and Pierre-Olivier Gourinchas (for the economists in the crowd)—goes something like this: Surplus saving in emerging economies has driven up the demand for liquid assets. Liquidity being a specialty of the United States in particular, the excess demand drives down interest rates here, stimulates spending, and expands deficits on the country’s current account.
The story, I believe, goes back to the late 1990s. One important difference between then and now is that the liquid assets most in demand at the close of the past decade were highly concentrated in long-term Treasury securities. Another is the fact that the related private-asset appreciation in the late 1990s was manifested in equity markets. After the tech-stock bust, however, the fundamental global imbalances remained and found a new home in debt created by the subprime housing market. As Professor Holmstrom and others noted, collateralized debt markets, based as they are on leverage and low levels of information flows, are much more complicated animals than equity markets.
The question that remains is obvious. What is there to stop the next crisis if global imbalances persist? And if they do, is “better” monetary policy—whatever that might be—a sufficient condition for avoiding future problems?
Which leads me to…
Is there a better way to prepare for future bouts of financial market turmoil?
One answer—shared by many at the symposium—is that we can do so imperfectly at best, and that ultimately governments or markets or both just have to clean up the mess afterward. That approach feels a bit costly at the moment, so it seems a prudent thing to explore proactive measures that may at least mitigate the impact when problems arise. On this front, the biggest buzz of the symposium was probably generated by Anil Kashyap, Jeremy Stein, and Raghuram Rajan, who proposed the development of “insurance policies” that would infuse the banking sector with fresh capital when banks need it most. I’ve run on too long now, so for further elaboration I will refer you again to Sudeep Reddy – or to the paper itself.
A related reading PS: You will find more on the Chairman’s speech at Free exchange, at Calculated Risk (here and here), and at the William J. Polley blog. On Professor Buiter’s session you will find no shortage of commentary. The Daily Reckoning, naked capitalism, Economic Policy Journal, and Equity Check provide what I am sure is just a sampling.
August 21, 2008
The “What’s Fair” contest
At Café Hayek, George Mason’s Russell Roberts opens up a brand new “Inequality Chart Contest.” The chart in question is based on work by Thomas Piketty (professor, Paris School of Economics) and Emmanuel Saez (professor, University of California Berkeley), the essence of which is that the rich have gotten richer and everyone else not so much. (You can find a link to the Piketty-Saez paper, as well as updated data and executive summaries, on Emmanuel Saez’s homepage. Russell links to more information from the Center on Budget and Policy Priorities.)
Here’s the picture…
… and the contest is to construct “ONE sentence explaining ONE thing that is wrong with concluding that these numbers are evidence that the U.S. economy has become more tilted toward the rich at the expense of the poor.”
In the spirit of prompting reflection on issues of inequality and fairness, I invite you to think about the following three pictures, generated from Internal Revenue Service (IRS) tax data through 2006:
Let’s focus on the top 1 percent of income earners (by IRS-defined Adjusted Gross Income, or AGI). If you look at the average federal tax rate paid by this group—that is, taxes paid divided by AGI—it fell substantially from 2000 to 2006. The average tax rates for other income groups fell as well, but not as dramatically.
If you instead prefer to look at taxes paid, the share the top 1 percent forked over to the federal government rose from 37.4 percent in 2000 to 39.9 percent in 2006. The share paid by the next highest 4 percent rose only slightly over this period, and the share paid by all other groups actually fell or stayed roughly the same.
On the other hand, concentrating on each group’s share of taxes paid relative to its share of income earned would lead you to conclude that not much changed between 2000 and 2006.
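The three ways of slicing the data can be made concrete with a little arithmetic. The AGI and tax aggregates below are made up for illustration (they are not the actual IRS figures behind the pictures above):

```python
# Sketch: the three "pictures" discussed above, computed from hypothetical
# (made-up) AGI and tax aggregates by income group.

groups = {
    # group: (share of total AGI, share of total taxes paid)
    "top 1%":     (0.22, 0.399),
    "next 4%":    (0.15, 0.200),
    "bottom 95%": (0.63, 0.401),
}

total_agi, total_tax = 8_000, 1_000  # $billions, illustrative only

for name, (agi_share, tax_share) in groups.items():
    agi = agi_share * total_agi
    tax = tax_share * total_tax
    avg_rate = tax / agi              # picture 1: average tax rate paid
    ratio = tax_share / agi_share     # picture 3: tax share vs. income share
    print(f"{name:10s}  avg rate {avg_rate:5.1%}  "
          f"tax share {tax_share:5.1%}  tax/income ratio {ratio:4.2f}")
```

Each measure answers a different question, which is precisely why they can move in different directions over the same period.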
So here’s the contest: Explain in one sentence which one of those pictures tells us whether the federal income-tax system has become more or less “fair.”
August 19, 2008
Did the stimulus package actually stimulate?
One of the big questions of the policy season is surely “Did the $100 billion of tax rebates distributed to households in May, June, and July actually work?” “Work” in this case means “stimulate consumer spending.” You may want to sit down before I tell you this, but so far economists disagree. In one corner you have Christian Broda at the University of Chicago and Jonathan Parker at Northwestern University:
The Economic Stimulus Act of 2008 was aimed at increasing disposable income temporarily through tax rebates in the hope this would stimulate spending and end or at least mitigate the severity of a U.S. economic slowdown. We find that to a significant extent they succeeded. The stimulus payments are initially being spent at significant rates. These rates are slightly higher than those observed in 2001 when fiscal policy has been credited with helping end the 2001 recession.
In the other corner, Martin Feldstein:
Although press stories emphasizing that the rebates induced additional consumer spending were technically correct, they missed the important point that the spending rise was very small in comparison to the size of the tax rebates.
A recent, widely reported academic study by Christian Broda and Jonathan Parker showing that the rebates led to increased spending on nondurable items (like food and drugs) does not contradict the implication of the more comprehensive data—on national retail sales and total consumer spending—that the induced rise in consumer outlays was small relative to the size of the rebate.
Oh boy. Let’s back up a step. Before the fact, here is what people said they were planning to do with their rebates (by at least one report):
So what did the people receiving the rebates do with them? Well, if we could answer that one, it would be easy to resolve the Feldstein vs. Broda-Parker dispute. It does seem undeniable that a pretty good piece of those rebates was saved, at least in the first two months.
Could those elevated saving rates recorded in May and June reflect an outbreak of thriftiness? The real answer is “who knows?” but we can do a little back-of-the-envelope arithmetic to put things in perspective. Ignoring the Katrina-related dip in August 2005, the average saving rate from the beginning of 2005 through this past April was about 0.61 percent.
So, here’s the question: Assuming that consumers saved out of nonrebate income at the rate of 0.61 percent, how much would they have had to save out of the sums distributed in May and June to raise the overall saving rates to the observed values of 4.9 and 2.5 percent?
If you do the annualized calculation for the $43 billion of rebates in May and $28 billion in June, you get some pretty striking numbers: an implied saving rate out of the rebates of somewhere in the neighborhood of 83 percent in May and 63 percent in June.
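The arithmetic behind those implied rates can be sketched as follows. The disposable personal income figures here are rough assumptions, not official BEA numbers, so the output lands only in the neighborhood of the figures cited above rather than matching them exactly:

```python
# Back-of-the-envelope implied saving rate out of the rebates.
# The dpi values below are rough assumptions for annualized disposable
# personal income, not official BEA figures.

BASELINE_RATE = 0.0061  # assumed saving rate out of nonrebate income

def implied_rebate_saving_rate(observed_rate, rebate_annualized, dpi):
    """Solve  observed_rate * dpi ==
       BASELINE_RATE * (dpi - rebate) + r * rebate   for r."""
    nonrebate_saving = BASELINE_RATE * (dpi - rebate_annualized)
    total_saving = observed_rate * dpi
    return (total_saving - nonrebate_saving) / rebate_annualized

# Monthly rebates annualized (x12), as in the national accounts;
# observed overall saving rates of 4.9% (May) and 2.5% (June).
may = implied_rebate_saving_rate(0.049, 12 * 43, dpi=10_900)
june = implied_rebate_saving_rate(0.025, 12 * 28, dpi=10_700)
print(f"implied saving rate out of rebates: May {may:.0%}, June {june:.0%}")
```

The point survives any reasonable choice of income figures: under these assumptions, well over half of each month’s rebate appears to have been saved rather than spent.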
You can argue that there is a sense in which even these figures are understated. Durable goods purchases, for example, are theoretically a form of household saving, and the Broda-Parker survey respondents did indicate that about 20 percent of their rebates went toward the purchase of durables. If that is so, however, durable expenditures without the tax rebates would have been really low. Though expenditures on durables grew at an annualized rate of 5.8 percent in May—not bad—they shrank by 17.4 percent in June.
These back-of-the-envelope calculations are pretty rough, of course, but they are broadly consistent with evidence from the 2001 tax rebates. That evidence also suggests that about one-third of the rebates were spent in the quarter following their disbursement, so the spending effects of this year’s model may yet have legs.
On the other hand, even if the rebates do prop up consumer spending in the short run, that would hardly settle the debate about whether they were the best way to spend $100 billion. But that’s a different debate for a different time.
Listed below are links to blogs that reference Did the stimulus package actually stimulate?:
» Failure of the Stimulas Package from Newshoggers.com
By Fester: The point of a short term stimulus package is to encourage people to spend money. The reason is to kick-start demand because the problem is primarily seen as a short term crisis of confidence. An effective stimulus package [Read More]
Tracked on Aug 21, 2008 7:44:56 AM
August 14, 2008
What the Fed did during macroblog's vacation
To state the very obvious, it has been quite an eventful twelve months since I last committed fingers to laptop. I might well have titled this post "Four Fed programs that did not exist one year ago." Over the four months from December to March, the Federal Reserve Board of Governors and the Federal Open Market Committee, or FOMC, introduced an alphabet soup of new lending programs to address acute stress in financial markets, some of which required the invocation of emergency powers based on "unusual and exigent circumstances."
I know that in some quarters—maybe the one where you reside—all this activity had a certain frenetic, whack-a-mole feel to it. But I think it appropriate to view the Fed's actions over this period as what I believe them to be: A measured and logical sequence of steps to address very specific liquidity distress in financial markets.
If I had to choose one picture to describe the crux of the "liquidity" problems to which I am referring it would be this one:
In effect, the OIS (overnight index swap) rate reflects the expected average of the rate banks charge one another for overnight loans over the term of the contract, while LIBOR (the London Interbank Offered Rate) is the rate charged for slightly longer-term (30- and 90-day) unsecured lending. The explosion in the spread between the two in August 2007 was the marker for the emergence of a severe disruption in the means by which lending institutions typically finance their ongoing operations.
A brief chronology, then:
August 17, 2007: The Board of Governors cuts the primary credit rate (or discount rate), the interest rate Federal Reserve Banks charge on direct loans made to banks.
September 18, 2007: The FOMC cuts its target for the federal funds rate, the first in a string of seven consecutive rate reductions.
December 12, 2007: The Board of Governors introduces the Term Auction Facility (or TAF), initially a mechanism for providing loans to banks for a period of 28 days (as opposed to the typical overnight maturity associated with standard primary credit loans). Last week, the Board announced the program would be extended to make loans available for a term of 84 days.
March 7, 2008: The FOMC authorizes the New York Fed to conduct open market operations using Term Repurchase Agreements. Like the TAF, the term repo program allowed the Fed the flexibility to conduct operations over periods of about a month rather than the overnight basis that is typical in more normal environments.
March 11, 2008: The FOMC approves the creation of the Term Securities Lending Facility (TSLF), which authorized swapping Treasury securities (over a period of 28 days) for "other securities, including federal agency debt, federal agency residential-mortgage-backed securities (MBS), and non-agency AAA/Aaa-rated private-label residential MBS."
March 16, 2008: The Board of Governors creates the Primary Dealer Credit Facility (PDCF), authorizing direct loans to broker-dealers who are authorized to engage in securities transactions with the Federal Reserve.
What do I want you to see? As I noted above, I see a progression of logically consistent steps that neither lurched to extreme solutions nor ignored the imperatives of the problem at hand:
- The first step was to invoke the usual tools of monetary policy (in the form of discount window lending and federal funds rate adjustments).
- Then it became obvious that injecting liquidity into overnight markets alone was not solving the problem of funding being unavailable for periods of time even as short as one to three months. The next step, then, was to lengthen the maturity of loans and asset exchanges in policy operations (in the form of the TAF and Term Repurchase Agreements). (An additional salutary effect of the TAF was apparently the lack of "stigma" that is thought to be attached to borrowing from the discount window.)
- From there, it became clear that Treasury securities were rapidly emerging as the only widely accepted form of collateral to support short-term borrowing and lending, a function that securities backed by real estate assets were simply unable to perform. Some relief to this problem was already inherent in the form of the broader-than-Treasuries collateral options in the TAF. Further relief was provided by the TSLF, which in effect implemented a swap of in-demand Treasury securities from the Federal Reserve's balance sheet for less liquid mortgage-backed assets.
- Finally, the potential systemic consequences of acute stress in the primary dealer network led us to the PDCF, in effect broadening the class of institutions to which the central bank would stand ready to infuse short-term liquidity.
Once again, in my view there is a methodical progression to the whole process that is too commonly overlooked: Start with the standard tools (the discount rate and federal funds rate), move on to a lengthening of the maturity in the term of those standard tools (TAF and Term Repurchase Agreements), on to a broadening of the collateral used to support monetary policy operations (TSLF), and finally expanding the class of institutions to which the Federal Reserve will lend (PDCF).
It is not entirely obvious whether the LIBOR-OIS spreads pictured above will once again converge to the values that prevailed prior to August 2007, but I would argue that the still-elevated levels of these spreads imply we have a ways to go before financial markets are again fully functional. Though the lending programs put in place over the past year have not been, and could not be, a magic elixir for solving all financial market woes, I would take the bet that they are at least providing enough stability for the market to continue the painful process of healing itself. Getting to this point has not always been pretty in real time, and there is plenty of room for debate about the long-run costs and benefits of each step along the way. But given a little time for perspective, I believe we will find a certain beauty to it all.
August 12, 2008
macroblog returns
With this posting, I’m pleased to announce the return of macroblog, which has been on hiatus since I came to the Federal Reserve Bank of Atlanta as its research director in August of last year.
I originally launched macroblog in 2004 as an independent blog, but it will now be run through the Atlanta Fed on our Web site. Macroblog will feature commentary by me as well as other members of the Bank’s research department. The purpose of the blog is to help inform readers with commentary and observations on a variety of current economic topics, including monetary policy, macroeconomic developments, financial issues, and Southeast regional trends. I do need to emphasize that the views expressed in macroblog will not necessarily be those of the Atlanta Fed or the Federal Reserve System – feel free to quote me on that.
A few logistics: Postings to macroblog will be made on Tuesdays and Thursdays. Though we will continue to post articles during the Federal Open Market Committee (FOMC) blackout period (which runs from the week before the FOMC meeting until the Friday after), we will not be commenting on monetary policy during that period. In addition, I will not personally post content during the blackout period.
We view macroblog as a venue for economic discussion, and to facilitate that discussion we will provide the opportunity for you to post comments. However, please be aware that you will need to follow standards that we have established for the blog and that we will not routinely respond to comments. A link to the comment standards can be found under the About section on the main page.
I invite you to bookmark macroblog and return for the next posting on Thursday, August 14. We hope that you find macroblog to be an informative addition to your economic reading.
» Back again from New Economist
The very welcome return of Dave Altig's Macroblog last week has prompted me to consider posting again too. Apologies for the protracted absence. My non-virtual life has been rather hectic in recent months. Now that things have settled down a little, I ... [Read More]
Tracked on Aug 20, 2008 10:57:15 PM