June 01, 2011
Should we even read the monthly inflation report? Maybe not. Then again...
In a recent issue of Economic Synopses, our colleague Dan Thornton of the St. Louis Fed questions the usefulness of the traditional core inflation statistics—the consumer price index (CPI) or the personal consumption expenditures price index with food and energy costs stripped out. Specifically, Dan asks whether the core inflation statistic is a better predictor of future inflation over the medium term (say, the next two or three years) than the headline inflation statistic. His conclusion is that:
"[F]or the most recent period, there is no compelling evidence that core inflation is a better predictor of future headline inflation over the medium term."
But Dan also invites the following:
"[I]n the interest of greater transparency and to allow the public to better understand its focus on core measures, the FOMC [Federal Open Market Committee] should provide evidence of the superior forecasting performance of the core measure it uses."
Well, of course neither writer of this blog post is on the FOMC, and equally obvious is the fact that we don't speak for anyone who is. Moreover, we're not very big fans of the traditional core measures, and we much prefer trimmed-mean estimators of inflation when thinking about recent price behavior.
Nevertheless, we'd like to attempt an answer to Dan's call, even if it wasn't aimed at us.
Here's the experiment Dan ran: He used the past 36-month trend in the traditional core inflation measure and in the ordinary headline inflation measure and tested which one more accurately predicted the next 36 months of headline inflation. He found that they're about the same. A similar exercise with 24-month trends yielded the same result.
The upshot of these experiments can be seen in the figure below (which is a figure of our construction, not his).
The chart shows how accurately we can predict headline CPI inflation over the next three years using only headline CPI price data or, alternatively, using only core CPI price data. The essence of the conclusion reached in the Economic Synopses piece is summarized within the shaded box: the forecast accuracy of the two- and three-year trends of the core CPI price measure doesn't seem to be a significant improvement over the plain-vanilla headline CPI.
But we wonder whether the contribution of the core inflation statistic is being accurately reflected in this experiment. For us, the power of a core inflation measure—whether it be the traditional ex-food-and-energy measure or some more statistical construct like the trimmed-mean estimators—can't be seen by comparing data trends of this sort. The volatility of an inflation statistic, what we would characterize as "noise," dissipates rather quickly, generally within a few months (although we understand that for food and energy it can play out over a longer period).
At issue is how much the most recent month's or quarter's inflation data should inform one's thinking about the future path of inflation. Implicit in the experiments reported above is the assumption that they shouldn't—or only as much as the most recent monthly or quarterly data influence the trend of the past two or three years.
It may be that the most recent monthly or even quarterly data are so noisy that they have nothing useful to contribute to our perception of the future inflation trend. But then again, an experiment that assumes there is no useful information in the most recent inflation data does not necessarily make it so.
We'd like to call your attention to the remainder of the figure above, where we ask the question, what happens if you try to predict headline CPI inflation over the next three years using only the most recent price data? For example, what if we restrict ourselves to looking only at the most recent month's CPI report? What we see is that the core inflation statistic provides a much improved prediction of the future inflation trend compared to the headline measure. Specifically, forecast accuracy is improved by nearly 50 percent if you use the core inflation measure. (For you wonks, the root mean square error, or RMSE, of the core CPI prediction is about 1.4 percent, compared with an RMSE of 2.7 percent for headline CPI inflation.)
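As a rough sketch of how this kind of accuracy comparison works, the toy Python below runs the same style of exercise on entirely synthetic data (the series, noise levels, and seed are invented for illustration, not the actual CPI data): forecast the next 36-month average of a noisy "headline" series using the single most recent reading of either the headline series itself or a less noisy "core" series, and compare RMSEs.

```python
import numpy as np

def trend_forecast_rmse(target, predictor, lookback, horizon):
    """RMSE from forecasting the next `horizon`-month average of `target`
    with the trailing `lookback`-month average of `predictor`."""
    errors = []
    for t in range(lookback, len(target) - horizon):
        forecast = predictor[t - lookback:t].mean()   # trailing trend
        realized = target[t:t + horizon].mean()       # realized future trend
        errors.append(forecast - realized)
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic monthly inflation rates: a slow-moving common trend, with much
# larger transitory noise in "headline" than in "core."
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(0.0, 0.05, 600))
headline = trend + rng.normal(0.0, 2.0, 600)
core = trend + rng.normal(0.0, 0.3, 600)

# Predict the future headline trend using only the single latest observation.
rmse_headline = trend_forecast_rmse(headline, headline, lookback=1, horizon=36)
rmse_core = trend_forecast_rmse(headline, core, lookback=1, horizon=36)
```

On this synthetic data the core-based forecast has a far lower RMSE, mirroring the qualitative pattern in the chart; as the lookback lengthens, the gap narrows, because averaging washes the transitory noise out of the headline series too.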
Now consider the behavior of CPI prices over the past three months. How informative of the future inflation trend are these prices? Well, the accuracy of the headline inflation statistic improves relative to the one-month percent change because averaging the data over time in this way necessarily reduces the transitory fluctuations in the data. But again, the three-month core CPI price statistic provides a much better prediction of future headline inflation than does the three-month trend in the ordinary CPI statistic. In other words, if you're wondering what the past-three months of data tell you about developing inflation pressure, you're much better off considering the core statistic than you are the headline number.
Here's another observation we'd like to make: The most recent three-month trend in the core CPI inflation measure appears to be a more accurate predictor of future inflation than the 12-month headline CPI trend. Moreover, the three-month trend in the core measure is roughly as accurate as its longer-term trends. This observation suggests that paying attention to the core measure may allow you to spot changes in the inflation trend much more quickly than using headline alone.
Again, to be clear, we aren't endorsing the core inflation statistic. We're fans of trimmed-mean estimators and think they do an even better job of informing thinking about what the most recent price data tell us about the likely future path of inflation. (As evidence, we included in the chart above the same forecasting results for the median CPI.) We only want to make one simple point—the usefulness of a core inflation measure is best seen in the monthly and quarterly intervals that span FOMC meetings, not in the two- or three-year trends, which are, by construction, largely silent about the most recent data.
May 13, 2011
Just how out of line are house prices?
In Wednesday's post, I referenced commentary from several bloggers regarding the sizeable decline in housing prices reported by Zillow earlier this week. As I discussed yesterday, the rat-through-the-snake process of working down existing and prospective distressed properties is likely far from over, and how that process plays out will no doubt have an impact on how much prices will ultimately adjust.
Recently, Barry Ritholtz's The Big Picture blog featured an update of a New York Times chart that suggests there will be a significant adjustment going forward:
Prior to the crisis, I was persistently advised that the better way to think about the "right" home price is to focus on price-rent ratios, because rents reflect the fundamental flow of implicit or explicit income generated by a housing asset. In retrospect that advice looks pretty good, so I am inclined to think in those terms today. A simple back-of-the-envelope calculation for this ratio—essentially comparing the path of the S&P/Case-Shiller composite price index for 20 metropolitan regions to the time path of the rent of primary residences in the consumer price index—tells a somewhat different story than the New York Times chart used in the aforementioned Ritholtz blog post:
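For concreteness, here is what that back-of-the-envelope calculation looks like in code. The index values below are invented placeholders, not actual S&P/Case-Shiller or CPI rent data; the point is just the mechanics of normalizing the price-rent ratio to a pre-boom base period.

```python
import numpy as np

def price_rent_ratio(price_index, rent_index, base_slice=slice(0, 12)):
    """Price-rent ratio, normalized so that its average over `base_slice`
    (a pre-boom reference window) equals 100."""
    ratio = np.asarray(price_index, float) / np.asarray(rent_index, float)
    return 100.0 * ratio / ratio[base_slice].mean()

# Illustrative (made-up) annual index levels: prices boom and bust,
# rents grind steadily upward.
prices = [100, 103, 106, 125, 150, 180, 170, 140, 120, 112]
rents  = [100, 102, 104, 107, 110, 113, 116, 118, 120, 122]
ratio = price_rent_ratio(prices, rents, base_slice=slice(0, 3))
```

With the first three years as the base, the normalized ratio soars during the boom and then falls back toward, and here below, its pre-boom level of 100—the same qualitative reading as the chart.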
According to this calculation, current prices have nearly returned to levels relative to rents that prevailed in the decade prior to the housing boom that began in the late 1990s.
Of course, the price-rent ratio is not the most sophisticated of calculations. David Leonhardt shows the results from other such calculations that suggest prices relative to rents are still elevated, at least relative to the average that prevailed in the 1990s. But the adjustment that would be required to bring current levels back into line with the precrisis average is still much lower than suggested by the Ritholtz graph.
How much farther prices fall is, I think, critical in the determination of how the economy will fare in the immediate future. Again, from President Lockhart:
"The housing sector also has indirect impacts on the economy. In particular, the direction of home prices is important for the economy because changes in home prices affect the health of both household and bank balance sheets. …
"The indirect influence of the housing sector on consumer activity and bank lending would almost certainly aggravate housing's impact on growth."
Here's hoping my chart is more predictive of housing prices than the alternative.
Update: The Calculated Risk blog does a thorough job and concludes that we don't have "to choose between real prices and price-to-rent graphs to ask 'how far out of line are house prices?' I think they are both showing that prices are not far above the historical lows."
Update: The Big Picture's Barry Ritholtz points me to his earlier argument against reliance on price-rent ratios.
By Dave Altig
senior vice president and research director at the Atlanta Fed
May 11, 2011
Is housing hurting the recovery?
Though the week is only half over, I'm going to nominate Stan Humphries and Zillow as bearers of the week's most distressing economic news:
"Home values fell three percent in the first quarter of this year, marking a pace of decline not seen since 2008 when the housing recession was at its worst. Home values fell one percent between February and March and 8.2 percent from March 2010."
"Previously, we anticipated a bottom in home values by the end of 2011. But with values falling by about 1 percent per month so far, it's unlikely that will happen. We now believe a bottom will come in 2012, at the earliest."
At The Curious Capitalist, on the other hand, Stephen Gandel says he's not so sure:
"To be sure, housing prices have fallen this year. But the Zillow numbers out today make the housing market look worse than it is. The problem is with how Zillow tracks home prices. Unlike other measures of the housing market, Zillow's numbers are not based on actual sales, but on estimates of what its model thinks your house, along with every other house in America is worth. Zillow's model is similar to how an appraiser figures out what your house is worth. It looks at past sales of houses that are similar to yours and then guesses what your house is worth. But by the time those sales are fed into Zillow's system they are months old. … If the housing market is turning, Zillow is going to miss it."
Is the housing market turning, particularly with respect to prices? Tough to say. If you want your glass half full, these words from the New York Fed's Liberty Street Economics might be the tonic for your tastes:
"This post gives our summary of the 2011:Q1 Quarterly Report on Household Debt and Credit, released today by the New York Fed. The report shows signs of healing in household balance sheets in the United States and the region, as measured by consumer debt levels, delinquency rates, foreclosure starts, and bankruptcies…
"Delinquency rates are generally down…
"New foreclosures fell nationally and in the region. About 368,000 individuals in the United States had a foreclosure notation added to their credit report between December 31 and March 31, a 17.7 percent decrease from the 2010:Q4 level. New foreclosure rates fell from 0.19 percent to 0.15 percent for all individuals nationwide…"
What may be the most important aspect of the report is highlighted by the Financial Times's Robin Harding: "…fewer new mortgages going bad, and some bad mortgages getting better." In fact, for the first time since the crisis began, the percentage of mortgages transitioning from 30 to 90 days delinquent to current exceeds the percentage transitioning to seriously delinquent (90-plus days).
There is, of course, plenty of material for the housing-price bears. For example, the flow of seriously delinquent mortgages is quite elevated.
According to estimates from CoreLogic, the supply of "distressed" homes is greater than 15 months at the current pace of sales:
"Most analysts now expect that the housing market won't bottom out until sometime next year. Until that happens, it's unlikely that that the sluggish economic recovery we're seeing right now will improve much."
The view here at the Atlanta Fed—and the answer to the question posed in the title of this post—was provided earlier today by our president, Dennis Lockhart, in a speech given to the Atlanta Council for Quality Growth:
"…can we have high-quality growth while the residential real estate and commercial real estate sectors continue to be so weak? Not completely, in my opinion. The recovery will progress, but it will not be robust until we work through the economy's serious imbalances, including those in the real estate sector.
"As I look ahead, I think the most reasonable assumption is that improvement of the real estate sector will lag an otherwise improving economy. But I am encouraged by the fact that the economy is increasingly on firmer footing."
I will let you decide whether that glass is half-empty or half-full.
By Dave Altig
senior vice president and research director at the Atlanta Fed
January 04, 2011
Looking back, looking forward
Kicking off the new year, the latest edition of the Atlanta Fed's EconSouth magazine contains our annual review of the year past and our bravest guess about the one to come (articles in this issue include outlooks for the national, international, and Southeast economies and features on small business and other topics). If we are looking for enduring lessons about the national economy from the previous year, I nominate the time-tested but oft-ignored advice to be wary of reading too much into short-term economic ups and downs:
"Better-than-expected increases in several economic indicators in the spring led many economists to revise up their growth estimates. A quick snap-back in the economy, as has been typical in most other deep recessions in the post–World War II era, seemed a distinct possibility.
"However, such a snap-back was not to be. It is now clear that some of the rebound in growth stemmed from a rebuilding of depleted inventories in the first quarter and the waning influence of various government spending programs. By summer, the incoming economic data had weakened considerably, and the pace of expansion in the major expenditure categories raised the specter of a step backward into contraction…
"Bumpy growth for an economy transitioning out of a recession is not unusual. For example, GDP [gross domestic product] jumped by 3.5 percent in the quarter immediately following the end of the 2001 recession, but it then slowed to just 0.1 percent three quarters later. To date, that pattern of growth proceeding in fits and starts has certainly been representative of this recovery."
In fact, it now appears that the U.S. economy grew in 2010 by somewhere in the range of 2.5 percent to 3 percent, just where the Blue Chip consensus was at the beginning of the year (and, incidentally, somewhat better than what we at the Atlanta Fed were expecting). Still…
"Despite these improvements, economic performance has been somewhat disappointing. The recovery has not been strong enough to meaningfully reduce the unemployment rate. Throughout the year, the unemployment rate has remained well above 9 percent. Income growth (excluding transfer payments made by the government) has been weak—up less than 1 percent for the year on an inflation-adjusted basis. The housing market is struggling in the face of continuing foreclosures despite a variety of tax incentives and historically low mortgage rates, and the commercial real estate sector likewise has not recovered. This theme of improvement in some areas and ongoing weakness in others illustrates the unevenness of the recovery and more uncertainty than normal about future economic prospects."
Will 2011 be a different story? Quantitatively, probably yes—growth should take another step up this year. But the story, we think, remains essentially the same:
"The incoming data as well as reports from the Atlanta Fed's business contacts are broadly consistent with a relatively restrained growth trajectory. There are, in fact, several factors that will plausibly inhibit the pace of the expansion. Weakness in residential and commercial real estate is ongoing. Business and consumer attitudes are still extremely cautious, and slow spending growth by businesses and households is continuing to hold back inflation. Over the near term, additional business spending appears likely to be geared primarily toward activities such as targeted mergers and acquisitions and further increases in efficiency rather than toward pure expansion. Slow and uneven sales, opportunities to reduce costs through increased productivity, structural adjustments in labor markets, and uncertainty over government policy—including changes in labor and environmental rules, tax policy, and financial regulations—are restraining job creation. Slow job growth, naturally, implies that unemployment could remain elevated for some time."
"Of course, risks lurk on both the upside and downside for the outlook, but there are reasons for optimism. Financial firms and households have made significant headway in repairing their severely compromised balance sheets, and most are in a much better financial position than they were a year ago. Businesses in particular have substantially more liquidity and significant capacity to deploy capital to new projects. Some of the uncertainties that have vexed private decision makers, such as the course of near-term tax policy, may finally be abating…
"Recent surprises in the economic indicators have been predominantly to the upside, which is a very good sign. If such positive surprises persist, and confidence in the economic environment grows, it could be that current estimates for only slight improvement in 2011 have been too modest."
Note: For more perspective on the 2011 economic outlook and monetary policy, stay tuned for Atlanta Fed President Dennis Lockhart's speech to the Rotary Club of Atlanta, scheduled for Monday, January 10. The text will be posted on the Bank's website.
By Dave Altig
senior vice president and research director at the Atlanta Fed
October 07, 2010
Using TIPS to gauge deflation expectations
In the recent Survey of Professional Forecasters, economists were asked to give their subjective probability of deflation during the next year. Specifically, they were asked about the chances that the quarterly consumer price index excluding food and energy (core CPI) will decline in 2011. According to the respondents, the probability of core CPI deflation in 2011 was only 2 percent.
This rather sanguine view of the probability of deflation is encouraging. But is it a view shared by noneconomists? While there are many sources used to measure inflation expectations, there aren't many that gauge inflation uncertainty or the risk of deflation. However, one might estimate a probability of deflation as seen by investors by exploiting the different deflation safeguards of a pair of Treasury Inflation Protected Securities (TIPS), which have about the same maturity date but different issue dates.
Here's the idea: A TIPS cannot pay less than its face value at maturity, so the principal repayment of a five-year TIPS issued today is not reduced if the five-year rate of inflation is negative over the life of the security. But a 10-year TIPS issued five years ago will have its capital gain from accrued inflation reduced if there is a net decline in the CPI over the next five years. As a result, part of the real yield spread between the 10-year and five-year TIPS issues should reflect the value of the better deflation safeguard of the latter security.
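The asymmetry can be made concrete with a few lines of arithmetic. Both securities carry the same floor relative to their own issue dates, but because the seasoned 10-year was issued at a lower CPI level, deflation from today's level eats into its accrued inflation adjustment, while the newly issued five-year is fully protected. The CPI levels below are hypothetical, chosen only to illustrate the mechanics.

```python
def tips_principal(face, cpi_issue, cpi_maturity):
    """Inflation-adjusted TIPS principal at maturity: face value scaled by
    CPI growth since issue, but never less than face (the deflation floor)."""
    return max(face * cpi_maturity / cpi_issue, face)

# Suppose the CPI falls 5% over the next five years (215 -> 204.25).
# Hypothetical reference CPIs: the new 5-year TIPS is issued at today's
# level of 215; the seasoned 10-year was issued five years ago at 195.
new_5yr  = tips_principal(100, cpi_issue=215, cpi_maturity=204.25)  # floor binds
old_10yr = tips_principal(100, cpi_issue=195, cpi_maturity=204.25)  # accrual shrinks
```

The new five-year still repays its full face value of 100, while the seasoned 10-year repays about 104.7 instead of the roughly 110.3 it would have repaid had the CPI held steady—so its holders bear the deflation that the five-year's floor absorbs, and the five-year's better safeguard should show up in the real yield spread.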
In a comment on a paper by Campbell, Shiller, and Viceira, Jonathan Wright derives a very simple formula for calculating a lower bound on the probability of deflation using this real yield spread. (The lower-bound formula is r·m/ln(CPI5yr/CPI10yr), where r is the yield spread between the 10-year and five-year TIPS real yields, m is the number of years until the midpoint of the maturity dates of the two TIPS, and CPI5yr and CPI10yr are the levels of the NSA CPI on the issue dates of the five-year and 10-year TIPS. These reference CPIs are available here. Deflation is defined as the level of the CPI being lower than its value on the issue date of the five-year TIPS.) Wright's calculation makes a number of simplifying assumptions, some of which are counterfactual, but it is easy to compute—almost literally a back-of-the-envelope calculation if you have two real TIPS yields in hand. The formula also has the advantage that it does not require any assumptions about the probability distribution of inflation.
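As a sketch, the lower-bound formula translates directly into code. The spread, midpoint, and reference CPI levels below are illustrative stand-ins, not actual market quotes.

```python
import math

def wright_deflation_lower_bound(r, m, cpi_5yr, cpi_10yr):
    """Lower bound on the probability of deflation implied by the real-yield
    spread between a seasoned 10-year TIPS and a newly issued 5-year TIPS,
    per the formula stated in the text: r*m / ln(CPI_5yr / CPI_10yr).

    r        -- 10-year minus 5-year TIPS real yield spread (decimal)
    m        -- years until the midpoint of the two maturity dates
    cpi_5yr  -- reference CPI on the 5-year TIPS issue date
    cpi_10yr -- reference CPI on the 10-year TIPS issue date
    """
    return r * m / math.log(cpi_5yr / cpi_10yr)

# Illustrative inputs only: a 10-basis-point spread, 4.75 years to the
# midpoint, and ~11% cumulative CPI inflation between the two issue dates.
p_lower = wright_deflation_lower_bound(r=0.0010, m=4.75,
                                       cpi_5yr=218.0, cpi_10yr=196.0)
```

With these made-up inputs the bound works out to roughly a 4.5 percent probability of deflation; the exact probabilities discussed next require the fuller pricing model.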
To get exact probabilities of deflation instead of a lower bound, I developed a simple model for TIPS pricing. The model is an extension of the TIPS pricing model developed by Brian Sack. One has to make a lot of assumptions to derive these estimates—which you can read about in the appendix to this post (link provided in last paragraph)—but let's get to the main results. The figure below plots the probability that the level of the reference CPI on April 15, 2015, is lower than its April 15, 2010, level. (The reference CPI is the nonseasonally adjusted consumer price index interpolated to a daily frequency; it is calculated by taking a weighted average of the CPI two months ago and three months ago.) If the April 2015 reference CPI ended up below this threshold, then the deflation safeguard for the five-year TIPS would kick in. Also included in the graph is the lower bound of this "deflation probability" calculated using Wright's formula.
An alternative way of generating deflation probabilities is to exploit the estimated "confidence interval" from a forecasting model of inflation. When I use a variant of the inflation model proposed by Stock and Watson (for those interested in more detail, the model I am using is the Stock-Watson unobserved components with stochastic volatility, or UC-SV model), it says there is about a 10 percent chance that average CPI inflation over the next five years will be below zero.
Is this the last word on estimating deflation probability? Of course not; there are more than a few pitfalls in this method of calculating a deflation probability, some of which are described in the aforementioned technical appendix posted on the Atlanta Fed's Inflation Project. But this approach does have the advantage of exploiting information from market prices on traded securities. As such, it may prove a valuable addition to our toolkit of indicators. Consequently, we intend to update these estimates and post them on the Inflation Project web page every Thursday afternoon.
By Patrick Higgins, an economist in the Atlanta Fed's research department
August 03, 2010
What makes forecasting tough
Bloomberg's Caroline Baum recounts her recent conversation with the Atlanta Fed's own Mike Bryan under the headline For Good Economic Forecasts, Try Flipping a Coin:
"How do economists fare when it comes to real forecasting, to predicting [gross domestic product] GDP growth and inflation one year out? About as good as a coin toss, according to Bryan's research. Less than half the economists did better than the naive forecast, which is based on no understanding of the economy and merely assumes next year's outcome will be the same as this year's. It's what you'd expect if the results were purely random."
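The naive benchmark in that exercise is easy to state precisely: next year's outcome equals this year's. A minimal sketch of scoring a panel of forecasters against it, with invented numbers rather than Bryan's actual data:

```python
import numpy as np

def share_beating_naive(actual, forecasts):
    """Fraction of forecasters whose one-year-ahead predictions had a lower
    RMSE than the naive forecast 'next year = this year.'

    actual    -- outcomes, one per year
    forecasts -- dict of name -> predictions for actual[1:], each made a year ahead
    """
    actual = np.asarray(actual, float)
    naive_rmse = np.sqrt(np.mean((actual[:-1] - actual[1:]) ** 2))
    wins = sum(
        np.sqrt(np.mean((np.asarray(f, float) - actual[1:]) ** 2)) < naive_rmse
        for f in forecasts.values()
    )
    return wins / len(forecasts)

# Invented example: one forecaster tracks the outcomes closely, one doesn't.
outcomes = [2.0, 3.0, 1.0, 2.5, 2.0]
panel = {
    "alpha": [2.9, 1.2, 2.4, 2.1],   # close to the realized values
    "beta":  [5.0, 4.0, 5.0, 5.0],   # far off
}
share = share_beating_naive(outcomes, panel)
```

Bryan's finding amounts to this share coming in below one-half for the real panel—what you'd expect if forecast skill were no better than chance.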
A case in point could be found yesterday on Bloomberg, which featured a "chart of the day" that looked something like the one below (though I've updated the data for manufacturing inventories, given today's factory orders report):
The chart was accompanied by this commentary:
"U.S. business inventories are so low relative to demand that any increase may act as a catalyst for larger companies to add workers, according to Nicholas Colas, chief market strategist at BNY ConvergEx Group."
A few days back, in The Wall Street Journal, you could find this:
"Until recently, businesses had helped supercharge economic growth by restocking inventories. Now the oomph from inventories is waning.
"In the second quarter, the change in private inventories added slightly more than one percentage point to the 2.4% increase in gross domestic product from the first quarter, measured at a seasonally adjusted annual rate, the Commerce Department said Friday.
"That is a big change from the first quarter, when inventory-building contributed 2.6 percentage points to GDP growth of 3.7%, and the fourth quarter of last year, when it contributed 2.8 percentage points to GDP growth of 5%....
"But Friday's report suggests companies are nearly done restocking their shelves.
" 'Our sense is current inventories are about where they need to be globally, both in industrial distribution and with the large North American retailers,' John Lundgren, chief executive of Stanley Black & Decker Inc., said in a July 21 call with analysts discussing the tool and hardware maker's second-quarter results."
But, on the same topic, Seeking Alpha opined:
"Inventory increases added 1.05% to second quarter GDP. Based on the annual revision, they added 2.64% to first quarter GDP or 71% of the total increase. Inventories were also responsible for approximately two-thirds of the GDP increase in the fourth quarter of 2009. The entire economic 'recovery' has essentially been an inventory adjustment [emphasis theirs]. This does not bode well for the future."
So one analysis suggests that the latest readings on inventories portend a boost to GDP, one foresees a drag on GDP, and yet another divines that inventories are basically played out as an economic story for the balance of the year.
Again from the Baum piece:
"Bryan said it's not just about getting the number right. 'It's about the narrative.' "
For comparison, it's also useful to take a longer look at what effect inventories have on GDP growth coming out of a recession; see the graph below. It charts the percentage-point contributions of various components to real GDP growth in the first four quarters following the end of a recession (the current recession is assumed to have ended in the second quarter of 2009). I've shown on the graph the percentage contribution of inventories in each of the last seven recoveries, beginning with the one in 1971.
Regarding the point made in Seeking Alpha, inventories have contributed around 70 percent of the economic recovery's growth so far, but in the recovery that began in 2002, inventories contributed 75 percent in the first four quarters. So the last two recovery periods stand out for large inventory components. But looking across the data, it's hard to say what an ordinary inventory contribution would be. Regardless of whether inventories are an unusually large part of this recovery, in absolute terms the scale of the recent inventory cycle—the initial liquidation and the subsequent restocking—has been unprecedented.
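The "share of the recovery" arithmetic behind these comparisons is simple enough to show directly, using the percentage-point contributions quoted above:

```python
def contribution_share(component_pp, total_growth_pp):
    """Share of GDP growth accounted for by one component: its cumulative
    percentage-point contribution divided by cumulative growth over the
    same quarters."""
    return sum(component_pp) / sum(total_growth_pp)

# From the quotes above: inventories added 2.64pp of 3.7% annualized growth
# in 2010:Q1 and 1.05pp of 2.4% in 2010:Q2.
q1_share = contribution_share([2.64], [3.7])            # ~71% of Q1 growth
h1_share = contribution_share([2.64, 1.05], [3.7, 2.4]) # ~61% of H1 growth
```

The Q1 figure reproduces Seeking Alpha's 71 percent; summing over both quarters softens it somewhat, which is one reason the "inventory share" narrative is so sensitive to the window chosen.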
By Andrew Flowers, senior economic research analyst in the Atlanta Fed's research department
June 30, 2010
Keeping an eye on Europe
In June, a third of the economists in the Blue Chip panel of economic forecasters indicated that they had lowered their growth forecast over the next 18 months as a consequence of Europe's debt crisis. When pushed a little further, 31 percent said that weaker exports would be the channel through which this problem would hinder growth, while 69 percent thought that "tighter financial conditions" would be the channel through which debt problems in Europe could hit U.S. shores.
Tighter financial conditions also were mentioned by the Federal Open Market Committee in its last statement, where the committee noted, "Financial conditions have become less supportive of economic growth on balance, largely reflecting developments abroad."
In his speech today, Atlanta Fed President Dennis Lockhart identified the European sovereign debt crisis as one of the sources of uncertainty for the U.S. economy that he believes "have clouded the outlook." President Lockhart explicitly expressed his concern that Europe's "continuing and possibly escalating financial market pressures will be transmitted through interconnected banking and capital markets to our economy."
Negative effects from the European sovereign debt crisis can be transmitted to the U.S. economy through a number of financial channels, including higher risk premiums on private securities, a considerable rise in uncertainty, and sharply increased risk aversion. Another important channel is the direct exposure of the U.S. banking sector—both through holdings of troubled European assets and counterparty exposure to European banks, which not only have a substantial exposure to the debt-laden European countries but have also been facing higher funding costs. The LIBOR-OIS spread has widened notably (see the chart below), liquidity is now concentrated in tenors of one week and shorter, and the market has become notably tiered.
Banks in the most affected countries (Greece, Portugal, Ireland, Spain, and Italy) and other European banks perceived as having a sizeable exposure to those countries have to pay higher rates and borrow at shorter tenors. Although for now U.S. banks can raise funds more cheaply than many European financial institutions, some analysts believe that there's a risk that the short-term offshore dollar market may become increasingly strained, leading to funding shortages and, conceivably, forced asset sales.
Bank for International Settlements data through the end of December of last year show that the U.S. banking system's risk exposure to the most vulnerable EU countries appears to be manageable. U.S. banks' on-balance-sheet financial claims vis-à-vis those countries, adjusted for guarantees and collateral, look substantial in absolute terms but are rather small relative to the size of U.S. banks' total financial assets (see the chart below). The exposure to Spain is the biggest, closely followed by Ireland and Italy. Overall, the five countries account for less than 2 percent of U.S. banks' assets.
U.S. exposure to developed Europe as a whole, however, is much higher at $1.2 trillion, so U.S. financial institutions may feel some pain if the European economy slows down markedly. How likely is a marked slowdown? It's difficult to determine, of course, but when asked about the largest risks facing the U.S. economy over the next year, the Blue Chip forecasters put "spillover effects of Europe's debt crisis" at the top of their list.
By Galina Alexeenko, economic policy analyst at the Atlanta Fed
June 18, 2010
Another look at consumer sentiment and consumer spending
In the most recent economic forecasting survey by The Wall Street Journal, 23 percent of the surveyed economists said consumers spending more readily than anticipated is the biggest upside risk to their growth forecasts for the second half of the year. So anything that can shed light on future spending habits is of particular interest. Two of the most commonly cited measures of consumer attitudes are the Conference Board's Consumer Confidence Index and the Thomson Reuters/University of Michigan's Index of Consumer Sentiment. A key question is, do these indicators improve consumption forecasts?
Previously, economic researchers have looked at the predictive power of these indexes for consumer spending, and they generally found that the ability of consumer confidence measures to predict consumer spending largely disappeared once some other measures of economic conditions were taken into account. One such example is a study by Sydney Ludvigson, which examined the forecasting record of these confidence measures through 2002 (for other examples, see here and here). Much has happened since then, of course, and a simple inspection of the two series reveals that both confidence measures fell fairly steadily starting in August 2007 until reaching near-record lows by June 2008. Therefore, a look at the more recent predictive track record of these indicators seems warranted.
For this examination, we conducted an out-of-sample forecasting experiment using a pair of statistical models (technically, Bayesian vector autoregression models). The first model predicts real personal consumption expenditures as a function of its own past values and past values of other variables such as real measures of stock market prices and disposable personal income. The second model includes all of these variables augmented by the two measures of consumer attitudes. At each point in time we use only the data that would have been available to forecast real consumption data anywhere from one to 12 months out. (For example, in the middle of February 2009, consumption data would have been available through December 2008 while some of the other variables would have been available through January or February 2009. The experiment is not "real time" in the sense that we use the latest vintage of data, which include revisions to the historical data that would not have been available to forecasters at the time.) Forecasts of consumption are made for the 1990–2003 period and then again for the period from 2004 to the present. The root mean squared forecast error is used to gauge the accuracy of the forecasts, with smaller numbers corresponding to smaller misses on average. As the accompanying chart shows, adding the two measures of consumer attitudes improves the forecast much more in the post-2003 sample than in the earlier period.
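The logic of the experiment above can be sketched in a few lines of code. This is a simplified stand-in, not the authors' actual models: it uses direct h-step regressions on synthetic data rather than Bayesian vector autoregressions on real consumption, income, and sentiment series, but the expanding-window, out-of-sample RMSE comparison works the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
T, h = 240, 12  # months of synthetic data; 12-month-ahead forecasts

# A shared latent factor so that "sentiment" carries some genuine
# signal about future "consumption" (purely illustrative).
factor = np.cumsum(rng.normal(size=T))
consumption = 0.5 * factor + rng.normal(size=T)
income = 0.3 * factor + rng.normal(size=T)
sentiment = 0.8 * factor + rng.normal(size=T)

def oos_rmse(X, y, start=120):
    """Expanding-window direct h-step forecasts of y; return RMSE."""
    errs = []
    for t in range(start, len(y) - h):
        # train on pairs (X[s], y[s + h]) observable through time t
        A = np.column_stack([np.ones(t - h), X[: t - h]])
        beta, *_ = np.linalg.lstsq(A, y[h:t], rcond=None)
        pred = np.concatenate([[1.0], X[t]]) @ beta
        errs.append(pred - y[t + h])
    return float(np.sqrt(np.mean(np.square(errs))))

X_base = np.column_stack([consumption, income])          # no attitudes
X_aug = np.column_stack([consumption, income, sentiment])  # with attitudes

print("RMSE without sentiment:", oos_rmse(X_base, consumption))
print("RMSE with sentiment:   ", oos_rmse(X_aug, consumption))
```

A smaller RMSE for the augmented model is the analogue of the finding in the chart: the attitude measures carry predictive information beyond the other variables.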
We experimented with several variations in the model's specification but were unable to overturn the general finding that adding the attitude measures improved forecasts in recent years. We found this result intriguing and somewhat surprising.
A recent paper by Barsky and Sims argues that the Index of Consumer Sentiment reflects the public's awareness of economic conditions. In fact, the survey used to construct this index asks respondents about recent news they have heard related to changes in economic conditions. From August 2007 to June 2008, news of "unfavorable higher prices" was frequently mentioned in the survey. A study by James Hamilton showed that part of the deterioration in the Index of Consumer Sentiment during this period could be explained by rising energy prices. However, adding a measure of oil prices to our model did not overturn the basic finding of improved consumption spending forecasts in models that included measures of consumer attitudes.
It remains an open question why these measures of consumer attitudes have become more useful in recent years. A statistical anomaly, greater or more accessible news coverage of the economy, and a generally more aware public are all possibilities. If it is just luck, then time will eventually overturn the result. But if these consumer attitude indicators have become a more useful summary of a wide variety of developments in the economy, then their forecasting power will persist. Time and further research will help sort this out.
By Patrick Higgins, an economist at the Atlanta Fed
September 10, 2009
Economists got it wrong, but why?
Economists definitely received some bad publicity this past week, most prominently in the New York Times, where Paul Krugman asked "How Did Economists Get It So Wrong?," a nonrhetorical question he goes on to answer this way:
"As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth… the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.
"Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong. They turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets—especially financial markets—that can cause the economy’s operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don't believe in regulation."
For at least one part of the Krugman critique, I have some sympathy. On the occasion of a 2005 conference honoring the 25th anniversary of Chris Sims's pathbreaking article "Macroeconomics and Reality"—an article that was itself a critique of empirical practices then dominant in central banks—I had this to say about the dangers of groupthink and questions we might be missing as a consequence:
"We are close to falling dangerously in love with the basic New Keynesian framework, the sticky price aspects of it in particular. Here is a simple observation: In the [statistical models] that are identified in the usual ways, inflation wants to drop like a rock in response to a basic technology shock. Models that engineer significant price inertia don’t want to let that happen…
"One final point. In my time at the Fed, I have come to appreciate that most of the really important policy choices have nothing to do with Taylor rules or the like. They have to do with those episodes of financial crisis in which Taylor-like rules are woefully inadequate. Think here October 1987, the period from summer 1997 through the end of 1998, and the aftermath of September 11, 2001."
Though Professor Krugman spends a lot of time attacking acolytes of the so-called "Chicago" school, the fact is that the New Keynesian framework (described here by Greg Mankiw) is the workhorse theory within policymaking circles. If economists were unable to see their way to the macroeconomic consequences of the unfolding crisis, criticism needs to start with that framework.
I think such criticism is warranted, but the thrall of the New Keynesian world view has little to do with how "beautiful" the model is or that it is built on a lot of "impressive-looking mathematics." Quite the opposite. As I said in my 2005 comments, "the dynamics of the policy briefing game seem to favor forecasting performance over theoretical integrity." The models that we use for policy analysis are constructed on the basis of what connects with the facts we see (or think we see) in the data. If these models fail to contemplate things that might happen, it is precisely because there is a bias toward frameworks that explain history.
Robert Lucas zeroed in on this point in his "defence of the dismal science":
"The Economist’s briefing [criticizing the foresight of mainstream economists] also cited as an example of macroeconomic failure the 'reassuring' simulations that Frederic Mishkin, then a governor of the Federal Reserve, presented in the summer of 2007. The charge is that the Fed’s FRB/US forecasting model failed to predict the events of September 2008. Yet the simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring. Until the Lehman failure the recession was pretty typical of the modest downturns of the post-war period. There was a recession under way, led by the decline in housing construction. Mr Mishkin's forecast was a reasonable estimate of what would have followed if the housing decline had continued to be the only or the main factor involved in the economic downturn."
Some attempts have been made to exploit the information contained in data from the Great Depression. (If you have patience for technical analysis you can find an example here.) And there have been many attempts to jerry-rig existing models to capture the financial shocks and their aftermath, especially once we had seen what that sort of reality looks like. But, by and large, the last year has been a data point we haven’t seen before, and it is not so surprising that models designed to capture the average quarter in the economy’s life would not do so well when very unaverage events arise.
It is certainly clear that the dominant pre-2007 strain of New Keynesian models was inadequate to the task that would confront us post-2007. That this was the case was not unknown. If I may quote myself again:
"I have in the past agreed that it is useful to think of the policy choices [following financial market events like the stock market crash of 1987] as policy shocks. I would still argue that today. But it sure would be helpful if at least some of these events would appear as something more than completely random disturbances. In other words, it would be very useful to have usable measures of what we loosely call 'financial market fragility,' and more useful still to have a coherent [sophisticated] quantitative model that captures them."
The problem with that prescription was that the relative infrequency of such events would likely have required us to step outside of our existing data-driven policy models and apply more theory, not less.
So does all this lead to the conclusion that we ought to ditch the presumptions of rationality and (largely) efficient markets, as Professor Krugman suggests? I have my doubts. Even some of the examples in the Krugman article seem to rely on the power of those ideas. In describing the problem of the lower bound of zero on nominal federal funds rates, he says this:
"During a normal recession, the Fed responds by buying Treasury bills—short-term government debt—from banks. This drives interest rates on government debt down; investors seeking a higher rate of return move into other assets, driving other interest rates down as well; and normally these lower interest rates eventually lead to an economic bounceback…
"But zero, it turned out, isn’t low enough to end this recession. And the Fed can't push rates below zero, since at near-zero rates investors simply hoard cash rather than lending it out. So by late 2008, with interest rates basically at what macroeconomists call the 'zero lower bound' even as the recession continued to deepen, conventional monetary policy had lost all traction."
That whole story relies on a conventional monetary transmission mechanism, one that fundamentally plays off of efficient markets thinking.
In another passage from the New York Times article, we have this:
"I like to explain the essence of Keynesian economics with a true story that also serves as a parable, a small-scale version of the messes that can afflict entire economies. Consider the travails of the Capitol Hill Baby-Sitting Co-op.
"This co-op, whose problems were recounted in a 1977 article in The Journal of Money, Credit and Banking, was an association of about 150 young couples who agreed to help one another by baby-sitting for one another’s children when parents wanted a night out. To ensure that every couple did its fair share of baby-sitting, the co-op introduced a form of scrip: coupons made out of heavy pieces of paper, each entitling the bearer to one half-hour of sitting time…
"Unfortunately, it turned out that the co-op’s members, on average, wanted to hold a reserve of more than 20 coupons, perhaps, in case they should want to go out several times in a row. As a result, relatively few people wanted to spend their scrip and go out, while many wanted to baby-sit so they could add to their hoard. But since baby-sitting opportunities arise only when someone goes out for the night, this meant that baby-sitting jobs were hard to find, which made members of the co-op even more reluctant to go out, making baby-sitting jobs even scarcer…
"In short, the co-op fell into a recession."
That's a great example, but where is the irrationality? That tight monetary policy might cause a downturn in the economy may be absent from purely classical models, but it is dead center of the New Keynesian framework. The problem was that our mechanism for capturing monetary nonneutrality—essentially wage and price stickiness—was far too simplistic to capture the shocks that we were about to face (and that we arguably faced to lesser degrees during past financial market events).
In short, I accept the criticism that the dominant New Keynesian framework for forecasting and economic modeling needs some work (to say the least). I'm less convinced that we require a major paradigm shift. Despite suggestions to the contrary, I've yet to see the evidence that progress requires moving beyond the intellectual boundaries in which most economists already live.
By David Altig, senior vice president and research director at the Atlanta Fed
August 06, 2009
Every recovery is the same; each recovery is different
Two weeks ago, macroblog looked at the rather pessimistic expectations for what the economic recovery might look like this time around. Included was part of the narrative noting that structural adjustments are likely to impede a quick snapback in gross domestic product (GDP) over the coming quarters.
Macroblog reader Bryan Lassiter asked, "Do economists typically predict a weaker recovery than history suggests?" Good question. To state the question in a slightly different way, "Has the United States ever been in a situation where it experienced a deep recession and forecasters subsequently predicted a slow recovery that ultimately proved to be incorrectly pessimistic?"
To get at these questions, we can look at real-time real GDP data and the Survey of Professional Forecasters (SPF) available from the Federal Reserve Bank of Philadelphia (while the SPF started in 1968, forecasts of real GDP began in 1981).
The chart plots the depth of the recession on the x axis and strength of recovery on the y axis (updated from the 7/24 post to include last Friday's GDP release). The blue diamonds were constructed using forecasts that were made in the quarter the recession officially ended; the red squares are what actually happened.
To illustrate the exercise, pretend we're back in the fourth quarter of 2001 and the recession is over (although we didn't know it at the time). Given what we thought we knew about the economy then, we can look at what forecasters were expecting in terms of GDP and compare it with what was ultimately reported by the U.S. Bureau of Economic Analysis. Looking at the 2001 recession, we can see that the expectations for recovery were not that far off, but the recession turned out to be milder than initially estimated, partly because of data revisions and partly because of forecast error. The 1990–91 recession showed the reverse pattern: the recovery forecasts were close to the actual experience, but the recession ended up being more severe than initially thought.
What stands out in the chart is the recovery following the 1981–82 recession. In real time, four-quarter GDP growth was expected to be about 3.5 percent but wound up being much stronger, at nearly 8 percent. In that instance, at least, the answer to the initial question is yes: economists predicted a recovery that ultimately proved far stronger than anticipated. Even so, the 1981–82 blue diamond sits relatively close to the cluster of other recessions on the chart, meaning the recovery forecast was not exceptionally weak. The current recession, then, still looks like an outlier: given the nearly 4 percent decline in GDP, the hope would be to see something stronger than the 2.5 percent growth expected over the next year.
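The two quantities plotted on the chart's axes can be computed from a quarterly real GDP level series along these lines. This is a sketch with made-up numbers, not the actual SPF or BEA data; the function and the synthetic series are illustrative assumptions.

```python
import numpy as np

def depth_and_recovery(gdp, peak, trough):
    """Peak-to-trough decline (depth, percent) and four-quarter growth
    from the trough (recovery strength, percent)."""
    depth = 100 * (gdp[trough] / gdp[peak] - 1)
    recovery = 100 * (gdp[trough + 4] / gdp[trough] - 1)
    return depth, recovery

# Stylized quarterly real GDP levels around a recession (synthetic):
# peak in quarter 0, trough in quarter 3, then a recovery.
gdp = np.array([100.0, 99.0, 97.5, 96.5, 97.3, 98.6, 100.1, 101.9, 103.5])
d, r = depth_and_recovery(gdp, peak=0, trough=3)
print(f"depth: {d:.1f}%, four-quarter recovery: {r:.1f}%")
```

Each recession episode yields one (depth, recovery) pair; computing the pair once from real-time forecasts and once from the revised actuals gives the blue-diamond and red-square points, respectively.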
Whatever the impediments to a sharp recovery, forecasts are certainly telling us that economists are treating this recession as being different from previous ones.
By Mike Hammill, economic policy analyst at the Atlanta Fed