June 24, 2014
Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 2)
On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.
This is the second of three posts based on that talk. Yesterday's post considered the median CPI and other trimmed-mean measures.
Is it more expensive, or does it just cost more money? Inflation versus the cost of living
Let me make two claims that I believe are, separately, uncontroversial among economists. Jointly, however, I think they create an incongruity for how we think about and measure inflation.
The first claim is that over time, inflation is a monetary phenomenon. It is caused by too much money chasing a limited number of things to buy with that money. As such, the control of inflation is rightfully the responsibility of the institution that has monopoly control over the supply of money—the central bank.
My second claim is that the cost of living is a real concept, and changes in the cost of living will occur even in a world without money. It is a description of how difficult it is to buy a particular level of well-being. Indeed, to a first approximation, changes in the cost of living are beyond the ability of a central bank to control.
For this reason, I think it is entirely appropriate to think about whether the cost of living in New York City is rising faster or slower than in Cleveland, just as it is appropriate to ask whether the cost of living of retirees is rising faster or slower than it is for working-age people. The folks at the Bureau of Labor Statistics produce statistics that can help us answer these and many other questions related to how expensive it is to buy the happiness embodied in any particular bundle of goods.
But I think it is inappropriate for us to think about inflation, the object of central bank control, as being different in New York than it is in Cleveland, or to think that inflation is somehow different for older citizens than it is for younger citizens. Inflation is common to all things valued by money. Yet changes in the cost of living and inflation are commonly talked about as if they are the same thing. And this creates both a communication and a measurement problem for the Federal Reserve and other central banks around the world.
Here is the essence of the problem as I see it: money is not only our medium of exchange but also our numeraire—our yardstick for measuring value. Embedded in every price change, then, are two forces. The first is real in the sense that the good is changing its price in relation to all the other prices in the market basket. It is the cost adjustment that motivates you to buy more or less of that good. The second force is purely nominal. It is a change in the numeraire caused by an imbalance in the supply and demand of the money being provided by the central bank. I think the concept of "core inflation" is all about trying to measure changes in this numeraire. But to get there, we need to first let go of any "real" notion of our price statistics. Let me explain.
As a cost-of-living approximation, the weights the Bureau of Labor Statistics (BLS) uses to construct the Consumer Price Index (CPI) are based on some broadly representative consumer expenditures. It is easy to understand that since medical care costs are more important to the typical household budget than, say, haircuts, these costs should get a greater weight in the computation of an individual's cost of living. But does inflation somehow affect medical care prices differently than haircuts? I'm open to the possibility that the answer to this question is yes. It seems to me that if monetary policy has predictable, real effects on the economy, then there will be a policy-induced disturbance in relative prices that temporarily alters the cost of living in some way.
But if inflation is a nominal experience that is independent of the cost of living, then the inflation component of medical care is the same as that in haircuts. No good or service, geographic region, or individual experiences inflation any differently than any other. Inflation is a common signal that ultimately runs through all wages and prices.
And when we open up to the idea that inflation is a nominal rather than a real concept, we begin to think about the BLS's market basket in a fundamentally different way from what the BLS intends to measure.
This, I think, is the common theme that runs through all measures of "core" inflation. Can the prices the BLS collects be reorganized or reweighted in a way that makes the aggregate price statistic more informative about the inflation that the central bank hopes to control? I think the answer is yes. The CPI excluding food and energy is one very crude way. Food and energy prices are extremely volatile and certainly point to nonmonetary forces as their primary drivers.
In the early 1980s, Otto Eckstein defined core inflation as the trend growth rate of the cost of the factors of production—the cost of capital and wages. I would compare Eckstein's measure to the "inflation expectations" component that most economists (and presumably the FOMC) think "anchors" the inflation trend.
The sticky-price CPI
Brent Meyer and I have taken this idea to the CPI data. One way that prices appear to be different is in their observed "stickiness." That is, some prices tend to change frequently, while others do not. Prices that change only infrequently are likely to be more forward-looking than are those that change all the time. So we can take the CPI market basket and separate it into two groups of prices—prices that tend to be flexible and those that are "sticky" (a separation made possible by the work of Mark Bils and Peter J. Klenow).
Indeed, we find that the items in the CPI market basket that change prices frequently (about 30 percent of the CPI) are very responsive to changes in economic conditions, but do not seem to have a very forward-looking character. But the 70 percent of the market basket items that do not change prices very often—these are accounted for in the sticky-price CPI—appear to be largely immune to fluctuations in business conditions and are better predictors of future price behavior. In other words, we think that some "inflation-expectation" component exists to varying degrees within each price. By reweighting the CPI market basket in a way that amplifies the behavior of the most forward-looking prices, the sticky-price CPI gives policymakers a perspective on the inflation experience that the headline CPI can't.
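The mechanics of that split are simple to sketch. In the toy example below, every component name, weight, and price duration is invented for illustration (the actual sticky-price CPI uses BLS weights and the Bils–Klenow price-change frequencies); the 4.3-month cutoff reflects the published sticky-price CPI's threshold, but treat all other numbers as assumptions:

```python
# Illustrative sticky/flexible split of a CPI-style basket.
# Weights and price durations are invented for this sketch; the real
# sticky-price CPI uses BLS weights and Bils-Klenow frequencies.
components = {
    # name: (basket weight, average months between price changes)
    "Gasoline":         (0.05, 0.7),
    "Fresh vegetables": (0.01, 1.0),
    "Airfare":          (0.01, 2.0),
    "Rent":             (0.07, 11.0),
    "Medical services": (0.05, 14.0),
    "Haircuts":         (0.01, 25.0),
}

STICKY_CUTOFF = 4.3  # months; less-frequent changers count as "sticky"

def split_and_renormalize(components, cutoff=STICKY_CUTOFF):
    """Partition the basket into flexible and sticky groups, then
    rescale each group's weights so they sum to one."""
    flexible = {k: w for k, (w, months) in components.items() if months <= cutoff}
    sticky = {k: w for k, (w, months) in components.items() if months > cutoff}

    def renorm(group):
        total = sum(group.values())
        return {k: w / total for k, w in group.items()}

    return renorm(flexible), renorm(sticky)

flexible_w, sticky_w = split_and_renormalize(components)

def reweighted_index(weights, price_changes):
    """Aggregate annualized component price changes under a weight set."""
    return sum(weights[k] * price_changes[k] for k in weights)
```

The point of the renormalization is that each subindex remains a proper weighted average, so the sticky-price series can be read on the same scale as the headline CPI.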
Here is what monthly changes in the sticky-price CPI look like compared to the all-items CPI and the traditional "core" CPI.
Let me describe another, more radical example of how we might think about reweighting the CPI market basket to measure inflation—a way of thinking that is very different from the expenditure-basket approach the BLS uses to measure the cost of living.
If we assume that inflation is ultimately a monetary event and, moreover, that the signal of this monetary inflation can be found in all prices, then we might use statistical techniques to help us extract that signal from a large collection of price data. The famous early-20th-century economist Irving Fisher described the problem as trying to track a swarm of bees by abstracting from the individual, seemingly chaotic behavior of any particular bee.
Cecchetti and I experimented along these lines to measure a common signal running through the CPI data. The basic idea of our approach was to take the component data that the BLS supplied, make a few simple identifying assumptions, and let the data itself determine the appropriate weighting structure of the inflation estimate. The signal-extraction method we chose was a dynamic-factor index approach, and while we didn't pursue that work much further, others did, using more sophisticated and less restrictive signal-extraction methods. Perhaps most notable is the work of Ricardo Reis and Mark Watson.
To give you a flavor of the approach, consider the "first principal component" of the CPI price-change data. The first principal component of a data series is a statistical combination of the data that accounts for the largest share of their joint movement (or variance). It's a simple, statistically shared component that runs through all the price data.
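For intuition, here is a small simulated version of that extraction. The data are artificial (a common "signal" plus idiosyncratic noise of varying size), not actual CPI components, and standardizing each series before the eigendecomposition is a choice made for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial monthly price changes: 8 "components" over 120 months,
# each equal to a common signal plus idiosyncratic noise.
T, N = 120, 8
signal = rng.normal(2.0, 1.0, size=T)
noise_sd = np.linspace(0.5, 6.0, N)  # later components are much noisier
X = signal[:, None] + rng.normal(0.0, 1.0, size=(T, N)) * noise_sd

# First principal component of the standardized data: the eigenvector
# of the correlation matrix with the largest eigenvalue.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = eigvecs[:, -1]                 # loadings on the first PC
weights = loadings / loadings.sum()       # rescale so weights sum to one

common_signal = Z @ weights               # the extracted series
variance_share = eigvals[-1] / eigvals.sum()
```

Notice that the data, not any expenditure survey, choose the weights: the low-noise components, whose movements track the common signal closely, earn the biggest loadings, while the noisiest series are largely ignored.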
This next chart shows the first principal component of the CPI price data, in relation to the headline CPI and the core CPI.
Again, this is a very different animal than what the folks at the BLS are trying to measure. In fact, the weights used to produce this particular common signal in the price data bear little similarity to the expenditure weights that make up the market baskets that most people buy. And why should they? The idea here doesn't depend on how important something is to the well-being of any individual, but rather on whether the movement in its price seems to be similar or dissimilar to the movements of all the other prices.
In the table below, I report the weights (or relative importance) of a select group of CPI components and the weights they would get on the basis of their contribution to the first principal component.
While some criticize the CPI because it overweights housing from a cost-of-living perspective, it may be these housing components that ought to be given the greatest consideration when we think about the inflation that the central bank controls. Likewise, according to this approach, restaurant costs, motor vehicle repairs, and even a few food components should be taken pretty seriously in the measurement of a common inflation signal running through the price data.
And what price movements does this approach say we ought to ignore? Well, gasoline prices for one. But movements in the prices of medical care commodities, communications equipment, and tobacco products also appear to move in ways that are largely disconnected from the common thread in prices that runs through the CPI market basket.
But this and other measures of "core" inflation are very much removed from the cost changes that people experience on a monthly basis. Does that cause a communications problem for the Federal Reserve? This will be the subject of my final post.
By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department
June 23, 2014
Torturing CPI Data until They Confess: Observations on Alternative Measures of Inflation (Part 1)
On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.
In this, and the following two blogs, I'll be posting a modestly edited version of that talk. A full version of my prepared remarks will be posted along with the third installment of these posts.
The ideas expressed in these blogs and the related speech are my own, and do not necessarily reflect the views of the Federal Reserve Banks of Atlanta or Cleveland.
Part 1: The median CPI and other trimmed-mean estimators
A useful place to begin this conversation, I think, is with the following chart, which shows the monthly change in the Consumer Price Index (CPI) (through April).
The monthly CPI often swings between a negative reading and a reading in excess of 5 percent. In fact, in only about one-third of the readings over the past 16 years was the monthly, annualized, seasonally adjusted change in the CPI within a percentage point of 2 percent, which is the FOMC's longer-term inflation target. (Officially, the FOMC's target is based on the Personal Consumption Expenditures price index, but these and related observations hold for that price index equally well.)
How should the central bank think about its price-stability mandate within the context of these large monthly CPI fluctuations? For example, does April's 3.2 percent CPI increase argue that the FOMC ought to do something to beat back the inflationary threat? I don't speak for the FOMC, but I doubt it. More likely, there were some unusual price movements within the CPI's market basket that can explain why the April CPI increase isn't likely to persist. But presuming that one can distinguish the price movements we should pay attention to from those we should ignore is risky business.
The Economist retells a conversation with Stephen Roach, who in the 1970s worked for the Federal Reserve under Chairman Arthur Burns. Roach remembers that when oil prices surged around 1973, Burns asked Federal Reserve Board economists to strip those prices out of the CPI "to get a less distorted measure. When food prices then rose sharply, they stripped those out too—followed by used cars, children's toys, jewellery, housing and so on, until around half of the CPI basket was excluded because it was supposedly 'distorted'" by forces outside the control of the central bank. The story goes on to say that, at least in part because of these actions, the Fed failed to spot the breadth of the inflationary threat of the 1970s.
I have a similar story. I remember a morning in 1991 at a meeting of the Federal Reserve Bank of Cleveland's board of directors. I was welcomed to the lectern with, "Now it's time to see what Mike is going to throw out of the CPI this month." It was an uncomfortable moment for me that had a lasting influence. It was my motivation for constructing the Cleveland Fed's median CPI.
I am a reasonably skilled reader of a monthly CPI release. And since I approached each monthly report with a pretty clear idea of what the actual rate of inflation was, it was always pretty easy for me to look across the items in the CPI market basket and identify any offending—or "distorted"—price change. Stripping these items from the price statistic revealed the truth—and confirmed that I was right all along about the actual rate of inflation.
Let me show you what I mean by way of the April CPI report. The next chart shows the annualized percentage change for each component in the CPI for that month. These are shown on the horizontal axis. The vertical axis shows the weight given to each of these price changes in the computation of the overall CPI. Taken as a whole, the CPI jumped 3.2 percent in April. But out there on the far right tail of this distribution are gasoline prices. They rose about 32 percent for the month. If you subtract out gasoline from the April CPI report, you get an increase of 2.1 percent. That's reasonably close to price stability, so we can stop there—mission accomplished.
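As an aside, "subtracting out" a component is not a simple difference of the two numbers: you remove the item's weighted contribution and then renormalize the remaining weights. A sketch of the arithmetic follows; the roughly 3.7 percent gasoline weight is an illustrative assumption chosen to be consistent with the figures above (the official relative importance comes from the BLS):

```python
def ex_item_inflation(headline, item_change, item_weight):
    """Aggregate inflation excluding one item: strip out the item's
    weighted contribution, then renormalize the remaining weight."""
    return (headline - item_weight * item_change) / (1.0 - item_weight)

# April figures from the text: headline CPI +3.2% and gasoline +32%
# (both annualized). The 3.7% weight is an illustrative assumption.
ex_gasoline = ex_item_inflation(headline=3.2, item_change=32.0, item_weight=0.037)
```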
But here's the thing: there is no such thing as a "nondistorted" price. All prices are being influenced by market forces and, once influenced, are also influencing the prices of all the other goods in the market basket.
What else is out there on the tails of the CPI price-change distribution? Lots of stuff. About 17 percent of things people buy actually declined in price in April while prices for about 13 percent of the market basket increased at rates above 5 percent.
But it's not just the tails of this distribution that are worth thinking about. Near the center of this price-change distribution is a very high proportion of things people buy. For example, price changes within the fairly narrow range of between 1.5 percent and 2.5 percent accounted for about 26 percent of the overall CPI market basket in the April report.
The April CPI report is hardly unusual. In a typical month we see a very wide range of price changes commingled with a large share of price changes that sit very near the center of the price-change distribution. Statisticians call this a distribution with a high level of "excess kurtosis."
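Excess kurtosis is just the fourth standardized moment minus 3, so it is zero for a normal distribution and large for a peaked, long-tailed one. A quick numerical sketch on simulated data (the mixture below is an invented stand-in for a price-change distribution, not actual CPI data):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
normal_draws = rng.normal(size=100_000)

# A stylized price-change distribution: most changes drawn from a narrow
# distribution near the center, a few from a much wider one (the tails).
peaked = np.where(rng.random(100_000) < 0.9,
                  rng.normal(2.0, 0.5, 100_000),
                  rng.normal(2.0, 8.0, 100_000))
```

The mixture's excess kurtosis comes out far above zero, which is the statistical signature of the peaked-center, long-tailed shape described above.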
The following chart shows what an average monthly CPI price report looks like. The point of this chart is to convince you that the unusual distribution of price changes we saw in the April CPI report is standard fare. A very high proportion of price changes within the CPI market basket tends to remain close to the center of the distribution, and those that don't tend to be spread over a very wide range, resulting in what appear to be very elongated tails.
And this characterization of price changes is not at all special to the CPI. It characterizes every major price aggregate I have ever examined, including the retail price data for Brazil, Argentina, Mexico, Colombia, South Africa, Israel, the United Kingdom, Sweden, Canada, New Zealand, Germany, Japan, and Australia.
Why do price change distributions have peaked centers and very elongated tails? At one time, Steve Cecchetti and I speculated that the costs of unplanned price changes—called menu costs—discourage all but the most significant price adjustments. These menu costs could create a distribution of observed price changes where a large number of planned price adjustments occupy the center of the distribution, commingled with extreme, unplanned price adjustments that stretch out along its tails.
But absent a clear economic rationale for this unusual distribution, it presents both a measurement problem and an immediate remedy. The problem is that the long tails tend to cause the CPI (and other weighted averages of prices) to fluctuate pretty widely from month to month, even though they are, in a statistical sense, tethered to the large proportion of price changes that lie in the center of the distribution. The remedy is equally direct: trim away the tails and let that well-anchored center speak for itself.
So my belated response to the Cleveland board of directors was the computation of the weighted median CPI (which I first produced with Chris Pike). This statistic considers only the middle-most monthly price change in the CPI market basket, which becomes the representative aggregate price change. The median CPI is immune to the obvious analyst bias that I had been guilty of, while greatly reducing the volatility in the monthly CPI report in a way that I thought gave the Federal Reserve Bank of Cleveland a clearer reading of the central tendency of price changes.
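Computing a weighted median from component data is straightforward: sort the component price changes, accumulate basket weights, and take the price change at which the running sum first crosses half the total weight. A sketch with invented components and weights (not actual BLS data):

```python
def weighted_median(changes, weights):
    """Price change of the component at the 50th percentile of the
    weight-ordered distribution of component price changes."""
    pairs = sorted(zip(changes, weights))
    total = sum(weights)
    cum = 0.0
    for change, w in pairs:
        cum += w
        if cum >= 0.5 * total:
            return change

# Invented annualized component changes and basket weights (sum to 1).
changes = [32.0, 6.0, 2.4, 2.0, 1.8, -1.5, -4.0]
weights = [0.04, 0.10, 0.30, 0.25, 0.16, 0.10, 0.05]
```

In this example the weighted median is 2.0 percent: the 32 percent spike and the outright declines never enter the statistic, yet no analyst had to decide which items were "distorted."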
Cecchetti and I pushed the idea to a range of trimmed-mean estimators, for which the median is simply an extreme case. Trimmed-mean estimators trim some proportion of the tails from this price-change distribution and reaggregate the interior remainder. Others extended this idea to asymmetric trims for skewed price-change distributions, as Scott Roger did for New Zealand, and to other price statistics, like the Federal Reserve Bank of Dallas's trimmed-mean PCE inflation rate.
How much one should trim from the tails isn't entirely obvious. We settled on the 16 percent trimmed mean for the CPI (that is, trimming the highest and lowest 8 percent from the tails of the CPI's price-change distribution) because this is the proportion that produced the smallest monthly volatility in the statistic while preserving the same trend as the all-items CPI.
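The mechanics of a trimmed mean can be sketched directly: sort components by price change, discard a fixed share of basket weight from each tail (8 percent per tail for the 16 percent trimmed mean), and reaggregate what remains, prorating any component that straddles a trim boundary. The component data below are invented for illustration:

```python
def trimmed_mean(changes, weights, trim=0.08):
    """Weighted mean of price changes after trimming `trim` of the total
    weight from each tail (trim=0.08 gives the 16% trimmed mean)."""
    pairs = sorted(zip(changes, weights))
    total = sum(w for _, w in pairs)
    lo, hi = trim * total, (1.0 - trim) * total
    cum = numerator = kept = 0.0
    for change, w in pairs:
        # Portion of this component's weight inside the retained band.
        inside = max(0.0, min(cum + w, hi) - max(cum, lo))
        numerator += inside * change
        kept += inside
        cum += w
    return numerator / kept

# Invented annualized component changes and basket weights (sum to 1).
changes = [32.0, 6.0, 2.4, 2.0, 1.8, -1.5, -4.0]
weights = [0.04, 0.10, 0.30, 0.25, 0.16, 0.10, 0.05]
headline = sum(c * w for c, w in zip(changes, weights))  # about 3.0
```

With these numbers the headline average is pulled above 3 percent by the single extreme increase, while the 16 percent trimmed mean sits near 2.1 percent, illustrating how the trim mutes the tails without hand-picking which items to exclude.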
The following chart shows the monthly pattern of the median CPI and the 16 percent trimmed-mean CPI relative to the all-items CPI. Both measures reduce the monthly volatility of the aggregate price measure by a lot—and even more so than by simply subtracting from the index the often-offending food and energy items.
But while the median CPI and the trimmed-mean estimators are often referred to as "core" inflation measures (and I am guilty of this myself), these measures are very different from the CPI excluding food and energy.
In fact, I would not characterize these trimmed-mean measures as "exclusionary" statistics at all. Unlike the CPI excluding food and energy, the median CPI and the assortment of trimmed-mean estimators do not fundamentally alter the underlying weighting structure of the CPI from month to month. As long as the CPI price change distribution is symmetrical, these estimators are designed to track along the same path as that laid out by the headline CPI. It's just that these measures are constructed so that they follow that path with much less volatility (the monthly variance in the median CPI is about 95 percent smaller than that of the all-items CPI and about 25 percent smaller than that of the CPI less food and energy).
I think of the trimmed-mean estimators and the median CPI as being more akin to seasonal adjustment than they are to the concept of core inflation. (Indeed, early on, Cecchetti and I showed that the median CPI and associated trimmed-mean estimates also did a good job of purging the data of its seasonal nature.) The median CPI and the trimmed-mean estimators are noise-reduced statistics where the underlying signal being identified is the CPI itself, not some alternative aggregation of the price data.
This is not true of the CPI excluding food and energy, nor necessarily of other so-called measures of "core" inflation. Core inflation measures alter the weights of the price statistic so that they can no longer pretend to be approximations of the cost of living. They are different constructs altogether.
The idea of "core" inflation is one of the topics of tomorrow's post.
By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department
June 20, 2014
The Wrong Question?
Just before Wednesday's confirmation from Fed Chairwoman Janet Yellen that the Federal Open Market Committee (FOMC) does indeed still see slack in the labor market, Jon Hilsenrath and Victoria McGrane posted a Wall Street Journal article calling attention to the state of the debate:
Nearly four-fifths of those who became long-term unemployed during the worst period of the downturn have since migrated to the fringes of the job market, a recent study shows, rarely seeking work, taking part-time posts or bouncing between unsteady jobs. Only one in five, according to the study, has returned to lasting full-time work since 2008.
Deliberations over the nature of the long-term unemployed are particularly lively within the Federal Reserve.... Fed officials face a conundrum: Should they keep trying to spur economic growth and hiring by holding short-term interest rates near zero, or will those low rates eventually spark inflation without helping those long out of work?
The article goes on to provide a nice summary of the ongoing back-and-forth among economists on whether the key determinant of slack in the labor market is the long-term unemployed or the short-term unemployed. Included in that summary, checking in on the side of "both," is research by Chris Smith at the Federal Reserve Board of Governors.
We are fans of Smith's work, but think that the Wall Street Journal summary buries its own lede by focusing on the long-term/short-term unemployment distinction rather than on what we think is the more important part of the story: In Hilsenrath and McGrane's words, those "taking part-time posts."
We are specifically talking about the group officially designated as part-time for economic reasons (PTER). This is the group of people in the U.S. Bureau of Labor Statistics' Household Survey who report they worked less than 35 hours in the reference week due to an economic reason such as slack work or business conditions.
We have previously noted that the long-term unemployed have been disproportionately landing in PTER jobs. We have also previously argued that PTER emerges as a key negative influence on earnings over the course of the recovery, and remains so (at least as of the end of 2013). For reference, here is a chart describing the decomposition from our previous post (which corrects a small error in the data definitions):
Our conclusion, clearly identified in the chart, was that short-term unemployment and PTER have been statistically responsible for the tepid growth in wages over the course of the recovery. What's more, as short-term unemployment has effectively returned to prerecession levels, PTER has increasingly become the dominant negative influence.
Our analysis was methodologically similar to Smith's—his work and the work represented in our previous post were both based on annual state-level microdata from the Current Population Survey, for example. They were not exactly comparable, however, because of different wage variables—Smith used the median wage while we use a composition-adjusted weighted average—and different regression controls.
Here is what we get when we impose the coefficient estimates from Smith's work on our attempt to replicate his wage definition:
Some results change. The unemployment variables, short-term or long-term, no longer show up as a drag on wage growth. The group of workers designated as "discouraged" does appear to be pulling down wage growth, in ways that are distinct from the larger group of marginally attached. (That is in contrast to arguments some of us have previously made in macroblog that looked at the propensity of the marginally attached to find employment.)
It is not unusual to see results flip around a bit in statistical work as this or that variable is changed, or as the structure of the empirical specifications is tweaked. It is a robustness issue that should always be acknowledged. But what does appear to emerge as a consistent negative influence on wage growth? PTER.
None of this means that the short-term/long-term unemployment debate is unimportant. The statistics are not strong enough for us to rule things out categorically. Furthermore, that debate has raised some really interesting questions, such as Glenn Rudebusch and John Williams's recent suggestion that the definition of economic slack relevant for the FOMC's employment mandate may be different from the definition appropriate to the FOMC's price stability mandate.
Our message is pretty simple and modest, but we think important. Whatever your definition of slack, it really ought to include PTER. If not, you are probably asking the wrong question.
By Dave Altig, executive vice president and research director, and
Pat Higgins, a senior economist, both of the Atlanta Fed's research department
June 09, 2014
Looking Beyond the Job-Finding Rate: The Difficulty of Finding Full-Time Work
Despite Friday's report of a further solid increase in payroll employment, the utilization picture for the official labor force remains mixed. The rates of short-term and long-term unemployment, as well as the share of the labor force working part time who want to work full time (a cohort also referred to as working part time for economic reasons, or PTER), rose during the recession.
The short-term unemployment rate has since returned to levels experienced before the recession. In contrast, longer-term unemployment and involuntary part-time work have declined, but both remain well above prerecession levels (see the chart).
Some of the postrecession decline in the short-term unemployment rate has not resulted from the short-term unemployed finding a job, but rather the opposite—they failed to get a job and became longer-term unemployed. Before the recession, unemployed workers who said they had been looking for a job for more than half a year accounted for about 18 percent of all unemployed workers. Currently, that share is close to 36 percent.
Moreover, job finding by unemployed workers might not completely reflect a decline in the amount of slack labor resources if some want full-time work but only find part-time work (that is, are working PTER). In this post, we investigate the ability of the unemployed to become fully employed relative to their experience before the Great Recession.
The job-finding rate of unemployed workers (the share of unemployed who are employed the following month) generally decreases toward zero with the length of the unemployment spell. Job-finding rates fell for all durations of unemployment in the recession.
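Mechanically, a job-finding rate by duration comes from matched month-to-month individual records: among people unemployed in one month within a duration bucket, count the share employed the following month. A stylized sketch (the records below are invented, not actual Current Population Survey microdata):

```python
# Each record: (duration bucket, status this month, status next month),
# where "U" = unemployed, "E" = employed, "N" = not in the labor force.
records = [
    ("<5 wks",   "U", "E"), ("<5 wks",   "U", "E"),
    ("<5 wks",   "U", "U"), ("<5 wks",   "U", "N"),
    ("5-26 wks", "U", "E"), ("5-26 wks", "U", "U"),
    ("5-26 wks", "U", "U"), ("5-26 wks", "U", "U"),
    (">26 wks",  "U", "U"), (">26 wks",  "U", "U"),
    (">26 wks",  "U", "U"), (">26 wks",  "U", "N"),
]

def job_finding_rate(records, bucket):
    """Share of the unemployed in a duration bucket who are employed
    in the following month."""
    matched = [s2 == "E" for b, s1, s2 in records if b == bucket and s1 == "U"]
    return sum(matched) / len(matched)
```

In this invented sample the rate falls from 50 percent for the shortest spells to zero for the longest, mimicking the downward-sloping pattern described above.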
Since the end of the recession, job-finding rates have improved, especially for shorter-term unemployed, but remain well below prerecession levels. The overall job-finding rate stood at close to 28 percent in 2007 and was about 20 percent for the first four months of 2014. The chart below shows the job-finding rates for select years by unemployment duration:
What about the jobs that the unemployed find? Most unemployed workers want to work full-time hours (at least 35 hours a week). In 2007, around 75 percent of job finders wanted full-time work and either got full-time work or worked PTER (the remainder worked part time for noneconomic reasons). For the first four months of 2014, the share wanting full-time work was also about 75 percent. But the portion of job finders wanting full-time work and only finding part-time work increased from about 22 percent in 2007 to almost 30 percent in 2014, and this job-finding underutilization share has become especially high for the longer-term unemployed.
The chart below displays the job-finding underutilization share for select years by unemployment duration. (You can also read further analysis of PTER dynamics by our colleagues at the Federal Reserve Board of Governors.)
Finding a job is one thing, but finding a satisfactory job is another. Since the end of the recession, the number of unemployed has declined, thanks in part to a gradually improving rate of job finding. But the job-finding rate is still relatively low, and the ability of an unemployed job seeker who wants to work full-time to actually find full-time work remains a significant challenge.
John Robertson, a vice president and senior economist and
Ellyn Terry, a senior economic analyst, both of the Atlanta Fed's research department
June 02, 2014
How Discouraged Are the Marginally Attached?
Of the many statistical barometers of the U.S. economy that we monitor here at the Atlanta Fed, there are few that we await more eagerly than the monthly report on employment conditions. The May 2014 edition arrives this week and, like many others, we will be more interested in the underlying details than in the headline job growth or unemployment numbers.
One of those underlying details—the state of the pool of “discouraged” workers (or, maybe more precisely, potential workers)—has garnered special attention lately in the wake of the relatively dramatic decline in the ranks of the official labor force, a decline depicted in the April employment survey from the U.S. Bureau of Labor Statistics. That attention included some notable commentary from Federal Reserve officials.
Federal Reserve Bank of New York President William Dudley, for example, recently suggested that a sizeable part of the decline in labor force participation since 2007 can be tied to discouraged workers exiting the workforce. This suggestion follows related comments from Federal Reserve Chair Janet Yellen in her press conference following the March meeting of the Federal Open Market Committee:
So I have talked in the past about indicators I like to watch or I think that are relevant in assessing the labor market. In addition to the standard unemployment rate, I certainly look at broader measures of unemployment… Of course, I watch discouraged and marginally attached workers… it may be that as the economy begins to strengthen, we could see labor force participation flatten out for a time as discouraged workers start moving back into the labor market. And so that's something I'm watching closely.
What may not be fully appreciated by those not steeped in the details of the employment statistics is that discouraged workers are actually a subset of “marginally attached” workers. Among the marginally attached—individuals who have actively sought employment within the most recent 12-month period but not during the most recent month—are indeed those who report that they are out of the labor force because they are discouraged. But the marginally attached also include those who have not recently sought work because of family responsibilities, school attendance, poor health, or other reasons.
In fact, most of the marginally attached are not classified (via self-reporting) as discouraged (see the chart):
At the St. Louis Fed, B. Ravikumar and Lin Shao recently published a report containing some detailed analysis of discouraged workers and their relationship to the labor force and the unemployment rate. As Ravikumar and Shao note,
Since discouraged workers are not actively searching for a job, they are considered nonparticipants in the labor market—that is, they are neither counted as unemployed nor included in the labor force.
More importantly, the authors point out that discouraged workers tend to reenter the labor force at relatively high rates:
Since December 2007, on average, roughly 40 percent of discouraged workers reenter the labor force every month.
Therefore, it seems appropriate to count some fraction of the jobless population designated as discouraged (and out of the labor force) as among the officially unemployed.
We believe this logic should be extended to the entire group of marginally attached. As we've pointed out in the past, the marginally attached group as a whole also has a roughly 40 percent transition rate into the labor force. Even though more of the marginally attached are discouraged today than before the recession, the changing distribution has not affected the overall transition rate of the marginally attached into the labor force.
In fact, in terms of the propensity to flow into employment or officially measured unemployment, there is little to distinguish the discouraged from those who are marginally attached but who have other reasons for not recently seeking a job (see the chart):
What we take from these data is that, as a first pass, when we are talking about discouraged workers' attachment to the labor market, we are talking more generally about the marginally attached. And vice versa. Any differences in the demographic characteristics between discouraged and nondiscouraged marginally attached workers do not seem to materially affect their relative labor market attachment and ability to find work.
Sometimes labels matter. But in the case of discouraged versus nondiscouraged marginally attached workers—not so much.
By Dave Altig, executive vice president and research director,
John Robertson, a vice president and senior economist, and
Ellyn Terry, a senior economic analyst, all of the Atlanta Fed's research department
May 20, 2014
Where Do Young Firms Get Financing? Evidence from the Atlanta Fed Small Business Survey
During last week's "National Small Business Week," Janet Yellen delivered a speech titled "Small Business and the Recovery," in which she outlined how the Fed's low-interest-rate policies have helped small businesses.
By putting downward pressure on interest rates, the Fed is trying to make financial conditions more accommodative—supporting asset values and lower borrowing costs for households and businesses and thus encouraging the spending that spurs job creation and a stronger recovery.
In general, I think most small businesses in search of financing would agree with the "rising tide lifts all boats" hypothesis. When times are good, strong demand for goods and services helps provide a solid cash flow, which makes small businesses more attractive to lenders. At the same time, rising equity and housing prices support collateral used to secure financing.
Reduced economic uncertainty and strong income growth can help those in search of equity financing, as investors become more willing and able to open their pocketbooks. But even when the economy is strong, there is a business segment that's had an especially difficult time getting financing. And as we've highlighted in the past, this is also the segment that has had the highest potential to contribute to job growth—namely, young businesses.
Why is it hard for young firms to find credit or financing more generally? At least two reasons come to mind. First, lenders tend to take a rearview-mirror approach to assessing commercial creditworthiness, and a young business has little track record to speak of. Moreover, lenders have good reason to be cautious about a very young firm: half of all young firms don't make it past their fifth year. Second, young businesses typically ask for relatively small amounts of money. (See the survey results in the Credit Demand section under Financing Conditions.) The fixed cost of the detailed credit analysis (underwriting) of a loan can make lenders decide that engaging with these young firms is not worth their while.
While difficult, obtaining financing is not impossible. Over the past two years, half of the small firms under six years old that participated in our survey (latest results available) were able to obtain at least some of the financing requested across all their applications. This 50 percent figure for young firms contrasts sharply with the 78 percent of more mature small firms that found at least some credit.
This leads to two questions:
- What types of financing sources are young firms using?
- How are the available financing options changing?
To answer the first question, we pooled all of the financing applications submitted by small firms in our semiannual survey over the past two years and examined how likely they were to apply for financing and be approved across a variety of financing products.
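Pooling the applications this way amounts to a simple two-way tabulation: the mix of products each age group applies for, and the approval rate within each group-product cell. A minimal sketch in Python—the records, field names, and values below are illustrative assumptions, not the actual survey microdata:

```python
from collections import Counter

# Hypothetical pooled application-level records:
# (firm age group, financing product, at least partly approved?)
apps = [
    ("young", "bank_loan", False), ("young", "credit_card", True),
    ("young", "friends_family", True), ("mature", "bank_loan", True),
    ("mature", "bank_loan", True), ("mature", "trade_credit", True),
]

n_group = Counter(g for g, _, _ in apps)            # applications per age group
n_cell = Counter((g, p) for g, p, _ in apps)        # applications per group/product
ok_cell = Counter((g, p) for g, p, a in apps if a)  # approvals per group/product

# Application mix: share of a group's applications going to each product
mix = {cell: n / n_group[cell[0]] for cell, n in n_cell.items()}
# Approval rate: share of a cell's applications at least partly approved
approval = {cell: ok_cell[cell] / n for cell, n in n_cell.items()}
```

With real survey weights, each count would be a weighted sum rather than a tally, but the group-by-product structure is the same.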
Applications and approvals
While most mature firms (more than five years old) seek—and receive—financing from banks, young firms have about as many approved applications for credit cards, vendor or trade credit, or financing from friends or family as they do for bank credit.
The chart below shows that about two-thirds of applications on behalf of mature firms were for commercial loans and lines of credit at banks and about 60 percent of those applications were at least partially approved. In comparison, fewer than half of applications by young firms were for a commercial bank loan or line of credit, fewer than a third of which were approved. Further, about half of the applications by mature firms were met in full compared to less than one-fifth of applications by young firms.
In the survey, we also ask what type of bank the firm applied to (large national bank, regional bank, or community bank). It turns out this distinction matters little for the young firms in our sample—the vast majority are denied regardless of the size of the bank. However, after the five-year mark, approval is highest for firms applying at the smallest banks and lowest for large national banks. For example, firms that are 10 years or older that applied at a community bank, on average, received most of the amount requested, and those applying at large national banks received only some of the amount requested.
Half of young firms and about one-fifth of mature firms in the survey reported receiving none of the credit requested over all their applications. How are firms that don't receive credit affected? According to a 2013 New York Fed small business credit survey, 42 percent of firms that were unsuccessful at obtaining credit said it limited their business expansion, 16 percent said they were unable to complete an existing order, and 16 percent indicated that it prevented hiring.
This leads to the next couple of questions: How are the available options for young firms changing? Is the market evolving in ways that can better facilitate lending to young firms?
When thinking about the places where young firms seem to be the most successful in obtaining credit, equity investments or loans from friends and family ranked the highest according to the Atlanta Fed survey, but this source is not highly used (see the first chart). Is the low usage rate a function of having only so many "friends and family" to ask? If it is, then perhaps alternative approaches such as crowdfunding could be a viable way for young businesses seeking small amounts of funds to broaden their financing options. Interestingly, crowdfunding serves not just as a means to raise funds, but also as a way to reach more customers and potential business partners.
A variety of new lending sources, including crowdfunding, were featured at the New York Fed's Small Business Summit ("Filling the Gaps") last week. One major theme of the summit was that credit providers are increasingly using technology to decrease the credit search costs for the borrower and lower the underwriting costs of the lender. And when it comes to matching borrowers with lenders, there does appear to be room for improvement. The New York Fed's small business credit survey, for example, showed that small firms looking for credit spent an average of 26 hours searching during the first half of 2013. Some of the financial services presented at the summit used electronic financial records and relevant business data, including business characteristics and credit scores, to better match lenders and borrowers. Another theme to come out of the summit was the importance of transparency and education about the lending process. This was considered to be especially important at a time when the small business lending landscape is changing rapidly.
The full results of the Atlanta Fed's Q1 2014 Small Business Survey are available on the website.
By Ellyn Terry, an economic policy analysis specialist in the Atlanta Fed's research department
May 16, 2014
Which Flavor of QE?
Yesterday's report on consumer price inflation from the U.S. Bureau of Labor Statistics moved the needle a bit on inflation trends—but just a bit. Meanwhile, the European Central Bank appears to be locked and loaded to blast away at its own (low) inflation concerns. From the Wall Street Journal:
The European Central Bank is ready to loosen monetary policy further to prevent the euro zone from succumbing to an extended period of low inflation, its vice president said on Thursday.
"We are determined to act swiftly if required and don't rule out further monetary policy easing," ECB Vice President Vitor Constancio said in a speech in Berlin.
One of the favorite further measures is apparently charging financial institutions for funds deposited with the central bank:
On Wednesday, the ECB's top economist, Peter Praet, in an interview with German newspaper Die Zeit, said the central bank is preparing a number of measures to counter low inflation. He mentioned a negative rate on deposits as a possible option in combination with other measures.
I don't presume to know enough about financial institutions in Europe to weigh in on the likely effectiveness of such an approach. I do know that we have found reasons to believe that there are limits to such a tool in the U.S. context, as the New York Fed's Ken Garbade and Jamie McAndrews pointed out a couple of years back.
In part, the desire to think about an option such as negative interest rates on deposits appears to be driven by considerable skepticism about deploying more quantitative easing, or QE.
A drawback, in my view, of general discussions about the wisdom and effectiveness of large-scale asset purchase programs is that these policies come in many flavors. My belief, in fact, is that the Fed versions of QE1, QE2, and QE3 can be thought of as three quite different programs, useful to address three quite distinct challenges. You can flip through the slide deck of a presentation I gave last week at a conference sponsored by the Global Interdependence Center, but here is the essence of my argument:
- QE1, as emphasized by former Fed Chair Ben Bernanke, was first and foremost credit policy. It was implemented when credit markets were still in a state of relative disarray and, arguably, segmented to some significant degree. Unlike credit policy, the focus of traditional or pure QE "is the quantity of bank reserves" (to use the Bernanke language). Although QE1 per se involved asset purchases in excess of $1.7 trillion, the Fed's balance sheet rose by less than $300 billion during the program's span. The reason, of course, is that the open-market purchases associated with QE1 largely just replaced expiring loans from the emergency lending facilities in place through most of 2008. In effect, with QE1 the Fed replaced one type of credit policy with another.
- QE2, in contrast, looks to me like pure, traditional quantitative easing. It was a good old-fashioned Treasury-only asset purchase program, and the monetary base effectively increased in lockstep with the size of the program. Importantly, the salient concern of the moment was a clear deterioration of market-based inflation expectations and—particularly worrisome to us at the Atlanta Fed—rising beliefs that outright deflation might be in the cards. In retrospect, old-fashioned QE appears to have worked to address the old-fashioned problem of influencing inflation expectations. In fact, the turnaround in expectations can be clearly traced to the Bernanke comments at the August 2010 Kansas City Fed Economic Symposium, indicating that the Federal Open Market Committee (FOMC) was ready and willing to pull the QE tool out of the kit. That was an early lesson in the power of forward guidance, which brings us to...
- ...QE3. I think it is a bit early to draw conclusions about the ultimate impact of QE3. I think you can contend that the Fed's latest large-scale asset purchase program has not had a large independent effect on interest rates or economic activity while still believing that QE3 has played an important role in supporting the economic recovery. These two seemingly contradictory opinions echo an argument suggested by Mike Woodford at the Kansas City Fed's Jackson Hole conference in 2012: QE3 was important as a signaling device in the early stages of the deployment of the FOMC's primary tool, forward guidance regarding the period of exceptionally low interest rates. I would in fact argue that the winding down of QE3 makes all the more sense when seen through the lens of a forward guidance tool that has matured to the point of no longer requiring the credibility "booster shot" of words put to action via QE.
All of this is to argue that QE, as practiced, is not a single policy, effective in all variants in all circumstances, which means that the U.S. experience of the past might not apply to another time, let alone another place. But as I review the record of the past seven years, I see evidence that pure QE worked pretty well precisely when the central concern was managing inflation expectations (and, hence, I would say, inflation itself).
By Dave Altig, executive vice president and research director of the Atlanta Fed
May 13, 2014
Today’s news brings another indication that low inflation rates in the euro area have the attention of the European Central Bank. From the Wall Street Journal (Update: via MarketWatch):
Germany's central bank is willing to back an array of stimulus measures from the European Central Bank next month, including a negative rate on bank deposits and purchases of packaged bank loans if needed to keep inflation from staying too low, a person familiar with the matter said...
This marks the clearest signal yet that the Bundesbank, which has for years been defined by its conservative opposition to the ECB's emergency measures to combat the euro zone's debt crisis, is fully engaged in the fight against super-low inflation in the euro zone using monetary policy tools...
Notably, these tools apparently do not include Fed-style quantitative easing:
But the Bundesbank's backing has limits. It remains resistant to large-scale purchases of public and private debt, known as quantitative easing, the person said. The Bundesbank has discussed this option internally but has concluded that with government and corporate bond yields already quite low in Europe, the purchases wouldn't do much good and could instead create financial stability risks.
Should we conclude that there is now a global consensus about the value and wisdom of large-scale asset purchases, a.k.a. QE? We certainly have quite a bit of experience with large-scale purchases now. But I think it is also fair to say that that experience has yet to yield firm consensus.
You probably don’t need much convincing that QE consensus remains elusive. But just in case, I invite you to consider the panel discussion we titled “Greasing the Skids: Was Quantitative Easing Needed to Unstick Markets? Or Has it Merely Sped Us toward the Next Crisis?” The discussion was organized for last month’s 2014 edition of the annual Atlanta Fed Financial Markets Conference.
Opinions among the panelists were, shall we say, diverse. You can view the entire session via this link. But if you don’t have an hour and 40 minutes to spare, here is the (less than) ten-minute highlight reel, wherein Carnegie Mellon Professor Allan Meltzer opines that Fed QE has become “a foolish program,” Jefferies LLC Chief Market Strategist David Zervos declares himself an unabashed “lover of QE,” and Federal Reserve Governor Jeremy Stein weighs in on some of the financial stability questions associated with very accommodative policy:
You probably detected some differences of opinion there. If that, however, didn’t satisfy your craving for unfiltered debate, click on through to this link to hear Professor Meltzer and Mr. Zervos consider some of Governor Stein’s comments on monitoring debt markets, regulatory approaches to pursuing financial stability objectives, and the efficacy of capital requirements for banks.
By Dave Altig, executive vice president and research director of the Atlanta Fed.
May 09, 2014
How Has Disability Affected Labor Force Participation?
You might be unaware that May is Disability Insurance Awareness Month. We weren’t aware of it until recently, but the issue of disability—as a reason for nonparticipation in the labor market—has been very much on our minds as of late. As we noted in a previous macroblog post, from the fourth quarter of 2007 through the end of 2013, the number of people claiming to be out of the labor force for reasons of illness or disability increased by almost 3 million (or 23 percent). The previous post also noted that the incidence of reported nonparticipation as a result of disability/illness is concentrated (unsurprisingly) in the age group from about 51 to 60.
In the past, we have examined the effects of the aging U.S. population on the labor force participation rate (LFPR). However, we have not yet specifically considered how much the aging of the population alone is responsible for the aforementioned increase in disability as a reason for dropping out of the labor force.
The following chart depicts over time the percent (by age group) reporting disability or illness as a reason for not participating in the labor force. Each line represents a different year, with the darkest line being 2013. The chart reveals a long-term trend of rising disability or illness as a reason for labor force nonparticipation for almost every age group.
The chart also shows that disability or illness is cited most often among people 51 to 65 years old—the current age of a large segment of the baby boomer cohort. In fact, the share of the population in this age group increased from 20 percent in 2003 to 25 percent in 2013.
How much can the change in demographics during the past decade explain the rise in disability or illness as a reason for not participating in the labor market? The answer seems to be: Not a lot.
Following an approach you may have seen in this post, we break the change in the share of people not participating in the labor force because of disability or illness into three components. One component measures the change resulting from shifts within age groups (the within effect). Another measures changes due to population shifts across age groups (the between effect). A third allows for correlation across the two effects (a covariance term). Here’s what you get:
To recap, only about one fifth of the decline in labor force participation as a result of reported illness or disability can be attributed to the population aging per se. A full three quarters appears to be associated with some sort of behavioral change.
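The within/between/covariance breakdown we use is a standard shift-share decomposition of an aggregate rate. A minimal sketch in Python—the age groups, population shares, and disability rates below are illustrative assumptions, not the actual CPS figures behind the chart:

```python
# Decompose the change in the aggregate rate sum(w*r), where w is each age
# group's population share and r is its rate of nonparticipation due to
# disability/illness.

def shift_share(w0, r0, w1, r1):
    """Split the change in sum(w*r) into within, between, and covariance parts."""
    groups = w0.keys()
    within = sum(w0[g] * (r1[g] - r0[g]) for g in groups)   # rates move, shares fixed
    between = sum(r0[g] * (w1[g] - w0[g]) for g in groups)  # shares move, rates fixed
    cov = sum((w1[g] - w0[g]) * (r1[g] - r0[g]) for g in groups)
    return within, between, cov

w0 = {"25-50": 0.60, "51-65": 0.40}   # start-period population shares (assumed)
r0 = {"25-50": 0.030, "51-65": 0.100} # start-period disability rates (assumed)
w1 = {"25-50": 0.55, "51-65": 0.45}   # end-period shares (assumed)
r1 = {"25-50": 0.035, "51-65": 0.120} # end-period rates (assumed)

within, between, cov = shift_share(w0, r0, w1, r1)
total = sum(w1[g] * r1[g] for g in w0) - sum(w0[g] * r0[g] for g in w0)
```

By construction, the three components sum exactly to the total change; the between effect is the part attributable to population aging alone.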
What is the source of this behavioral change? Our experiment can’t say. But given that those who drop out of the labor force for reasons of disability/illness tend not to return, it would be worth finding out. Here is one perspective on the issue.
You can find even more on this topic via the Human Capital Compendium.
By Dave Altig, research director and executive vice president at the Atlanta Fed, and
Ellyn Terry, a senior economic analyst in the Atlanta Fed's research department
April 28, 2014
New Data Sources: A Conversation with Google's Hal Varian
In recent years, there has been an explosion of new data coming from places like Google, Facebook, and Twitter. Economists and central bankers have begun to realize that these data may provide valuable insights into the economy that inform and improve the decisions made by policy makers.
As chief economist at Google and emeritus professor at UC Berkeley, Hal Varian is uniquely qualified to discuss the issues surrounding these new data sources. Last week he was kind enough to take some time out of his schedule to answer a few questions about these data, the benefits of using them, and their limitations.
Mark Curtis: You've argued that new data sources from Google can improve our ability to "nowcast." Can you describe what this means and how the enormous amount of data that Google collects can be used to better understand the present?
Hal Varian: The simplest definition of "nowcasting" is "contemporaneous forecasting," though I do agree with David Hendry that this definition is probably too simple. Over the past decade or so, firms have spent billions of dollars to set up real-time data warehouses that track business metrics on a daily level. These metrics could include retail sales (like Wal-Mart and Target), package delivery (UPS and FedEx), credit card expenditure (MasterCard's SpendingPulse), employment (Intuit's small business employment index), and many other economically relevant measures. We have worked primarily with Google data, because it's what we have available, but there are lots of other sources.
Curtis: The ability to "nowcast" is also crucially important to the Fed. In his December press conference, former Fed Chairman Ben Bernanke stated that the Fed may have been slow to acknowledge the crisis in part due to deficient real-time information. Do you believe that new data sources such as Google search data might be able to improve the Fed's understanding of where the economy is and where it is going?
Varian: Yes, I think that this is definitely a possibility. The real-time data sources mentioned above are a good starting point. Google data seems to be helpful in getting real-time estimates of initial claims for unemployment benefits, housing sales, and loan modification, among other things.
Curtis: Janet Yellen stated in her first press conference as Fed Chair that the Fed should use other labor market indicators beyond the unemployment rate when measuring the health of labor markets. (The Atlanta Fed publishes a labor market spider chart incorporating a variety of indicators.) Are there particular indicators that Google produces that could be useful in this regard?
Varian: Absolutely. Queries related to job search seem to be indicative of labor market activity. Interestingly, queries having to do with killing time also seem to be correlated with unemployment measures!
Curtis: What are the downsides or potential pitfalls of using these types of new data sources?
Varian: First, the real measures—like credit card spending—are probably more indicative of actual outcomes than search data. Search is about intention, and spending is about transactions. Second, there can be feedback from news media and the like that may distort the intention measures. A headline story about a jump in unemployment can stimulate a lot of "unemployment rate" searches, so you have to be careful about how you interpret the data. Third, we've only had one recession since Google has been available, and it was pretty clearly a financially driven recession. But there are other kinds of recessions having to do with supply shocks, like energy prices, or monetary policy, as in the early 1980s. So we need to be careful about generalizing too broadly from this one example.
Curtis: Given the proliferation of new data coming from Google, Twitter, and Facebook, do you think this will limit, or even make obsolete, the role of traditional government statistical agencies such as the Census Bureau and the Bureau of Labor Statistics? If not, do you believe there is potential for collaboration between these agencies and companies such as Google?
Varian: The government statistical agencies are the gold standard for data collection. It is likely that real-time data can be helpful in providing leading indicators for the standard metrics, and supplementing them in various ways, but I think it is highly unlikely that they will replace them. I hope that the private and public sector can work together in fruitful ways to exploit new sources of real-time data in ways that are mutually beneficial.
Curtis: A few years ago, former Fed Chairman Bernanke challenged researchers when he said, "Do we need new measures of expectations or new surveys? Information on the price expectations of businesses—who are, after all, the price setters in the first instance—as well as information on nominal wage expectations is particularly scarce." Do data from Google have the potential to fill this need?
Varian: We have a new product called Google Consumer Surveys that can be used to survey a broad audience of consumers. We don't have ways to go after specific audiences such as business managers or workers looking for jobs. But I wouldn't rule that out in the future.
Curtis: MIT recently introduced a big-data measure of inflation called the Billion Prices Project. Can you see a big future in big data as a measure of inflation?
Varian: Yes, I think so. I know there are also projects looking at supermarket scanner data and the like. One difficulty with online data is that it leaves out gasoline, electricity, housing, large consumer durables, and other categories of consumption. On the other hand, it is quite good for discretionary consumer spending. So I think that online price surveys will enable inexpensive ways to gather certain sorts of price data, but it certainly won't replace existing methods.
By Mark Curtis, a visiting scholar in the Atlanta Fed's research department