macroblog

About


The Atlanta Fed's macroblog provides commentary on economic topics including monetary policy, macroeconomic developments, financial issues and Southeast regional trends.

Authors for macroblog are Dave Altig and other Atlanta Fed economists.



July 30, 2010


Some observations regarding interest on reserves

One of the livelier discussions following Federal Reserve Chairman Ben Bernanke's testimony to Congress on monetary policy has revolved around the issue of the payment of interest on bank reserves. Here, for what it's worth, are a few reactions to questions raised by that discussion:

Is interest paid on reserves (IOR) a free lunch?

Ken Houghton has the following objection:

"… in September of 2008, the Fed decides to pay interest on reserves—including Excess Reserves. The banks can now make 25 times what they pay in interest, risk-free, just by holding onto money. The Fed is, essentially, leaving $100 bills on the sidewalk."

I'm not sure exactly where the "25 times" comes from, but it seems to me that the most obvious transaction would be to borrow in the overnight interbank lending market—the federal funds market—and then "lend" those funds to the Fed by placing them in the Fed's deposit facility. The differential between the return on those options is a good deal lower than a multiple of 25.
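For concreteness, here is a minimal back-of-the-envelope sketch of that borrow-and-deposit trade. The federal funds rate used is an illustrative assumption, chosen only to match the roughly 8 basis point spread discussed below; it is not an official figure.

```python
# Rough sketch of the arbitrage described above: borrow overnight in the
# fed funds market and park the proceeds in the Fed's deposit facility.
# Rates are illustrative, not official figures.

ior_rate = 0.0025          # 25 basis points paid on reserve balances
fed_funds_rate = 0.0017    # illustrative effective fed funds rate

notional = 100_000_000     # borrow $100 million overnight

# The annualized return on the round trip is just the rate differential.
spread = ior_rate - fed_funds_rate
annual_profit = notional * spread

print(f"spread: {spread * 1e4:.0f} basis points")             # ~8 bps
print(f"annualized profit on $100M: ${annual_profit:,.0f}")   # ~$80,000

# Nowhere near the "25 times" multiple in the quoted objection:
print(f"IOR as a multiple of the funds rate: {ior_rate / fed_funds_rate:.1f}x")
```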

[Chart 073010a: effective federal funds rate and the interest rate paid on excess reserves]

In fact, as many have noted before, the puzzle is why the gap between the funds rate and the deposit rate exists at all. As explained on the New York Fed's FAQ sheet:

"With the payment of interest on excess balances, market participants will have little incentive for arranging federal funds transactions at rates below the rate paid on excess. By helping set a floor on market rates in this way, payment of interest on excess balances will enhance the Desk's ability to keep the federal funds rate around the target for the federal funds rate."

It didn't quite work out that way, so clearly there is a limit to arbitrage. But if you really think that an 8 basis point spread between the effective funds rate and the deposit rate is a problem, the best approach would be, in my opinion, to address the institutional arrangements that are limiting arbitrage in the funds market. (Some of those features are discussed here and here.)

What is the opportunity cost of not lending?

That said, the real issue with the IOR policy certainly concerns the presumed incentive for banks to sit on excess reserves rather than putting those reserves to use by creating loans. This, from Bruce Bartlett, is fairly representative of the view that IOR is, at least in part, to blame for the slow pace of credit expansion in the United States:

"… As I pointed out in my column last week, banks have more than $1 trillion of excess reserves—money that the Fed has created that banks could lend immediately but are just sitting on. It's the economic equivalent of stuffing cash under one's mattress.

"Economists are divided on why banks are not lending, but increasingly are focusing on a Fed policy of paying interest on reserves—a policy that began, interestingly enough, on October 9, 2008, at almost exactly the moment when the financial crisis became acute."

OK, but the spread that really matters in the bank lending decision is surely the difference between the return on depositing excess reserves with the Fed versus the return on making loans. In fairness, it does appear that this spread dropped when the IOR was raised from its implicit prior setting of zero…

[Chart 073010b: spread between market lending rates and the interest rate paid on excess reserves]


… but it's also pretty clear that this development reflects a general fall in market yields post-October 2008 as much as it does the increase in the IOR rate:


[Chart 073010c: market yields since 2008 alongside the interest rate paid on excess reserves]

And here's another thought: As of now, the IOR policy applies to all reserves, required or excess. Consider the textbook example of a bank that creates a loan. In the simple example, a bank creates a loan asset on its books by creating a checking account for a customer, which is the corresponding liability. It needs reserves to absorb this new liability, of course, so the process of creating a loan converts excess reserves into required reserves. But if the Fed pays the same rate on both required and excess reserves, the bank will have lost nothing in terms of what it collects from the Fed for its reserve deposits. In this simple case, the IOR plays no role in determining the opportunity cost of extending credit.
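To make the arithmetic of that textbook example explicit, here is a minimal sketch. The 10 percent reserve requirement and the dollar amounts are assumptions chosen for illustration, not a description of actual regulation or any particular bank.

```python
# Stylized illustration of the loan-creation example above.
# Reserve requirement and dollar amounts are illustrative assumptions.

ior_rate = 0.0025          # same rate paid on required and excess reserves
reserve_requirement = 0.10

reserves = 1_000_000       # the bank's deposit balance at the Fed
deposits_before = 0        # checkable deposits before the loan

def ior_income(total_reserves, rate=ior_rate):
    # Only the total reserve balance matters for interest income,
    # since required and excess reserves earn the same rate.
    return total_reserves * rate

required_before = reserve_requirement * deposits_before
excess_before = reserves - required_before

# The bank makes a $500,000 loan by crediting the borrower's checking account.
loan = 500_000
deposits_after = deposits_before + loan

required_after = reserve_requirement * deposits_after   # some excess becomes required
excess_after = reserves - required_after

print("excess reserves before/after:", excess_before, excess_after)
print("interest income from the Fed before/after:",
      ior_income(required_before + excess_before),
      ior_income(required_after + excess_after))   # identical: total reserves unchanged
```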

Of course, the funds created in making a loan may leave the originating bank. Though reserves don't leave the banking system as a whole, they may certainly flow away from an individual institution. So things may not be as nice and neat as my simple example.  But at worst, that just brings the question back to the original point: Is the 25 basis point return paid by the central bank creating a significant incentive for banks to sit on reserves rather than lend them out to consumers or businesses? At least some observers are skeptical:

"Barclays Capital's Joseph Abate…noted much of the money that constitutes this giant pile of reserves is 'precautionary liquidity.' If banks didn't get interest from the Fed they would shift those funds into short-term, low-risk markets such as the repo, Treasury bill and agency discount note markets, where the funds are readily accessible in case of need. Put another way, Abate doesn't see this money getting tied up in bank loans or the other activities that would help increase credit, in turn boosting overall economic momentum."

Are there good reasons for paying interest on reserves?

Even if we concede that there is some gain from eliminating or cutting the IOR rate, what of the costs? Tim Duy, quoting the Wall Street Journal, makes note (as does Steve Williamson) of the following comment from the Chairman:

"… Lowering the interest rate it pays on excess reserve—now at 0.25%—could create trouble in money markets, he said.

" 'The rationale for not going all the way to zero has been that we want the short-term money markets, like the federal funds market, to continue to function in a reasonable way,' he said.

" 'Because if rates go to zero, there will be no incentive for buying and selling federal funds—overnight money in the banking system—and if that market shuts down … it'll be more difficult to manage short-term interest rates when the Federal Reserve begins to tighten policy at some point in the future.' "

Professor Duy interprets this as aversion to the possibility that "the failure to meet expectations would be the real cost to the Federal Reserve," but I would have taken the words for exactly what they seem to say—that the skills and infrastructure required to maintain a functioning federal funds market might atrophy if cutting the rate to zero brings activity in the market to a trickle. And that observation is relevant because of the following, from the minutes of the April 27–28 meeting of the Federal Open Market Committee:

"Meeting participants agreed broadly on key objectives of a longer-run strategy for asset sales and redemptions. The strategy should be consistent with the achievement of the Committee's objectives of maximum employment and price stability. In addition, the strategy should normalize the size and composition of the balance sheet over time. Reducing the size of the balance sheet would decrease the associated reserve balances to amounts consistent with more normal operations of money markets and monetary policy."

"Normal" may not mean the exact status quo ante, but to the extent that federal funds targeting is a desirable part of the picture, it sure will be helpful if a federal funds market exists.

Even if you don't buy that argument—and the point is debatable—it is useful to recall that the IOR policy has long been promoted on efficiency grounds. There is this argument, for example, from a New York Fed article published just as the IOR policy was introduced:

"… reserve balances are used to make interbank payments; thus, they serve as the final form of settlement for a vast array of transactions. The quantity of reserves needed for payment purposes typically far exceeds the quantity consistent with the central bank's desired interest rate. As a result, central banks must perform a balancing act, drastically increasing the supply of reserves during the day for payment purposes through the provision of daylight reserves (also called daylight credit) and then shrinking the supply back at the end of the day to be consistent with the desired market interest rate.

"… it is important to understand the tension between the daylight and overnight need for reserves and the potential problems that may arise. One concern is that central banks typically provide daylight reserves by lending directly to banks, which may expose the central bank to substantial credit risk. Such lending may also generate moral hazard problems and exacerbate the too-big-to-fail problem, whereby regulators would be reluctant to close a financially troubled bank."

Put more simply, one broad justification for an IOR policy is precisely that it induces banks to hold quantities of excess reserves that are large enough to mitigate the need for central banks to extend the credit necessary to keep the payments system running efficiently. And, of course, mitigating those needs also means mitigating the attendant risks.

That is not to say that these risks or efficiency costs unambiguously dominate other considerations—for a much deeper discussion I refer you to a recent piece by Tom Sargent. But they should not be lost in the conversation.

By Dave Altig, senior vice president and research director at the Atlanta Fed

July 30, 2010 in Banking, Federal Reserve and Monetary Policy, Monetary Policy | Permalink


Comments

Is there some institutional constraint that forces IOR to be a multiple of 25 bps? Are holding steady and going to zero the only options?

Posted by: Andy Harless | July 31, 2010 at 01:41 PM

You refer to “…the presumed incentive for banks to sit on excess reserves rather than putting those reserves into use by creating loans.” That statement is incompatible with the widely accepted view that banks just don’t lend reserves. That is, where a bank sees a profitable lending opportunity, it just creates money out of thin air, e.g. it credits the borrower’s account. And in the current “excess reserve” scenario, the bank presumably has more than enough reserves, thus the latter are irrelevant.

In more normal times, that is where the banking system does not have excess reserves, reserves are still irrelevant. That is, a bank which sees a profitable lending opportunity goes ahead (as above) and credits its customer’s account. If that leaves the banking system short of reserves, the FED is then forced to supply extra reserves to the system, else interest rates are forced up.

Posted by: Ralph Musgrave | August 01, 2010 at 01:58 PM

IORs were originated by the same people who think commercial banks are financial intermediaries (intermediaries between savers and borrowers). Never are the CBs intermediaries in the lending process.

The money supply historically has never been, and can never be, managed by any attempt to control the cost of credit.

IORs are a credit control device. They are the functional equivalent of required reserves. I.e., the BOG determines when the member banks can lend and invest.

This discussion is complete nonsense. If the IORs don't serve a purpose, then save the taxpayers some money.

The evidence is extremely clearcut. Burns, Miller, Volcker, Greenspan, & Bernanke have all screwed up using interest rate targets.

(1) Paul Volcker won acclaim for taming the inflation that he alone created.

(2) Bernanke didn't "ease" monetary policy when Bear Stearns's two hedge funds collapsed. He initiated "credit easing" while continuing with his 25 consecutive months of policy "tightening" that began in Feb 2006. Instead, Bernanke waited until Lehman Brothers failed. Bernanke drove this country into a deep depression by himself.

(3) Greenspan never "tightened" monetary policy towards the end of his term. Despite raising the FFR 17 times, Greenspan maintained his "loose" money policy, i.e., for the last 41 consecutive months of his term.

By using the wrong criteria (interest rates, rather than member bank reserves) in formulating and executing monetary policy, the Federal Reserve became an engine of inflation and a doomsday machine.

Posted by: flow5 | August 01, 2010 at 04:07 PM

I'm curious how a comparison of US spreads and Swedish spreads would look, given that the Swedish central bank has introduced a penalty on reserves. If I follow the arguments you're presenting, the Swedish policy should not have had much effect on spreads, but would be driving liquidity into non-reserve forms, potentially undermining money market institutions. Can any of these expected outcomes actually be observed?

Posted by: Rich C | August 02, 2010 at 03:36 PM

The reason the fed funds rate can be below the IROR is that insurance companies and GSEs have access to the fed funds market but are not holders of deposits at the Fed. Hence they are willing lenders in the fed funds market at a rate below IROR, whereas the banks are the borrowers who then deposit these funds as reserves on which the Fed pays interest.

Posted by: emsoly | August 04, 2010 at 07:39 AM

Why wouldn't the FED charge negative interest on these reserves to spur lending? Thanks.

Posted by: Jonathan Herbert | August 06, 2010 at 02:56 AM

Even Dave Altig is too harsh on IOR here:
"OK, but the spread that really matters in the bank lending decision is surely the difference between the return on depositing excess reserves with the Fed versus the return on making loans."
This ignores the fact that before the IOR policy reserves were artificially scarce, and this scarcity yield was approximately equal to fed funds rate, so net spreads were actually narrower before October 2008.

More here:
http://themoneydemand.blogspot.com/2010/08/should-fed-stop-paying-interest-on.html

Posted by: The Money Demand Blog | August 06, 2010 at 05:35 AM


July 28, 2010


The money-inflation connection: It's baaaack!

Our St. Louis Fed colleague David Andolfatto declares it is time to bury the old saw that says when it comes to inflation, follow the money:

"One of the ideas that stuck in my head as an undergrad was the proposition that 'inflation is always an[d] everywhere a monetary phenomenon.' The idea is usually formalized by way of the Quantity Theory of Money (QTM)—or more precisely—the Quantity of Money Theory of the Price-Level. (QTM is not a theory of money, it is a theory of the price-level).

"In its simplest version, the QTM asserts that the equilibrium price-level is roughly proportional to the outstanding supply of money (however defined). As inflation is the rate of change in the price-level, the phenomenon of inflation is attributed primarily to excessive growth in the money supply (typically viewed as being controlled by the monetary or fiscal authority)."

Andolfatto goes on to note that the monetary base—the sum of currency in circulation and the banks' reserve balances held at the Federal Reserve (at that page, search "reserves")—more than doubled since fall 2008, while the rate of inflation fell.

That's certainly true, though most versions of the quantity theory applied to monetary policy discussions lean on broader measures of money—for no better reason than those measures help the theory fit the facts. Specifically, since the 1980s the phrase "inflation is everywhere and always a monetary phenomenon" has in effect meant "inflation is everywhere and always a monetary phenomenon when we measure money by M2."
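As a reference point, the identity behind any of these versions of the quantity theory is the standard textbook quantity equation and its growth-rate form (generic notation, not anything specific to this post):

```latex
M V = P Y
\qquad \Longrightarrow \qquad
\pi \;\approx\; g_M + g_V - g_Y
```

Here M is the money stock, V its velocity, P the price level, Y real output, π the inflation rate, and the g's are growth rates; the simplest version of the QTM treats velocity and output as independent of the money supply, so the price level moves roughly in proportion to money.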

And here's an interesting thing. If you look at the relationship between M2 growth and core inflation over the past decade and a half, it appears that the money-inflation nexus has been gaining in strength:

[Chart 072810a: M2 growth and core CPI inflation over the past decade and a half]

Another way to see this relationship is to look at the correlation between M2 growth and core inflation over rolling 10-year windows:

[Chart 072810b: correlation between M2 growth and core inflation over rolling 10-year windows]
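A minimal sketch of how a rolling correlation of this kind could be computed, assuming monthly pandas Series for M2 (e.g., FRED series M2SL) and core CPI (e.g., FRED series CPILFESL) are already loaded; this is illustrative, not the code used to produce the chart.

```python
import pandas as pd

# Assumes two monthly pandas Series indexed by date are already in hand:
#   m2  -- M2 money stock (e.g., FRED series M2SL)
#   cpi -- core CPI index (e.g., FRED series CPILFESL)

def rolling_money_inflation_corr(m2: pd.Series, cpi: pd.Series,
                                 window_months: int = 120) -> pd.Series:
    """Correlation of year-over-year M2 growth with year-over-year core
    inflation over rolling windows (120 months = 10-year windows)."""
    m2_growth = m2.pct_change(12)    # year-over-year growth rate
    inflation = cpi.pct_change(12)   # year-over-year core inflation
    aligned = pd.concat([m2_growth, inflation], axis=1).dropna()
    return aligned.iloc[:, 0].rolling(window_months).corr(aligned.iloc[:, 1])
```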

Could it be that the death of the quantity theory has been greatly exaggerated?

There are plenty of reasons to be cautious. For one thing, it is oft-noted that any connection between money and inflation could be purely coincidental. In fact, if you stare hard at the picture it does appear that changes in inflation often precede changes in money growth. One interpretation is that the same factors that push trend inflation around also result in responses by policymakers or private market participants that ultimately cause the money supply to move in a sympathetic direction.

But even if causation does run from money to prices, the case is not quite solved. The monetary base measure that the Andolfatto post emphasizes has a lot to recommend it, not least being that it is the measure of money that central banks actually control. The stark disconnect between the growth in currency and bank reserves (the quantity of which is determined by the Fed) and M2 growth (the quantity of which is determined by the decisions of banks to expand their balance sheets) raises legitimate questions about how policymakers would exploit an M2-inflation connection in an environment in which the monetary base–M2 connection—the so-called "money multiplier"—has changed so dramatically.
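To spell out the money-multiplier arithmetic: the multiplier is just the ratio of M2 to the monetary base, so a base that more than doubles alongside modest M2 growth implies a multiplier that has roughly halved. The figures below are rough, illustrative orders of magnitude, not official data.

```python
# Illustrative orders of magnitude only -- not official data.
# Roughly pre-crisis (2008) versus post-expansion (2010) levels, in dollars.

base_before, m2_before = 0.85e12, 7.7e12   # assumed monetary base and M2
base_after,  m2_after  = 2.0e12,  8.6e12   # base more than doubled; M2 grew modestly

multiplier_before = m2_before / base_before   # roughly 9
multiplier_after  = m2_after / base_after     # roughly 4

print(round(multiplier_before, 1), round(multiplier_after, 1))
```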

There could be lots of answers to that question. The relatively new Federal Reserve policy of paying interest on bank reserves is one possibility. Andolfatto's suggestion that all changes in money are not created equal might contain the germ of another explanation. For our part, we think the question is quite a bit more than academic.

By Dave Altig, senior vice president and research director at the Atlanta Fed, and Brent Meyer, economic analyst at the Cleveland Fed

July 28, 2010 in Federal Reserve and Monetary Policy, Inflation, Monetary Policy | Permalink


Comments

I thought that the old "monetary aggregates" version of the quantity theory -- where something like M2 is taken as an exogenous determinant of inflation -- was pretty much discredited by the policy experiment in the early 1980's. The Fed tried to control M1 (which should be easier to control than M2) and found that (1) it couldn't and (2) the correlation became much weaker. It's not a very convincing response just to say they were trying to control the wrong aggregate.

Posted by: Andy Harless | July 28, 2010 at 05:36 PM

David:

How does the 12-month rolling change in consumer credit and mortgages affect the relationship between M2 and core CPI? Do they weaken or strengthen the correlation historically?

Posted by: Bryan Byrne | July 28, 2010 at 06:18 PM

"There could be lots of answers to that question."

Looking forward to your follow-up on this, though didn't you kind of get started immediately prior:

...."growth in currency and bank reserves (the quantity of which is determined by the Fed) and M2 growth (the quantity of which is determined by the decisions of banks to expand their balance sheets)" ?

Throw in a bit of variable V and nowhere near full employment Y, and I can't wait to read it!

Posted by: apj | July 29, 2010 at 09:19 AM

There is a problem with the proposed interpretation of the money-inflation link illustrated in the upper figure. M2 lags behind CPI by quarters. Effectively, CPI goes its own way and M2 is adjusted to inflation. This is purely an artificial consequence of monetary policy.
The link implies that M2 drives CPI, which contradicts observations.

So, if the authorities decide not to follow inflation, the link will disappear. But inflation goes its own way, very likely an M2-independent one.

Posted by: kio | July 30, 2010 at 05:23 AM

I actually had a former Fed governor tell me that the reason former monetarist Greenspan was ok with interest rate targeting was because they basically lost track of velocity due to rapid changes in banking, like ATMs and so forth. Now perhaps things are stable...this should give impetus to returning to a rule based policy rather than the failed practice of interest rate timing, such as advocated by Krugman (http://blog.mises.org/10153/krugman-did-cause-the-housing-bubble), which ex post has proven to have been disastrous.

Posted by: pete | July 30, 2010 at 10:27 AM

Nothing has changed. Monetarism has never been tried.

Monetary policy objectives should not be in terms of any particular rate or range of growth of any monetary aggregate. Rather, policy should be formulated in terms of desired rates-of-change (roc’s) in monetary flows (MVt) relative to roc’s in real GDP.

Where, money is the measure of liquidity, the yardstick by which the liquidity of all other assets is measured. And the transactions velocity is money actually exchanging hands (the G.6 metric).

Nominal GDP is the product of monetary flows (M*Vt) (or aggregate monetary demand), i.e., our means-of-payment money (M), times its transactions rate of turnover (Vt).

To: anderson@stls.frb.org
Subject: As the economy will shortly change, I wanted to show this to you again - forecast:
Date: Wed, 24 Mar 2010 17:22:50 -0500

Dr. Anderson:

It's my discovery. Contrary to economic theory and Nobel Laureate Milton Friedman, monetary lags are not "long & variable". The lags for monetary flows (MVt), i.e., the proxies for (1) real-growth, and for (2) inflation indices, are historically (for the last 97 years), always, fixed in length. However the lag for nominal gdp varies widely.

Assuming no quick countervailing stimulus:

2010
jan..... 0.54.... 0.25 top
feb..... 0.50.... 0.10
mar.... 0.54.... 0.08
apr..... 0.46.... 0.09 top
may.... 0.41.... 0.01 stocks fall

Should see shortly. Stock market makes a double top in Jan & Apr. Then real-output falls from (9) to (1) from Apr to May. Recent history indicates that this will be a marked, short, one month drop, in rate-of-change for real-output (-8). So stocks follow the economy down.

Posted by: flow5 | August 02, 2010 at 10:49 PM

That's the beauty of monetary flows (lags), i.e., our means-of-payment money X its transactions rate of turnover. The economy turns when the numbers turn, not before or after (i.e., it's too soon & stocks should now be rising per MVt).

The proxies for real-output or inflation indices historically (for the last 97 years), oscillated at matching lengths. Money flows signaled the April month-end decline.

The upcoming drop in the proxy for inflation(-48): Sept month-end until Jan month-end.

The upcoming drop in the proxy for real-output(-10): Aug month-end until Feb month-end.

The FED capitulated in May. That is, the FED offset half of the decline in (MVt) after it started to decline. The FED could easily offset all of the drop in real-output this time around.

However there is no way that it can stop the catastrophic drop in inflation -- even with QE2.

Posted by: flow5 | August 02, 2010 at 10:54 PM

Your graph seems to indicate the causality is the opposite: that CPI is leading M2 rather than vice versa, as would be predicted by the theory. This is not a new observation; economists have been writing since the 1950s that the evidence tends to support the economy driving money growth.

Posted by: ts | August 05, 2010 at 01:04 PM

M3 is all that matters, period. Without lending...

Posted by: warren | August 07, 2010 at 11:39 AM


July 21, 2010


Gauging the inflation expectations of business

Last Friday, the U.S. Bureau of Labor Statistics (BLS) reported that the consumer price index (CPI) declined in June for the third consecutive month. And although core inflation edged up a bit, the entire increase can be accounted for by the BLS's seasonal adjustment factor. In an environment of "business-not-as-usual" like today, data driven by seasonal adjustment are certainly suspect. So overall, the June CPI news seems largely in line with the downward inflation trend we've been seeing for a while.

Does recent disinflation imply deflation? Well, that wouldn't be the consensus coming out of the June 22–23, 2010, FOMC meeting minutes:

"A broad set of indicators suggested that underlying inflation remained subdued and was, on net, trending lower,… However, inflation expectations were seen by most participants as well anchored, which would tend to curb any tendency for actual inflation to decline."

A similar sentiment was expressed recently by European Central Bank (ECB) President Jean-Claude Trichet in describing the ECB's view on inflation expectations:

"Inflation expectations remain firmly anchored in line with our aim of keeping inflation rates below, but close to, 2% over the medium term."

Of course, how firmly something is anchored has meaning only relative to the forces working to move that anchor. Being well anchored against a five-knot drift isn't exactly the same as being well anchored against a 10-knot current. But assuming the idea here is that expectations are likely to hold against the usual range of events one might expect in an environment like ours, we can ask the question: How does one judge whether expectations are well anchored?

Presuming this analogy, one way we might gauge how anchored inflation expectations are is to monitor the behavior of inflation expectations relative to recent shocks. By this standard, expectations seem rock-solid. Virtually every measure of inflation expectations has held steady against the tug of widely fluctuating commodity prices, persistent retail disinflation, expansion of the central bank's balance sheet, large current and projected fiscal imbalances, and the general economic and financial volatility of the past few years.

But economists know very little about how expectations are formed and, therefore, we don't know what sorts of events are likely to pose the greatest threats to the expectations' anchor. In other words, we may not know when inflation expectations are likely to move until, well, they actually move.

In an attempt to get a more direct read of inflationary sentiment and to shed more light on how inflation expectations are formed, the Federal Reserve Bank of Atlanta is looking into polling businesses about their inflation expectations. With help from the folks at Kennesaw State University (a very big hat-tip to Don Sabbarese and Dimitri Dodonova, who compile the Georgia and Southeast Purchasing Managers' Indexes), we asked a group of purchasing managers a handful of questions related to the inflation outlook. The poll was conducted during the week of July 7–July 13, and 32 respondents answered the call. Here's what we learned.

Over the next 12 months, this sample of purchasing managers expects unit costs to increase 1.7 percent, just a shade higher than the consensus CPI forecast of economists. The distribution of the poll responses is represented by the red bars in the chart below. About half of the respondents saw unit costs rising "somewhat," defined as the range of 2 percent to 4 percent, while about one-third of the respondents indicated they expect virtually no change in unit costs over the period.

But what probability do respondents attach to their expectations? It turns out that some respondents have great confidence in their expectation for unit cost changes—they assigned little chance that unit costs would do anything other than what they forecast. But most purchasing managers attached a significant likelihood to a large range of possible outcomes. We show the distribution of the average respondent's expectation for unit costs by the blue bars in the chart. So, keeping in mind that the mean expectation of the group was for unit costs to rise 1.7 percent, respondents on average assigned a 17 percent chance that unit costs could decline over the coming year, while they put a roughly equal likelihood (20 percent) on unit costs rising 5 percent or more.
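As an illustration of how assigned probabilities translate into the summary numbers above, here is a small sketch that computes a probability-weighted mean from one hypothetical respondent's distribution over unit-cost outcomes. The bucket midpoints and probabilities are made up for illustration; they are not actual poll responses.

```python
# Hypothetical respondent's probabilities over unit-cost outcomes --
# NOT actual poll data, just an illustration of the arithmetic.

buckets = {          # bucket midpoint (percent change): assigned probability
    -1.0: 0.15,      # unit costs decline
     0.0: 0.20,      # little or no change
     1.5: 0.25,      # modest increase
     3.0: 0.25,      # rise "somewhat" (the 2-4 percent range)
     5.5: 0.15,      # rise 5 percent or more
}

assert abs(sum(buckets.values()) - 1.0) < 1e-9   # probabilities sum to one

expected_unit_cost_change = sum(mid * p for mid, p in buckets.items())
prob_decline = sum(p for mid, p in buckets.items() if mid < 0)
prob_5_or_more = sum(p for mid, p in buckets.items() if mid >= 5)

print(f"mean expectation: {expected_unit_cost_change:.2f} percent")
print(f"probability of a decline: {prob_decline:.0%}, of a 5%+ rise: {prob_5_or_more:.0%}")
```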

[Chart 072110: distribution of purchasing managers' expectations for unit cost changes over the next 12 months]

What does all this mean for the inflation outlook? Well, first, let us caution that a sample of this size doesn't lend itself to any strong conclusions, and these data will have to be carefully evaluated in light of other poll questions and against other benchmarks. Those important caveats aside, we can say that while the average purchasing manager in our poll is expecting price pressures that pretty closely correspond to the Federal Reserve's long-term inflation projection, this group attaches significant upside and downside risks to the inflation outlook.

Have any thoughts about how we proceed from here? We'd love to hear your ideas. The next poll will be sent to potential respondents in about three weeks.

By Mike Bryan, vice president and senior economist, and Laurel Graefe, senior economic research analyst, both at the Atlanta Fed

 

July 21, 2010 in Business Inflation Expectations, Inflation | Permalink


Comments

I worry that an anchor is the wrong metaphor. At least according to my nautically ignorant layman's impression of how an anchor works, it either holds or it doesn't, it tends to be dislodged with a jerk, and once you're no longer anchored, you're entirely adrift. My impression of the 1960's and 1970's is that we pulled up anchor quite slowly (not as I imagine would happen with an actual anchor), as an accumulation of experience gradually instilled doubts. I fear that the same thing will happen in reverse this time around. As excess supply persists, people will be repeatedly surprised and gradually revise their expectations.

Having said that, though, I don't find respondents' uncertainty to be particularly worrisome. What we really need to know is not how certain they are of the immediate future but how their subsequent forecasts would be revised in response to surprises. Do they view the possibilities for the next 12 months as a potentially large but still temporary disturbance, or are they uncertain about their longer-range outlook and likely to revise it based on a new observation? One way to get at this issue would be to ask directly about longer-range expectations. If they're highly uncertain about, e.g., what will happen at a 5-year horizon, that would suggest that their expectations at a 1-year horizon are subject to revision.

Posted by: Andy Harless | July 21, 2010 at 07:26 PM

Unfortunately for everybody, including economists themselves, inflation has a different driving force. Translated into the terms of mainstream economics, inflation expectations are driven by the overall change in the level of the labor force, LF. For the USA, price inflation π(t) is as follows:

π(t) = 4.0dLF(t-2)/LF(t-2) - 0.03

The labor force change leads inflation by 2 years. One can predict from current data at a two-year horizon. In 2005, we used the 2004 CBO (and some other) labor force projections and calculated inflation ten years ahead. The years between 2006 and 2009 are accurately predicted. See figure - http://mechonomic.blogspot.com/2010/03/sure-disinflation-continues.html

Details for the USA and other developed countries, where the same relationship is working precisely, are published in:

Dynamics of Unemployment and Inflation in Western Europe: Solution by the 1-D Boundary Elements Method, Journal of Applied Economic Sciences, 2010, v. V, issue 2(12), 94-113 (http://www.jaes.reprograph.ro/)

So, an extended deflationary period is approaching the US. One may interpret that prediction as saying that inflation expectations are below zero in the long run.

Posted by: kio | July 22, 2010 at 02:41 AM

It would be helpful if the respondents took apart their reasoning. For example, those assigning probability to higher inflation may have focused on energy prices or on a dollar crisis of some sort. The former is a traditional issue that we can quantify while the latter is a fear that exists currently despite market quantification that says this isn't going to happen. That kind of question gives greater depth.

Posted by: jomiku | July 22, 2010 at 11:52 AM

I saw that the latest earnings announcements had some great profit growth, but revenue wasn't looking as good. These cost cutting strategies won't work in the long term, but I don't see purchases ramping up too soon.

Posted by: Ben H. | July 22, 2010 at 12:59 PM

Inflation expectations are very tricky.

Looking at commodity prices is one way.

Government fiscal policy might be a good guide. But it's conflicting too. Next year, massive tax increases are disinflationary. The increased spending is inflationary! Quite a quandary.

Have to focus on economic activity. But I would not look at YOY numbers. I'd throw out 2008-09, and compare a mean average of 2005-2007. See what you come up with.

Posted by: Jeff | July 23, 2010 at 12:22 PM

Wouldn't Economists' time be better spent working to establish an inflation measurement that correlates to an actual population sampling? This is 2010 folks, absurd inflation arguments died with the Economists' reasonings for denying the Housing Bubble.

Posted by: bailey | July 23, 2010 at 02:16 PM


July 16, 2010


A curious unemployment picture gets more curious

UPDATE: One of our eagle-eyed macroblog readers thought something was fishy-looking in the second chart of yesterday's (July 15) post. He was right—the chart was in error. This post is an updated, edited version with the erroneous chart replaced. There have also been some text revisions to better reflect the revised chart. The new text is bolded in this post.

At first blush, the second quarter statistics from the Job Openings and Labor Turnover Survey (commonly referred to as JOLTS and released Tuesday by the U.S. Bureau of Labor Statistics) suggest little has changed recently in U.S. labor markets:

"There were 3.2 million job openings on the last business day of May 2010, the U.S. Bureau of Labor Statistics reported today. The job openings rate was little changed over the month at 2.4 percent. The hires rate (3.4 percent) was little changed and the separations rate (3.1 percent) was unchanged."

Despite a slight step backward in May, the overall trend in job openings has been positive—Calculated Risk has the picture—but in a sense this fact has just deepened the puzzle of why the unemployment rate is so darn high. As we wrote in the first quarter issue of the Atlanta Fed's EconSouth:

"The disconnect between the supply of and demand for workers that is reflected in statistics such as the unemployment rate, the hiring rate, and the layoff rate can be dynamically expressed by the Beveridge curve. Named after British economist William Beveridge, the curve is a graphical representation of the relationship between unemployment (from the BLS's household survey) and job vacancies, reflected here through the JOLTS."

Since the second quarter of last year, the unemployment rate has far exceeded the level that would be predicted by the average correlation between unemployment and job vacancies over the past decade. Tuesday's report indicates that the anomaly only deepened in the first two months of the second quarter.

[Chart 071510a: Beveridge curve—number of unemployed persons versus job openings, with the relationship fitted to 2000–08 data]

The dashed line in the chart above, which is estimated from the data from 2000–08, represents the predicted relationship between the number of unemployed persons in the United States and the number of job openings. That simple relationship would suggest that, given the average number of job openings in April and May, the unemployed would be expected to number about 10.4 million—not the nearly 15 million we actually saw.
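As a sketch of the kind of calculation behind that prediction (not the actual estimation used for the chart), one could fit a line to the pre-crisis observations and evaluate it at the current level of openings. The arrays below are illustrative stand-ins, not the actual JOLTS and household survey figures.

```python
import numpy as np

# Illustrative stand-in data (thousands of persons). The real exercise uses
# the 2000-2008 monthly JOLTS job-openings series and the BLS count of
# unemployed persons from the household survey.
job_openings = np.array([4600, 4200, 3800, 3400, 3100, 2900, 2700, 2500])
unemployed   = np.array([6500, 7200, 8100, 8900, 9800, 10600, 11500, 12400])

# Fit the simple linear Beveridge relationship on the "pre-crisis" sample.
slope, intercept = np.polyfit(job_openings, unemployed, deg=1)

# Evaluate at roughly the April-May 2010 average level of openings
# (about 3,100 thousand, per the JOLTS release quoted above).
openings_now = 3100
predicted_unemployed = slope * openings_now + intercept
print(f"predicted unemployed: {predicted_unemployed:,.0f} thousand")

# The post's point: with the actual data, the prediction comes out around
# 10.4 million, far short of the nearly 15 million actually observed.
```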

Some analysts have suggested the unemployment benefits policies of the last couple of years may be responsible for abnormally high unemployment rates. Estimates generated by several researchers in the Federal Reserve—here and here, for example—suggest that extended unemployment benefits may have increased the unemployment rate by somewhere between 0.4 and 1.7 percentage points. But even if we accept those numbers and adjust the Beveridge curve by assuming that the number of unemployed would be correspondingly lower without the benefits policy, it's not clear that the puzzle is resolved:

[Chart 071510b_rev: Beveridge curve adjusted for the estimated effect of extended unemployment benefits]
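For readers wondering how a percentage-point effect on the unemployment rate translates into the count of unemployed used in the adjusted chart, here is the back-of-the-envelope version; the labor force figure is an approximate mid-2010 value and is an assumption for illustration.

```python
# Back-of-the-envelope translation of the benefits-bias estimates into a
# count of unemployed. Labor force size is an approximate mid-2010 figure,
# used only for illustration.

labor_force = 154_000_000
unemployed_actual = 14_600_000        # roughly 15 million, per the post

for bias_pp in (0.4, 1.7):            # range of estimated effects, percentage points
    removed = labor_force * bias_pp / 100
    adjusted = unemployed_actual - removed
    print(f"bias of {bias_pp} pp -> adjusted unemployed of about {adjusted / 1e6:.1f} million")
```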

If you tend to believe the higher end of the benefits-bias estimates, no puzzle emerges until the second quarter of 2010. And, of course, some estimates apparently deliver an even larger impact of the extended benefits policy. Let's call the question unsettled at this point.

The most tempting explanation (to me, anyway) for the seeming shift in the Beveridge curve relationship is a mismatch between the skills required in the jobs that are available and the skills possessed by the pool of workers available to take those jobs. The problem with this tempting explanation is that it is not clear that the usual sorts of structural shifts we might point to—for example, only nursing jobs being available to laid-off construction workers—actually account for the data (an issue we explored in a previous macroblog post).

But these sorts of subplots may miss the truly big part of the story. I have noticed a recent spate of articles repeating a theme we hear anecdotally from many sources, in many industries. For example, this from a June USA Today article:

"…the [auto] industry is poised to add up to 15,000 this year and could need up to 100,000 new workers a year from 2011 through 2013.

"…Automakers need workers with more and different skills than in the past on the factory floor.… Among priorities: computer skills and the ability to work with less supervision than their predecessors. That likely means education beyond high school."

… or more recently, this one from the New York Times:

"Factory owners have been adding jobs slowly but steadily since the beginning of the year, giving a lift to the fragile economic recovery…

"Yet some of these employers complain that they cannot fill their openings.

"Plenty of people are applying for the jobs. The problem, the companies say, is a mismatch between the kind of skilled workers needed and the ranks of the unemployed."

Now I realize that a few anecdotes don't make facts, but I have been in more than a few conversations with businesspeople who have claimed that the productivity gains realized in the United States throughout the recession and early recovery reflect upgrades in business processes—bundled with a necessary upgrade in the skill set of the workers who will implement those processes. This dynamic suggests that the shift in required skills has been concentrated within individual industries and businesses, not across sectors or geographic areas that would be captured by our most straightforward measures of structural change.

The data necessary to test this proposition are not easy to come by. That challenge is unfortunate, because the return on figuring out what is beneath those Beveridge curve graphs is very high.

By Dave Altig, senior vice president and research director at the Atlanta Fed

July 16, 2010 in Data Releases, Employment, Labor Markets, Productivity | Permalink


Comments

Could you describe a bit how you created the second figure? Here's how I would have thought the chart would be constructed. If the unemployment rate is currently about 10%, and is overstated by 2.5 percentage points, then the number of unemployed workers should actually be lower by 1/4. It looks like there are currently 15000 unemployed, so the number for 2010q2 should fall to 11,250, in which case it would be on the Beveridge curve. Is it possible that you cut the number of unemployed by 2.5 percent, instead of the 2.5 percentage points, or am I just misunderstanding?

Posted by: Ian Dew-Becker | July 16, 2010 at 08:52 AM

1. Manufacturing employment has been falling since at least 2000 (http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=CES3000000001&data_tool=XGtable ). Compared to that loss of 6 million workers, hiring another 100,000 over 3 years is a tiny number; it seems unlikely that the auto industry really can't find enough qualified people in that pool. Worst case, if none of the skilled workers are unemployed, they'd have to pay a bit more to get people to switch from existing jobs (which could then perhaps be filled by the unemployed).

2. Alternatively, if the mismatch in skills covers a sufficiently large set of workers, and companies are that desperate to hire people, one would presume that they would invest in training. From the companies' point of view this would be equivalent to a higher compensation expense, except that some of the money would end up enhancing worker skills instead of going into their pockets.


So, either companies are irrationally unwilling to pay enough to get the workers they need, or they feel there is not enough demand for their products for them to be able to pay more for workers. I'd take a little from column A, a lot from column B.

Posted by: Itamar Turner-Trauring | July 16, 2010 at 10:56 AM

Maybe a better explanation is that the JOLTS will never go to zero (unless an asteroid hits) :). The number of unemployed is a function of the number of hires, but also a function of the number of terminations. During much of 2009, the economy was shedding a lot of jobs from some sectors but not others. The JOLTS was reflecting the recession-proof areas of the economy ONLY in 2009 and could not go lower, because companies that are shedding jobs cannot have negative hires. Job loss has not returned to its pre-recession level of 300-350K per week and is at a much higher range of 420-460K per week. If one is looking for a shift in the Beveridge curve, then a higher rate of job loss will require a higher rate of new hires to arrive at a given level of employment, a shift in the curve. A shift in the rate of job turnover will shift the Beveridge curve.

It has been noted that the Beveridge curve has been subject to shifts in the past, and the cause is debated. (See Dickens:

http://www.brookings.edu/papers/2009/07_unemployment_dickens.aspx

and references therein.) Dickens has a nice figure showing previous shifts in the curve.

Also note in using the JOLTS data, the correlation is more linear if the JOLTS and unemployment numbers are offset by 1 quarter. (JOLTS is a leading indicator of unemployment). That is, unemployment number more closely tracks the JOLTS of the previous quarter than the current quarter. The linear relationship obviously breaks down at higher levels of unemployment. JOLTS will asymptote at a non-zero level. A better fit occurs if the JOLTS is plotted against the log of the unemployment number. The need for an explanation of the departure from the linear relationship may have nothing to do with a shift in the curve. It is likely that unemployment in this recession is so high that we have dropped below the linear portion of the Beveridge curve and onto the tail.

Posted by: jonny bakho | July 16, 2010 at 10:24 PM

I agree that there has to be a (Kurzweil) acceleration in the changes in technology that are affecting employment.

But, could the really big story be that employers still prefer China?

800K employees, 62Bln in revenues at Foxconn makes one wonder.

Posted by: Joe Rotger | July 17, 2010 at 10:03 AM

This anomaly might just come under the heading of Beveridge curve dynamics. (I wouldn't want to declare a shift in the curve based on just one observation.) Presumably it takes time to fill vacancies. (Even in a weak economy, it takes time, because employers have more applicants to process.) So an exogenous increase in vacancies should only gradually be reflected in the unemployment rate.

The size of the jump is pretty striking, though, and it's consistent with what I've seen in other indicators (e.g., the Monster Employment Index rising 21% over 12 months). Assuming this represents a real Beveridge curve shift, it would seem to be a reversal of the trend of the past 25 years, where the Beveridge curve (as best we can tell from available data) has been shifting inward. It might mean an increase in the NAIRU (which under the circumstances may be good news, given the below-target inflation rate and the constraints on macro policy).

I've been studying the Beveridge curve off and on for the past 20 years, and I still haven't found or heard any really convincing explanations for the past shifts. But they do seem to correlate with shifts in the Phillips curve.

Posted by: Andy Harless | July 17, 2010 at 10:35 AM

Wondering if expectations of the future have anything to do with it?
Next year, we get a massive tax increase. I can't help but think that business is figuring in a slowdown in activity. http://www.bls.gov/news.release/empsit.t18.htm
This chart shows hours worked are up slightly. Instead of new hires, which are expensive, it's cheaper to pay a little overtime.

I think that unemployment will be over 10% by December.

Posted by: Jeff | July 18, 2010 at 10:03 AM

fifteen million unemployed, hmm, must have graduated from the enron school of economics if u beleve that.the only reason employers can't match skills is simply because they expect u to know it all, walk on water and work like you've been there twenty five years. guess what, never happen.

Posted by: gangsta | July 18, 2010 at 10:27 PM

Not mentioned above, but frequently included in discussions of this sort, is the issue of labor mobility.

Given the state of the housing market, many skilled laborers with homes ... and underwater mortgages ... may be unwilling to relocate to where the jobs are.

Also, dual-income families may be either more abundant than formerly, or less inclined to move for one spouse's job than they were in the past. Given the uncertainty in the economy, it may be that unless both breadwinners can relocate and find jobs, families with one stable income may find it more difficult to uproot themselves and hope for a better life elsewhere.

Posted by: Wisdom Seeker | July 19, 2010 at 01:21 PM

Two other comments:

(1) The unemployment rate is clearly pocketed based on age and skill sets. But perhaps, instead of "needing" more skilled workers (what a B.S. waste of language), businesses ought to be thinking about how to make better use of the available workers?

(2) Employers also appear to be unwilling to take chances on older workers, regardless of skill sets. Alternatively, the older unemployed are holding out for the best possible job offers. (Or maybe they are just the most rooted and unable to move?)

Calculated Risk had a nice guest post on this a week or two ago. The older unemployed have the longest durations of unemployment.

Posted by: Wisdom Seeker | July 19, 2010 at 01:22 PM

Yes, that is quite curious. It is hard to know to what extent the various factors are affecting unemployment.

Posted by: Mark B. | July 19, 2010 at 02:10 PM

Skills-employment match is one factor but IMO not the biggest one. It's the housing market and the new lack of labor mobility. The job openings are not (geographically) in the same place as the older jobs and it is very difficult/expensive to move when your house is underwater. I could give multiple anecdotes from friends. Even when you break-even after the house sale, you are left with little equity for a new house in a new location. Employers rarely cover the loss on the house, but do provide property management help to rent. But this only makes it less of a losing proposition (and renters are a HUGE pain in the A$$).

This could easily show up as a statistical bias against older workers, since younger ones without kids are more likely to rent. Renters are very mobile in this environment as there are lots of vacancies. People who are willing to brave an insufferable lifestyle of long commutes, or willing to rent to lawsuit-happy Angie who is allergic to everything, or party-hearty house-trashing Steve and his college buddies, will find jobs. But otherwise, the era of job mobility is dead until the housing market recovers.

Posted by: dwb | July 21, 2010 at 08:41 AM

It's clear from the anecdotes that folks are looking for people who can learn on their own, not by sitting in training courses.
Consider the ag sector: running a farm today takes a four-year degree to really make a go of it; you need to understand finance as well as agronomy and ag economics. Today a person with 6 years of school could not make a go of it in ag as my grandfather did. Today one has GPS-based tractors that deliver fertilizer based on the yield of that part of the field, which is a very advanced set of concepts to understand.

It is not clear that a HS education teaches one how to learn. Note also the ability to work with less supervision as an issue. Perhaps then more term papers and projects in school, and fewer super tests.

Posted by: Lyle | July 24, 2010 at 05:14 PM

I'm wondering something, I'd like to get your comments.
Perhaps I'm paranoid here - but perhaps the reason the model no longer holds is because there are so few companies due to mergers and acquisitions that a decision by a relatively small handful of leaders of huge corporations creates an anomaly?

Like the guy who just bought up 7% of the world's cocoa supply, sending me out to buy my Hersheys forthwith?

Posted by: Aquarian Analytic | July 27, 2010 at 01:34 AM

I'd argue the sticky unemployment number is a result of American intellectual property regulation restraining economic activity.

Eliminate the last 40 years of intellectual property and copyright regulation and economic activity will increase at the level where most people are employed, small businesses.

There isn't a Fed policy that will increase economic activity when they're at ZIRP already.

Posted by: Tigwelded | July 27, 2010 at 10:57 AM

Let's be clear about one thing: there are 5 people seeking employment for every 1 job opening. So even if people were willing to take on lower-paying jobs not utilizing their skills (e.g., when the laid-off astrophysicist asks you "do you want fries with that?"), there are still 4 more people who literally cannot get jobs.

Unemployment benefits have nothing to do with the number of people out of work.

Posted by: Tax Lawyer | July 28, 2010 at 07:03 PM

I checked the data on hires from the JOLTS series. The job openings rate is back up to where it was in October 2008, but the hires rate is up to where it was in June 2008. That juxtaposition doesn't seem consistent with the story that firms are having trouble hiring because the unemployed don't have the skills they need. Rather, the failure of unemployment to respond to the increase in job openings would appear to be due to (1) the persistence of a higher than usual rate of layoffs into the recovery, so that more than the usual rate of hiring is needed to raise employment, and (2) the lack of sufficient time for unemployment to respond (i.e., the dynamics of the Beveridge curve).

(By my reading of this chart, the only anomalous observation is the one for Q2 2010. Both visual inspection and the knowledge that both series are absolutely bounded at zero lead me not to put much credence in the linear fit.)

Posted by: Andy Harless | July 31, 2010 at 12:04 AM

Just as in the last recession, my company is systematically targeting, and hiring, "overqualified" applicants. They don't attrite at a higher rate, but have greater success/impact. I know we are not unique. So, could the knock-on effect of this behavior add to the skills mismatch? There is a stickiness, once hired, that takes them out of the market for a period of time.

Posted by: Andrew | August 03, 2010 at 12:45 AM

I looked at the data after reading this post (and Cowen's comments). It's clear that the relationship is non-linear, with the number of job openings associated with a given level of unemployment beginning to rise after the number of unemployed reaches about 12 million...controlling for levels of things, the increase in openings as a percentage of establishment employment begins to increase when unemployment as a percentage of establishment employment reaches about 10.5%.

The problem is determining what this means. And it's a problem because, in the available data series, we only have one observation of the relationship at this level of unemployment. So while this *could* represent the emergence of some structural unemployment at high levels of unemployment, it does not *have* to represent that.

But it's an issue that deserves a fair amount more attention than it is likely to get.

Posted by: Donald A. Coffin | August 03, 2010 at 11:58 AM

It simply takes time between posting a vacancy and employing a person. First you post more vacancies, then you get an increase in employment.
When a recession is small you can't identify deviations from a straight line from the measurement error.
When the recession is big, it's a bigger deviation.
In fact it has always been a counter-clockwise loop around the Beveridge curve. It's been around for decades, and not only in the US.
Why do people always make so much fuss out of long-known things without reading something on the subject first?

Posted by: chertosha | August 06, 2010 at 11:57 PM

I seem to be ignorant. The beveridge curve indicates that unemployment is decreasing and job openings rising. Isn't that positive?

Posted by: Outsider | August 07, 2010 at 11:51 AM

How many unemployed citizens in States with few job prospects are underwater on their mortgages preventing them from moving to States with more job openings?

Posted by: Ed Herranz | August 09, 2010 at 06:29 PM

Can someone provide a reference for the 1.7 number quoted in the blog post? The two studies he links to give estimates of 0.4 and 0.7, not 1.7. Thanks.

Posted by: James | August 16, 2010 at 11:25 AM

I'm from Metro Detroit. I'm in property management. I do know several people who have been unemployed I would say for about 9 months. 2 with high school education and one with a college degree.

Posted by: excel development | September 22, 2010 at 03:24 AM

One possible explanation for a positive slope in the Beveridge curve is that when employers believe the unemployment rate will stay high for a long time, it changes the psychology of the hiring process. Instead of looking for more general competencies, employers who see the job market as an extreme "buyer's market" begin to look for more specific knowledge. During "normal" times, employers assume that some on-the-job learning is necessary. Because their expectations are raised in the current market, they're more likely to hold out for the "perfect" candidate who's intimately familiar with all of the specific systems the company currently uses.

Posted by: Jonathan | October 19, 2010 at 02:35 AM

It's true, there are a hell of a lot more unemployed than the government can count because of the reasons you've cited; and other reasons as well. I think if we knew the true numbers we would claim a depression and that would send the markets into a tailspin, the government would quickly follow. I believe the numbers are fudged and have been since the 1930s.

Posted by: excel development | October 28, 2010 at 06:52 AM

What is the relationship between the cyclical unemployment rate and the natural rate of unemployment?

Posted by: excel classes | December 04, 2010 at 04:39 AM

If your claim has not expired and you still have money left to use, then you would just reopen the claim. Your last employer is what will be used to determine your eligibility. The weekly benefit amount will stay the same as it was, once you open a claim you are locked into that amount for the life of the claim (no matter if you work a temp job and earn more money).

Posted by: excel classes | August 08, 2011 at 05:00 AM

Fifteen million unemployed? Hmm, you must have graduated from the Enron school of economics if you believe that. The only reason employers can't match skills is that they expect you to know it all, walk on water, and work like you've been there twenty-five years. Guess what: that will never happen.

Posted by: العاب | August 03, 2012 at 12:20 AM


July 09, 2010


How close to deflation are we? Perhaps just a little closer than you thought

Since last October, the consumer price index (CPI) has gone up an annualized 0.7 percent. On an ex-food and energy basis, the number is a little lower, at 0.5 percent. And the Cleveland Fed's trimmed-mean and median CPIs, at 0.7 percent and 0.2 percent, respectively, also put the recent trend in consumer prices in pretty low territory.
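
For readers who want to reproduce the flavor of these trend figures, the annualization is just a compound-growth calculation over the months elapsed. A quick sketch, using hypothetical index levels rather than actual CPI readings:

# Annualized inflation between two index readings k months apart:
# ((P_end / P_start) ** (12 / k) - 1) * 100
# The index levels below are hypothetical placeholders, not actual CPI data.
cpi_oct = 216.18      # hypothetical October level
cpi_latest = 217.00   # hypothetical latest level, 7 months later
months = 7

annualized = ((cpi_latest / cpi_oct) ** (12 / months) - 1) * 100
print(f"annualized inflation: {annualized:.1f}%")   # about 0.7% with these numbers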

And this is before we take into account any potential mismeasurement, or "bias," in the construction of the CPI.

How big is the CPI's bias? Well, in 1996, the Social Security Administration commissioned a study on the accuracy of the CPI as a measure of the cost of living. This so-called "Boskin Commission Report" concluded that the CPI overstated inflation by about 1.1 percentage points per year. The commission identified several sources of potential bias, but about half of the 1.1 percentage points resulted from new products and quality changes that were slow or otherwise imperfectly introduced into the price statistic.

Since that time, the Bureau of Labor Statistics has initiated a number of methodological changes that have reduced the CPI's mismeasurement. In a 2001 paper, Federal Reserve Board economists David Lebow and Jeremy Rudd put the CPI bias at only about 0.6 percentage points. And again, of this amount, the big share of the bias (about 0.4 percentage points) resulted from the imperfect accounting of new and improved goods.

Now, in an article (available to all in its working paper version) appearing in the latest issue of the American Economic Review, Christian Broda and David Weinstein say the earlier estimates of the new goods/quality bias may be a bit understated. The authors examine prices from the AC Nielsen Homescan database and conclude that between 1996 and 2003, new and improved goods biased the CPI, on average, by about 0.8 percentage points per year. If this estimate is accurate, consumer price increases since last October would actually be around zero, or even slightly negative, once we account for the mismeasurement of the CPI caused by new and improved goods.
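
To make that last sentence concrete, the adjustment is simply the measured trend minus the estimated bias. A rough back-of-the-envelope sketch, treating the 1996–2003 average bias as if it applied to the current trend (only an approximation, as the next paragraph explains):

# Rough back-of-the-envelope using the figures quoted above.
measured_trend = 0.7    # annualized CPI growth since last October, percent
new_goods_bias = 0.8    # Broda-Weinstein average new-goods bias, pp per year

bias_adjusted = measured_trend - new_goods_bias
print(f"bias-adjusted trend: {bias_adjusted:+.1f}% per year")   # about -0.1%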

But (oh, you just knew there was going to be a "but" in here, right?) the authors also point out that, because new goods are introduced procyclically, this bias tends to be larger during expansions and smaller during recessions. In other words, given the severity of the recession and the modest pace of the recovery, there may not be a whole lot of innovation going on right now in consumer goods. This is a bad thing for consumers, of course, but it would be a good thing for the accuracy of the CPI.

By Mike Bryan, vice president and senior economist at the Atlanta Fed

July 9, 2010 in Deflation, Inflation | Permalink


Comments

This study is good, and should displace many of the big numbers floating around.

But it does not look at the even bigger problem: the CPI's owners' equivalent rent component has probably significantly understated home price increases over the last 30 years.

Posted by: spencer | July 09, 2010 at 05:28 PM

Economic changes of the coming dimension are impossible not to see.

Contrary to economic theory and Nobel Laureate Milton Friedman, monetary lags are not "long & variable". The lags for monetary flows (MVt), i.e., the proxies for (1) real growth and (2) the inflation indices, are historically, always, fixed in length.

It is an incontrovertible fact (as it now stands) that nominal GDP will cascade throughout the 4th quarter (down in every month: Oct, Nov, and Dec). And without a slingshot (added monetary and fiscal intervention/stimulus), the economy will never "reach escape velocity".

Stocks will crash by October at the latest. They will probably correct back to the right shoulder at 6,547. The bottom for this depression is in July 2011. The DOW will bottom at around 3,300.

Posted by: flow5 | July 11, 2010 at 02:43 PM

The idea that the CPI overstates inflation is open to interpretation. Various statistical measures may be applied to show that the CPI does in fact overstate the inflation rate. However, the way the CPI measures inflation is in stark contrast to the perceived value consumers place on the goods and services they purchase. If, for instance, a new automobile adds one new feature over last year's model, the added worth of that feature is $500, the feature is standard on the new model, and the price to the consumer is up by only $200, then the CPI would say that in this case not only was there no inflation, but deflation took place. The consumer purchases the new vehicle thinking that he paid $200 more than he would have paid for last year's model and that, for this vehicle, prices had risen. So he pays more for the new vehicle, which came with an added feature that was standard (meaning he had no choice about it), and is told by CPI standards that he actually paid $300 less than his actual cost outlay.
This may sound reasonable to an economist, but not to the average consumer.

Posted by: JRB | July 15, 2010 at 10:53 AM
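
Spelling out the arithmetic in the comment above (the $500 and $200 figures are the commenter's own hypotheticals): a quality adjustment subtracts the estimated value of the new feature from the observed price change, so the statisticians record a price decline even though the consumer's outlay went up.

# The commenter's hypothetical numbers: a new standard feature valued at $500,
# with the sticker price up only $200 over last year's model.
price_change = 200      # change in what the consumer actually pays, dollars
quality_value = 500     # estimated value of the added feature, dollars

quality_adjusted_change = price_change - quality_value
print(f"out-of-pocket change:    +${price_change}")
print(f"quality-adjusted change: -${abs(quality_adjusted_change)}")   # -$300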

"This may sound reasonable to an economist, but not to the average consumer."

Ya, but this is not typically what happens. Usually prices stay constant and features increase, or sometimes prices even fall and features increase. I remember buying my 1998 Corolla for more than the current 2010 Corolla, and with fewer features. Additionally, I know that the quality has improved.

It's obvious this is deflation. In fact, it's gotten to the point that I try to delay purchases as long as possible because I know the later I buy, the higher the quality, the lower the price, and the greater the features.

Posted by: assman | July 16, 2010 at 09:21 AM

Hey, what happened to the post on the Beveridge curve? I was going to leave a comment, but it disappeared.

Part of the comment is relevant to this post though: since Beveridge curve shifts are correlated with Phillips curve shifts, if the latest observation really represents a shift, then it may mean we are not as close to deflation as I thought (but I thought we were pretty damn close, even before reading this post). 4 years ago, I never would have thought that an increase in the NAIRU could be good news, but under the circumstances, it might be.

Posted by: Andy Harless | July 16, 2010 at 11:20 AM

The problem in this analysis, of course, is that the CPI is underweight in THE THINGS THAT PEOPLE MUST BUY.

So for me, comfortably middle class, the CPI is probably high. For people who struggle, it is low. The fact that my eventual heart bypass surgery will be a technological marvel will be small consolation for the family of four that barely misses qualifying for public assistance.

Posted by: Robert in Phoenix | July 17, 2010 at 08:34 PM

