About


The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.


November 29, 2018


Cryptocurrency and Central Bank E-Money

The Atlanta Fed recently hosted a workshop, "Financial Stability Implications of New Technology," which was cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. This macroblog post discusses the workshop's panel on cryptocurrency and central bank e-money. A companion Notes from the Vault post provides some highlights from the rest of the workshop.

The panel began with Douglas Elliott, a partner at Oliver Wyman, discussing some of the public policy issues associated with cryptoassets. Drawing on a recent paper he cowrote, Elliott observed that there are "at least four substantial market segments" that provide long-term support for cryptoassets:

  • libertarians and techno-anarchists who, for ideological reasons, want a currency without a government;
  • people who deeply distrust their government's economic management;
  • seekers of anonymity, who don't want their names associated with transactions and investments; and
  • technical users who find cryptoassets useful for some blockchain applications.

Besides these groups are the speculators and investors who hope to benefit from price appreciation of these assets.

Given the strong interest of these four groups, Elliott argues that cryptoassets are here to stay, but he also asserts that they raise public policy issues that regulation should address. Some issues, such as anti–money laundering, are already being addressed, but all would benefit from a coordinated global approach. However, he observes that of the four long-term support groups, only the technical users are likely to favor such regulation.

Another paper, by University of Chicago professor Gina C. Pieters, analyzed the extent to which the cryptocurrency market is global, using data on purchases of cryptocurrency with state-issued currencies. She finds that more than 90 percent of all cryptocurrency transactions occur in one of three currencies: the U.S. dollar, the South Korean won, and the Japanese yen. She further finds that the dominance of these three currencies cannot be explained by economic size, financial openness, or internet access. Pieters also observed that transactions involving bitcoin, the largest cryptocurrency by market value, do not necessarily represent a country's share of the cryptomarket.
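
As a rough illustration of the kind of share calculation behind that 90 percent figure, here is a minimal sketch with made-up trading volumes (these numbers are placeholders, not Pieters's data):

```python
# Hypothetical daily cryptocurrency purchase volumes, grouped by the
# state-issued currency used to buy -- illustrative numbers only.
volumes = {"USD": 5_400, "KRW": 2_100, "JPY": 1_900, "EUR": 450, "GBP": 150}

total = sum(volumes.values())
shares = sorted(((v / total, c) for c, v in volumes.items()), reverse=True)

for share, currency in shares:
    print(f"{currency}: {share:.1%}")
print(f"Top three currencies: {sum(s for s, _ in shares[:3]):.1%} of volume")
```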

Warren Weber, a former Minneapolis Fed economist and a visiting scholar at the Atlanta Fed, discussed so-called "stable coins," one type of cryptocurrency. The value of many cryptocurrencies has fluctuated widely in recent years: the price of one bitcoin soared from under $6,000 to more than $19,000 and then plunged back to just over $6,000, all between October 2017 and October 2018. This extreme price volatility is a significant impediment for Elliott's technical users, who would like some way of buying blockchain services with a currency controlled by a blockchain. In an attempt to meet this demand, a number of "stable coins" have been issued or are under development.

Drawing on a preliminary paper, Weber discussed three types of stable coins. The first type backs all of the currency it issues with holdings of a state-issued currency, such as the U.S. dollar. A potential weakness of these coins is that the issuer incurs operational costs that must somehow be covered. Weber observed that interest earnings might cover part of these expenses if the stable coin issuer holds the dollars in an interest-bearing asset, and that charging redemption fees might offset some or all of the remainder.
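
A back-of-the-envelope calculation shows how that arithmetic works for a fully reserved coin. All of the numbers below are illustrative assumptions, not figures from the panel:

```python
# Back-of-the-envelope economics of a fully reserved stable coin.
# All figures are illustrative assumptions, not from the workshop.

reserves = 100_000_000        # dollars held 1:1 against coins outstanding
interest_rate = 0.02          # annual yield on reserve assets (assumed)
operating_cost = 3_000_000    # annual cost of running the issuer (assumed)

interest_income = reserves * interest_rate
shortfall = operating_cost - interest_income

# Redemptions that must bear fees to cover the remaining shortfall.
annual_redemptions = 50_000_000   # dollars redeemed per year (assumed)
required_fee = max(shortfall, 0) / annual_redemptions

print(f"Interest income: ${interest_income:,.0f}")
print(f"Shortfall to cover with fees: ${shortfall:,.0f}")
print(f"Break-even redemption fee: {required_fee:.2%}")
```

In this example, interest covers $2 million of a $3 million operating cost, leaving a break-even redemption fee of 2 percent.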

The other two alternatives involve the creation of cryptofinancial entities or crypto "central banks." Both of these approaches seek to adjust the quantity of the cryptocurrency outstanding to stabilize its price in another currency. However, Weber observed that both of these approaches are subject to the problem that the cryptocurrency could take on many values depending upon people's expectations. If people come to expect that a coin will lose its value, neither of these approaches can prevent the coin from becoming worthless.
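
A toy simulation makes the expectations problem concrete. In this stylized model (my own construction, not any specific scheme discussed at the workshop), the coin's dollar price is the dollar demand for coin balances divided by coins outstanding, and the issuer adjusts supply toward its $1 target only after observing the price:

```python
# Stylized crypto "central bank": adjust coin supply to chase a $1 target.
# Price here is dollar demand for balances / coins outstanding -- a toy
# assumption, not the mechanism of any actual stable coin.

def simulate(demand_path, supply=1_000_000, target=1.0):
    for demand in demand_path:
        price = demand / supply              # market price this period
        supply = max(demand / target, 1e-9)  # adjust supply toward target
        yield price

# Calm expectations: demand for balances is steady and the peg holds.
calm = [1_000_000] * 5
# Self-fulfilling run: holders expect worthlessness, demand collapses.
run = [1_000_000, 500_000, 100_000, 0.0]

print([round(p, 2) for p in simulate(calm)])  # [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(p, 2) for p in simulate(run)])   # [1.0, 0.5, 0.2, 0.0]
```

The supply rule pins down the price only while demand holds up; once holders expect the coin to be worthless, no contraction of supply restores value, which is the multiplicity Weber described.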

The question of whether existing central banks should issue e-money was the topic of a presentation by Francisco Rivadeneyra of the Bank of Canada. Summarizing the results of his paper, Rivadeneyra observed that central banks could provide e-money that looks like a token or like a more traditional account. The potential for central banks to offer widely available account-based services has long existed; after weighing the tradeoffs, however, central banks have elected not to provide such accounts, and recent technological developments have not changed that calculus. New technologies may have changed the tradeoff for token-based systems, though many issues would need to be addressed first.

November 29, 2018 in Capital and Investment, Technology | Permalink

Comments

It seems that, in order to combat the deficiencies that cryptocurrencies possess, traditional measures are needed. Would this not take away from the purpose of the blockchain system?

Posted by: Christian Ibarra Garcia | December 07, 2018 at 05:10 PM


April 28, 2014


New Data Sources: A Conversation with Google's Hal Varian


In recent years, there has been an explosion of new data coming from places like Google, Facebook, and Twitter. Economists and central bankers have begun to realize that these data may provide valuable insights into the economy that inform and improve the decisions made by policy makers.

As chief economist at Google and emeritus professor at UC Berkeley, Hal Varian is uniquely qualified to discuss the issues surrounding these new data sources. Last week he was kind enough to take some time out of his schedule to answer a few questions about these data, the benefits of using them, and their limitations.

Mark Curtis: You've argued that new data sources from Google can improve our ability to "nowcast." Can you describe what this means and how the enormous amount of data that Google collects can be used to better understand the present?
Hal Varian: The simplest definition of "nowcasting" is "contemporaneous forecasting," though I do agree with David Hendry that this definition is probably too simple. Over the past decade or so, firms have spent billions of dollars to set up real-time data warehouses that track business metrics on a daily level. These metrics could include retail sales (like Wal-Mart and Target), package delivery (UPS and FedEx), credit card expenditure (MasterCard's SpendingPulse), employment (Intuit's small business employment index), and many other economically relevant measures. We have worked primarily with Google data, because it's what we have available, but there are lots of other sources.
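
To make the mechanics concrete, here is a minimal sketch of the kind of nowcasting regression Varian and his coauthors have described: a simple autoregressive baseline for a weekly series, augmented with a contemporaneous search-volume index. The data below are randomly generated placeholders, not Google data:

```python
# Minimal nowcasting sketch: augment an AR(1) model of a weekly series
# (e.g., initial unemployment claims) with a same-week search index.
# Both series below are simulated placeholders; in practice you would
# use the official series and a search-volume index such as Google Trends.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
T = 120
claims = 300 + np.cumsum(rng.normal(0, 5, T))  # target series (simulated)
search = claims + rng.normal(0, 10, T)         # noisy contemporaneous proxy

# Baseline: claims_t ~ const + claims_{t-1}
X_base = np.column_stack([np.ones(T - 1), claims[:-1]])
# Augmented: claims_t ~ const + claims_{t-1} + search_t
X_aug = np.column_stack([X_base, search[1:]])
y = claims[1:]

for name, X in [("AR(1) baseline", X_base), ("AR(1) + search", X_aug)]:
    beta, *_ = lstsq(X, y, rcond=None)
    resid = y - X @ beta
    print(f"{name}: in-sample RMSE = {np.sqrt(np.mean(resid**2)):.2f}")
```

The gain from the search regressor in real applications is an empirical question; the point of the sketch is only that the contemporaneous index enters the regression where no official same-week reading yet exists.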

Curtis: The ability to "nowcast" is also crucially important to the Fed. In his December press conference, former Fed Chairman Ben Bernanke stated that the Fed may have been slow to acknowledge the crisis in part due to deficient real-time information. Do you believe that new data sources such as Google search data might be able to improve the Fed's understanding of where the economy is and where it is going?
Varian: Yes, I think that this is definitely a possibility. The real-time data sources mentioned above are a good starting point. Google data seems to be helpful in getting real-time estimates of initial claims for unemployment benefits, housing sales, and loan modification, among other things.

Curtis: Janet Yellen stated in her first press conference as Fed Chair that the Fed should use other labor market indicators beyond the unemployment rate when measuring the health of labor markets. (The Atlanta Fed publishes a labor market spider chart incorporating a variety of indicators.) Are there particular indicators that Google produces that could be useful in this regard?
Varian: Absolutely. Queries related to job search seem to be indicative of labor market activity. Interestingly, queries having to do with killing time also seem to be correlated with unemployment measures!

Curtis: What are the downsides or potential pitfalls of using these types of new data sources?
Varian: First, the real measures—like credit card spending—are probably more indicative of actual outcomes than search data. Search is about intention, and spending is about transactions. Second, there can be feedback from news media and the like that may distort the intention measures. A headline story about a jump in unemployment can stimulate a lot of "unemployment rate" searches, so you have to be careful about how you interpret the data. Third, we've only had one recession since Google has been available, and it was pretty clearly a financially driven recession. But there are other kinds of recessions having to do with supply shocks, like energy prices, or monetary policy, as in the early 1980s. So we need to be careful about generalizing too broadly from this one example.

Curtis: Given the proliferation of new data coming from Google, Twitter, and Facebook, do you think this will limit, or even make obsolete, the role of traditional government statistical agencies such as the Census Bureau and the Bureau of Labor Statistics? If not, do you believe there is potential for collaboration between these agencies and companies such as Google?
Varian: The government statistical agencies are the gold standard for data collection. It is likely that real-time data can be helpful in providing leading indicators for the standard metrics, and supplementing them in various ways, but I think it is highly unlikely that they will replace them. I hope that the private and public sector can work together in fruitful ways to exploit new sources of real-time data in ways that are mutually beneficial.

Curtis: A few years ago, former Fed Chairman Bernanke challenged researchers when he said, "Do we need new measures of expectations or new surveys? Information on the price expectations of businesses—who are, after all, the price setters in the first instance—as well as information on nominal wage expectations is particularly scarce." Do data from Google have the potential to fill this need?
Varian: We have a new product called Google Consumer Surveys that can be used to survey a broad audience of consumers. We don't have ways to go after specific audiences such as business managers or workers looking for jobs. But I wouldn't rule that out in the future.

Curtis: MIT recently introduced a big-data measure of inflation called the Billion Prices Project. Can you see a big future in big data as a measure of inflation?
Varian: Yes, I think so. I know there are also projects looking at supermarket scanner data and the like. One difficulty with online data is that it leaves out gasoline, electricity, housing, large consumer durables, and other categories of consumption. On the other hand, it is quite good for discretionary consumer spending. So I think that online price surveys will enable inexpensive ways to gather certain sorts of price data, but it certainly won't replace existing methods.
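
In the same spirit, a bare-bones online price index can be built as a chained geometric mean of matched item-level price relatives. The sketch below uses three made-up products; a real project scrapes prices for thousands:

```python
# Sketch of a daily online price index: a chained geometric mean of
# matched item-level price relatives. Prices are made-up placeholders.
import numpy as np

# prices[d][item] = price of item on day d (hypothetical scraped data)
prices = [
    {"laptop": 800.0, "shoes": 60.0, "coffee": 10.0},
    {"laptop": 792.0, "shoes": 61.2, "coffee": 10.1},
    {"laptop": 796.0, "shoes": 61.8, "coffee": 10.1},
]

index = [100.0]
for prev, curr in zip(prices, prices[1:]):
    common = set(prev) & set(curr)             # match items across days
    relatives = [curr[i] / prev[i] for i in common]
    daily_change = np.exp(np.mean(np.log(relatives)))  # geometric mean
    index.append(index[-1] * daily_change)

print([round(x, 2) for x in index])
```

Matching items across days is what makes this workable with scraped data, and it is also where Varian's caveat bites: goods that are rarely sold online, like housing or electricity, never enter the match.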

By Mark Curtis, a visiting scholar in the Atlanta Fed's research department


April 28, 2014 in Economics, Forecasts, Technology, Web/Tech | Permalink



February 01, 2013


Half-Full Glasses

Just in case you were inclined to drop the "dismal" from the "dismal science," Northwestern University professor Robert Gordon has been doing his best to talk you out of it. His latest dose of glumness comes in a recent Wall Street Journal article that repeats an argument he has been making for a while now:

The growth of the past century wasn't built on manna from heaven. It resulted in large part from a remarkable set of inventions between 1875 and 1900...

This narrow time frame saw the introduction of running water and indoor plumbing, the greatest event in the history of female liberation, as women were freed from carrying literally tons of water each year. The telephone, phonograph, motion picture and radio also sprang into existence. The period after World War II saw another great spurt of invention, with the development of television, air conditioning, the jet plane and the interstate highway system…

Innovation continues apace today, and many of those developing and funding new technologies recoil with disbelief at my suggestion that we have left behind the era of truly important changes in our standard of living…

Gordon goes on to explain why he thinks potential growth-enhancing developments such as advances in healthcare, leaps in energy-production technologies, and 3-D printing are just not up to late-19th-century snuff in their capacity to better the lot of the average citizen. To paraphrase, your great-granddaddy's inventions beat the stuffing out of yours.

There has been a lot of commentary about Professor Gordon's body of work—just a few examples from the blogosphere include Paul Krugman, John Cochrane, Free Exchange (at The Economist), Gary Becker, and Thomas Edsall (who includes commentary from a collection of first-rate economists). Most of these posts note the current-day maladies that Gordon offers up to furrow the brow of the growth optimists. Among these are the following:

And inequality in America will continue to grow, driven by poor educational outcomes at the bottom and the rewards of globalization at the top, as American CEOs reap the benefits of multinational sales to emerging markets. From 1993 to 2008, income growth among the bottom 99% of earners was 0.5 points slower than the economy's overall growth rate.

Serious considerations, to be sure, but there is actually a chance that some of the "headwinds" that Gordon emphasizes are signs that something really big is afoot. In fact, Gordon's headwinds remind me of this passage, from a paper by economists Jeremy Greenwood and Mehmet Yorukoglu published about 15 years ago:

A simple story is told here that connects the rate of technological progress to the level of income inequality and productivity growth. The idea is this. Imagine that a leap in the state of technology occurs and that this jump is incarnated in new machines, such as information technologies. Suppose that the adoption of new technologies involves a significant cost in terms of learning and that skilled labor has an advantage at learning. Then the advance in technology will be associated with an increase in the demand for skill needed to implement it. Hence the skill premium will rise and income inequality will widen. In the early phases the new technologies may not be operated efficiently due to a dearth of experience. Productivity growth may appear to stall as the economy undertakes the (unmeasured) investment in knowledge needed to get the new technologies running closer to their full potential. The coincidence of rapid technological change, widening inequality, and a slowdown in productivity growth is not without precedence in economic history.
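
To see that mechanism at work, here is a toy simulation with my own stylized numbers (not the paper's calibrated model): after a technology jump, measured output sits below the old-technology baseline while expertise accumulates, and the skill premium spikes before fading:

```python
# Toy illustration of the Greenwood-Yorukoglu mechanism, with my own
# stylized numbers (not the paper's calibrated model). After a
# technology jump, output is held down by (unmeasured) learning costs
# and low expertise, while the skill premium spikes and then fades as
# the technology is mastered.
old_tech_output = 1.0      # what the mature old technology produced
new_tech_potential = 1.5   # payoff of the new technology when mastered
learning_cost = 0.4        # output diverted to unmeasured learning
learning_rate = 0.2        # speed of learning by doing
expertise = 0.3            # initial know-how with the new technology

for t in range(12):
    output = new_tech_potential * expertise - learning_cost * (1 - expertise)
    skill_premium = new_tech_potential * (1 - expertise)  # crude proxy
    flag = "below old-tech baseline" if output < old_tech_output else ""
    print(f"t={t:2d} output={output:5.2f} premium={skill_premium:4.2f} {flag}")
    expertise += learning_rate * (1 - expertise)          # learning by doing
```

The numbers are arbitrary; the qualitative point is that measured productivity can lag the old-technology baseline for years even while the economy invests heavily in the knowledge that eventually pays off.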

Greenwood and Yorukoglu go on to assess, in detail, how durable-goods prices, inequality, and productivity actually behaved in the first and second industrial revolutions. They conclude that game-changing technologies have, in history, been initially associated with falling capital prices, rising inequality, and falling productivity. Here is a representative chart, depicting the period (which was rich with technological advance) leading up to Gordon's (undeniably) golden age:

[Chart from "1974," Jeremy Greenwood and Mehmet Yorukoglu, Carnegie-Rochester Conference Series on Public Policy, 46, 1997]


Greenwood and Yorukoglu conclude their study with this pointed question:

Plunging prices for new technologies, a surge in wage inequality, and a slump in the advance of labor productivity – could all this be the hallmark of the dawn of an industrial revolution? Just as the steam engine shook 18th-century England, and electricity rattled 19th-century America, are information technologies now rocking the 20th-century economy?

I don't know (and nobody knows) if the dark-before-the-dawn possibility described by Greenwood and Yorukoglu is the apt analogy for where the U.S. (and global) economy sits today. (Update: Clark Nardinelli also discussed this notion.) But I will bet you there was some commentator writing in 1870 who sounded an awful lot like Professor Gordon.

By Dave Altig, executive vice president and research director of the Atlanta Fed

February 1, 2013 in Economics, Productivity, Technology | Permalink


Comments

Thank you. It is always much easier to see what can go wrong than to see what can go right.

Posted by: Douglas Lee | February 02, 2013 at 11:30 AM

Dr. Altig, a great post. Thanks for the historical reference, which will be useful in my research. I just strongly disagree with Gordon. Betting against a new technological wave with a very large positive supply (and demand) shock is fool's gold. My personal bet is a truly radical breakthrough in energy.

Posted by: Steve Bannister | February 02, 2013 at 02:26 PM

