The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.
January 04, 2018
Financial Regulation: Fit for New Technologies?
In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.
Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.
As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that AI is capable of very good pattern recognition—one of its major strengths. The ability to recognize patterns has a variety of applications including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.
However, I also observed that machine learning (ML), the most prominent branch of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot prove the existence of causality.
This combination of strengths and weaknesses implies that ML might provide new insights about the working of the financial system to supervisors, who can use other information to evaluate these insights. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.
John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.
On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that the digits resulting from a nonrandom number selection may differ significantly from expected frequency distributions. Thus, if a bank is committing fraud, the accounting numbers it reports may deviate significantly from what would otherwise be expected. Their preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
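The mechanics of Benford's Digit Analysis are easy to sketch. The snippet below is an illustration of the general idea only, not the method in O'Keefe and Yom's paper; the function names and the chi-square-style deviation statistic are my own choices. It compares the leading-digit frequencies of a set of reported figures against the frequencies Benford's law predicts:

```python
import math
from collections import Counter

def benford_expected():
    # Benford's law: P(leading digit = d) = log10(1 + 1/d), d = 1..9
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    # First significant digit of a number, ignoring sign and leading zeros
    return int(str(abs(x)).lstrip("0.")[0])

def benford_deviation(values):
    """Chi-square-style deviation of observed leading-digit frequencies
    from Benford's law; larger values suggest the figures may not arise
    from a natural, unmanipulated process."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    obs = Counter(digits)
    exp = benford_expected()
    return sum((obs.get(d, 0) - n * exp[d]) ** 2 / (n * exp[d])
               for d in range(1, 10))
```

A large deviation statistic would flag a bank's reported figures for closer examination; actual supervisory screens would involve formal significance thresholds and more data cleaning than this sketch shows.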
Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.
Potentially, ML also measures a borrower's credit risk more effectively than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest lender for personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.
It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can more accurately price a loan's risk than a simple FICO score-based model would allow. Taken together, the presenters made clear that AI is likely to also transform many aspects of the financial sector.
January 03, 2018
Is Macroprudential Supervision Ready for the Future?
Virtually everyone agrees that systemic financial crises are bad not only for the financial system but even more importantly for the real economy. Where the disagreements arise is how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.
In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.
Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the conference.
The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.
Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from economic growth to falling gross domestic product (GDP), and, conversely, from recession back to growth. Their model predicted a very high probability of transitioning to a path of falling GDP in the fourth quarter of 2007, a low probability of transitioning to a falling path in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.
Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should immediately adopt an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.
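The logic of such a cost-benefit framework can be caricatured in a few lines. This is a deliberately stylized sketch of my own, not Brave and Lopez's model: the buffer is activated when the estimated transition probability makes its expected benefit in a downturn exceed the cost of carrying extra capital during the boom.

```python
def activate_buffer(p_downturn, benefit_if_downturn, carrying_cost):
    """Stylized countercyclical-buffer rule: raise capital requirements
    now if the expected benefit of the extra buffer in a downturn
    exceeds the cost of holding it while times are good. All inputs
    are hypothetical placeholders for illustration."""
    return p_downturn * benefit_if_downturn > carrying_cost
```

Under this caricature, a very high transition probability (as their model estimated for 2007:Q4) triggers immediate activation, while a low probability (as in 2011:Q4) does not.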
The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which has an average of four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.
Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored looking at the recent effectiveness of two such tools in Ireland.
In February 2015, the Irish government watched as housing prices climbed from their postcrisis lows at a potentially unsafe rate. In an attempt to limit the flow of funds into risky mortgage loans, the government imposed limits on the maximum permissible loan-to-value (LTV) ratio and loan-to-income ratio (LTI) for new mortgages. These regulations became effective immediately upon their announcement and prevented the Irish banks from making loans that violated either the LTV or LTI requirements.
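Limits of this kind are mechanically simple to enforce at origination. The check below is illustrative only; the cap values are hypothetical placeholders, not the actual Irish limits, which varied by borrower type:

```python
def mortgage_conforms(loan, property_value, gross_income,
                      max_ltv=0.80, max_lti=3.5):
    """Check a new mortgage against loan-to-value and loan-to-income caps.
    The default cap values are hypothetical, chosen only to illustrate
    the structure of the regulation."""
    ltv = loan / property_value   # share of the property's value financed
    lti = loan / gross_income     # loan size as a multiple of income
    return ltv <= max_ltv and lti <= max_lti
```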
Crosignani and his coauthors were able to measure a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that conformed to the requirements largely offset this drop. Additionally, Crosignani and his coauthors find that the banks that were most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. Their findings suggest that the regulations may have stopped higher-risk mortgage lending but that other changes in the banks' portfolios at least partially undid the effect on their risk exposure.
August 11, 2016
Forecasting Loan Losses for Stress Tests
Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.
The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.
The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that the severely adverse scenario will have an unemployment increase of between 3 and 5 percentage points or a level of 10 percent overall. That statement observes that during the last half century, the United States has seen four severe recessions with that large of an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three severe recessions.
To forecast the losses from such a severe recession, the banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss associated with a portfolio of loans as a function of the variables in the scenario. In estimating these models, banks often have a very large number of loans with which to estimate losses in their various portfolios, especially the consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might have at best data for only the relatively mild recession of 2001–02 and the severe recession of 2007–09.
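A minimal example of such a portfolio loss model might look like the following, in which a baseline default probability is shifted by the scenario's unemployment path through a logistic link. This is my own sketch of the general structure, not any bank's actual model; the coefficient and loss-given-default values are hypothetical:

```python
import math

def expected_loss(balance, baseline_pd, unemployment_change,
                  beta_unemployment=0.35, loss_given_default=0.45):
    """Stylized stress-test loss model: expected loss on a portfolio as
    a function of a scenario variable (the change in the unemployment
    rate, in percentage points). All parameter values are illustrative,
    not estimated."""
    logit = math.log(baseline_pd / (1 - baseline_pd))
    stressed_pd = 1 / (1 + math.exp(-(logit + beta_unemployment
                                      * unemployment_change)))
    return balance * stressed_pd * loss_given_default
```

The estimation problem described above is precisely that the coefficient linking losses to the scenario variable must be fit from very few observed downturns.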
Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.
With the potential for models to underestimate losses given there are so few downturns to calibrate to, the stress testing process allows humans to make judgmental changes (or overlays) to model estimates when the model estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.
My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty around projected losses because of limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.
One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.
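A textbook instance of the kind of updating Jensen describes is the Beta-Binomial model for a portfolio default rate, in which information from elsewhere (say, the Swedish experience with negative rates) can be encoded in the prior. This is my own minimal illustration of Bayes' theorem in this setting, not the specific models in Jensen's work:

```python
def posterior_default_rate(prior_alpha, prior_beta, defaults, loans):
    """Beta-Binomial update: start with a Beta(alpha, beta) prior on the
    default rate (which could encode, e.g., foreign experience with a
    scenario we have never observed domestically), observe `defaults`
    out of `loans`, and return the posterior mean, which shrinks the
    observed rate toward the prior."""
    post_alpha = prior_alpha + defaults
    post_beta = prior_beta + loans - defaults
    return post_alpha / (post_alpha + post_beta)
```

With few domestic observations the prior dominates; as relevant data accumulate, the posterior moves toward the observed default rate, which is exactly the disciplined blending of outside information and data that the Bayesian approach offers.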
Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III that was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one in which the historical losses in a loan portfolio provide the parameters of a loss distribution. While Basel III faces the same problem of limited loan loss data—so it almost surely underestimates some risks—those errors are likely to be somewhat different from those produced by the stress tests. Hence, the use of both measures is likely to somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.
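The flavor of the risk-based approach can be conveyed with the Vasicek single-factor model that underlies the Basel risk-weight formulas: historical default and loss-given-default estimates parameterize a loss distribution, and capital is set against a high percentile of that distribution. The sketch below omits the maturity and other adjustments in the actual Basel III formulas, and the parameter values in the test are placeholders:

```python
import math
from statistics import NormalDist

def vasicek_capital(pd, lgd, rho, confidence=0.999):
    """Unexpected-loss capital charge in the Vasicek single-factor model:
    the portfolio loss rate at the given confidence level, less expected
    loss. `pd` is the through-the-cycle default probability, `lgd` the
    loss given default, and `rho` the asset correlation."""
    n = NormalDist()
    stressed_pd = n.cdf(
        (n.inv_cdf(pd) + math.sqrt(rho) * n.inv_cdf(confidence))
        / math.sqrt(1 - rho)
    )
    return lgd * (stressed_pd - pd)
```

Note how this differs from the stress-test approach: rather than projecting losses along one specified macroeconomic path, it reads capital off the tail of an assumed loss distribution, so the two methods tend to make different errors when data are scarce.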
Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.
Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.
April 13, 2016
Putting the MetLife Decision into an Economic Context
In a recently released decision, a U.S. district court has ruled that the Financial Stability Oversight Council's (FSOC's) decision to designate MetLife as a potential threat to financial stability was "arbitrary and capricious" and rescinded that designation. This decision raises many questions, among them:
- Why did MetLife sue to end its status as a too-big-to-fail (TBTF) firm?
- How will this decision affect the Federal Reserve's regulation of nonbank financial firms?
- What else can be done to reduce the risk of crisis arising from nonbank financial firms?
Why does MetLife want to end its TBTF status?
An often-expressed concern is that market participants will consider FSOC-designated firms too big to fail, and investors will accord these firms lower risk premiums (see, for example, Peter J. Wallison). The result is that FSOC-designated firms will gain a competitive advantage. If so, why did MetLife sue to have the designation rescinded? And why did the announcement of the court's determination result in an immediate 5 percent increase in MetLife's stock price?
One possible explanation is that the FSOC's designation guarantees the firm will be subject to higher regulatory costs, but it only marginally changes the likelihood it would receive a government bailout. The Dodd-Frank Act (DFA) requires that FSOC-designated firms be subject to consolidated prudential supervision by the Federal Reserve using standards that are more stringent than the requirements for other nonbank financial firms.
Moreover, the argument that such designation automatically conveys a competitive advantage has at least two weaknesses. First, although Title II of the DFA authorizes the Federal Deposit Insurance Corporation (FDIC) to resolve a failing nonbank firm in certain circumstances, DFA does not provide FDIC insurance for any of the nonbank firm's liabilities, nor does it provide the FDIC with funds to undertake a bailout. The FDIC is supposed to recover its costs from the failed firm's assets. Admittedly, DFA does allow for the possibility that the FDIC would need to assess other designated firms for part of the cost of a resolution. However, MetLife could as easily have been assessed to pay for another firm as it could have been the beneficiary of assessments on other systemically important firms.
A second potential weakness in the competitive advantage argument is that the U.S. Treasury Secretary decides to invoke FDIC resolution only after receiving a recommendation from the Federal Reserve Board and one other federal financial regulatory agency (depending upon the type of failing firm). Invocation of resolution is not automatic. Moreover, any decision authorizing FDIC resolution must include findings that, at the time of authorization:
- the firm is in default or in danger of default,
- resolution under other applicable law (bankruptcy statutes) would have "serious adverse consequences" on financial stability, and
- those adverse effects could be avoided or mitigated by FDIC resolution.
Although it would seem logical that FSOC-designated firms are more likely to satisfy these criteria than other financial firms, the Title II criteria for FDIC resolution are the same for both types of firms.
How does this affect the Fed's regulation of nonbank firms?
Secretary of the Treasury Jack Lew has indicated his strong disagreement with the district court's decision, and the U.S. Treasury has said it will appeal. Suppose, however, that FSOC designation ultimately does become far more difficult. How significantly would that affect the Federal Reserve's regulatory power over nonbank financial firms?
Although the obvious answer would be that it would greatly reduce the Fed's regulatory power, recent experience casts some doubt on this view. Nonbank financial firms appear to regard FSOC designation as imposing costly burdens that substantially exceed any benefits they receive. Indeed, GE Capital viewed the costs as so significant that it had been selling large parts of its operations and recently petitioned the FSOC to rescind its designation. Unless systemically important activities are a core part of the firm's business model, nonbank financial firms may decide to avoid undertaking activities that would risk FSOC designation.
Thus, a plausible set of future scenarios is that the Federal Reserve would be supervising few, if any, nonbank financial firms regardless of the result of the MetLife case. Rather, ultimate resolution of the case may have more of an impact on whether large nonbank financial firms conduct systemically important activities (if designation becomes much harder) or the activities are conducted by some combination of smaller nonbank financial firms and by banks that are already subject to Fed regulation (if the ruling does not prevent future designations).
Regardless of how the courts and the FSOC respond to this recent judicial decision, the financial crisis should have taught us valuable lessons about the importance of the nonbank financial sector to financial stability. However, those lessons should go beyond merely the need to impose prudential supervision on any firms that are systemically important.
The cause of the financial crisis was not the failure of one or two large nonbank financial firms. Rather, the cause was that almost the entire financial system stood on the brink of collapse because almost all the major participants were heavily exposed to the weak credit standards that were pervasive in the residential real estate business. Yet if the real problem was the risk of multiple failures as a result of correlated exposures to a single large market, perhaps we ought to invest more effort in evaluating the riskiness of markets that could have systemic consequences.
In an article in Notes from the Vault and other forums, I have called for systematic end-to-end reviews of major financial markets starting with the origination of the risks and ending with the ultimate holder(s) of the risks. This analysis would involve both quantitative analysis of risk measures and qualitative analysis of the safeguards designed to reduce risk.
The primary goal would be to identify and try to correct weaknesses in the markets. A secondary goal would be to give the authorities a better sense of where problems are likely to arise if a market does encounter problems.
April 11, 2016
The Rise of Shadow Banking in China
China's banking system has suffered significant losses over the past two years, which has raised concerns about the health of China's financial industry. Such losses are perhaps not all that surprising. Commercial banks have been increasing their risk-taking activities in the form of shadow lending. See, for example, here, here, and here for some discussion of the evolution of China's shadow banking system.
The increase in risk taking by banks has occurred despite a rapid decline in money growth since 2009 and the People's Bank of China's efforts to limit credit expansions to real estate and other industries that appear to have excess capacity.
One area of expanded activity has been investment in asset-backed "securities" by China's large non-state banks. This investment has created potentially significant risks to the balance sheets of these institutions (see the charts below). Using the micro-transaction-based data on shadow entrusted loans, Chen, Ren, and Zha (2016) have provided theoretical and empirical insights into this important issue (see also this Vox article that summarizes the paper).
Recent regulatory reforms in China have taken a positive step to try to limit such risk-taking behavior, although the success of these efforts remains to be seen. An even more challenging task lies ahead for designing a comprehensive and sustainable macroprudential framework to support the healthy functioning of China's traditional and shadow banking industries.
May 13, 2010
Regulatory reform via resolution: Maybe not sufficient, certainly necessary
This macroblog post is the first of several that will feature the Atlanta Fed's 2010 Financial Markets Conference. Please return for additional information.
On Tuesday and Wednesday the Federal Reserve Bank of Atlanta hosted its annual Financial Markets Conference, titled this year Up From the Ashes: The Financial System After the Crisis. Much of the first day was devoted to conversations about rating agencies and their role in the economy, for better and worse. The second day was absorbed by the issues of too-big-to-fail, macroprudential regulation, and regulatory reform.
One theme that ran throughout the second day's conversations related to the two aspects of regulatory reform highlighted by Chairman Bernanke in his recent congressional testimony on lessons from the failure of Lehman Brothers:
"The Lehman failure provides at least two important lessons. First, we must eliminate the gaps in our financial regulatory framework that allow large, complex, interconnected firms like Lehman to operate without robust consolidated supervision… Second, to avoid having to choose in the future between bailing out a failing, systemically critical firm or allowing its disorderly bankruptcy, we need a new resolution regime, analogous to that already established for failing banks."
Though those two aspects of reform are in no way mutually exclusive, there is, I think, a tendency to lean toward one or the other as the more important contributor to avoiding a repeat of our recent travails. To put it in slightly different terms, there are those who would place the greatest emphasis on reducing the probability of systemically important failures and those who would put the greatest emphasis on containing the damage when a systemically important failure occurs.
"…the best chance for durable reform is to start with the assumption that failure will happen and construct a strategy for dealing with it when it does…
"In a world with the capacity for rapid innovation, rule-writers have a tendency to perpetually fight the last war…
"I am not arguing that … the 'Volcker rule,' derivative exchanges, trading restrictions, or any of the specific regulatory reform proposals in play are necessarily bad ideas. I am arguing that we should assume that, no matter what proposed safeguards are put in place, failure of some systemically important institution will ultimately occur—somewhere, somehow. And that means priority has to be given to the development of resolution procedures for institutions that are otherwise too big to fail."
At our conference this week, University of Florida professor Mark Flannery expressed concerns that, placed in an international context, a truly robust resolution process for failed institutions may be tough to construct:
"In principle, a non-bankruptcy reorganization channel for SIFIs [systemically important financial institutions] makes a lot of sense. But the complexity of SIFIs' organizational structures introduces some serious problems. Not only do SIFIs operate with a bewildering array of subsidiaries… but they generally operate in many countries. Without very close coordination of resolution decisions across jurisdictions, a U.S. government reorganization would likely set off a scramble for assets of the sort that bankruptcy is meant to avoid. Rapid asset sales could generate downward price spirals… with systemically detrimental effects. Second, supervisors would have to assure that SIFIs maintain the proper sort and quantity of haircut-able liabilities outstanding. Once a firm has been identified as systemically important, this may be a relatively straightforward requirement to impose, but there remains the danger that 'shadow' institutions will become systemically important, before they are properly regulated. (This is not a danger unique to the question of resolution.)
"I conclude that the international coordination required to make prompt resolution feasible for SIFIs is a long way off, if it can be achieved at all."
Not an encouraging note, and the point is very well taken. Flannery concludes that we would be better served by focusing on changes that lie on the "avoiding failure" end of the reform spectrum: standardized derivative contracts, tying supervisory oversight to objective market-based metrics on the health of SIFIs, limitations on risky activities, and higher capital standards.
As I noted above, I am certainly not hostile to these ideas, and the answer to the question "should reform strategies be rules-based or resolution-based?" is surely "all of the above." But even if it will take a long time to develop better resolution procedures to address the types of problems that emerged in the past several years, I strongly argue that the development of such procedures is necessary for the long term, and work on them should begin. And here I have a relatively modest proposal, returning to my remarks:
"…there is a pretty obvious way to vet proposals that are offered. We have a couple of real-world case studies—Bear Stearns, Lehman, AIG. One test for any proposed resolution process would be to illustrate how that plan would have been implemented in each of those cases. This set of experiments can't be started too soon, and I think we should move it to the top of our reform priorities."
Whether it be the specific provisions of reform bills winding their way through Congress or the "living will" idea championed this week by the Federal Deposit Insurance Corporation, I think we would do well to let the stress testing of those proposals begin.
By Dave Altig, senior vice president and research director at the Atlanta Fed
April 06, 2010
Breaking up big banks: As usual, benefits come with a side of costs
Probably the least controversial proposition among an otherwise very controversial set of propositions on which financial reform proposals are based is that institutions deemed "too big to fail" (TBTF) are a real problem. As Fed Chairman Bernanke declared not too long ago:
As the crisis has shown, one of the greatest threats to the diversity and efficiency of our financial system is the pernicious problem of financial institutions that are deemed "too big to fail."
The next question, of course, is how to deal with that threat. At this point the debate gets contentious. One popular suggestion for dealing with the TBTF problem is to just make sure that no bank is "too big." Two scholars leading that charge are Simon Johnson and James Kwak (who are among other things the proprietors at The Baseline Scenario blog). They make their case in the New York Times' Economix feature:
Since last fall, many leading central bankers including Mervyn King, Paul Volcker, Richard Fisher and Thomas Hoenig have come out in favor of either breaking up large banks or constraining their activities in ways that reduce taxpayers' exposure to potential failures. Senators Bernard Sanders and Ted Kaufman have also called for cutting large banks down to a size where they no longer pose a systemic threat to the financial system and the economy.
…We think that increased capital requirements are an important and valuable step toward ensuring a safer financial system. We just don't think they are enough. Nor are they the central issue…
We think the better solution is the "dumber" one: avoid having banks that are too big (or too complex) to fail in the first place.
Paul Krugman has noted one big potential problem with this line of attack:
As I argued in my last column, while the problem of "too big to fail" has gotten most of the attention—and while big banks deserve all the opprobrium they're getting—the core problem with our financial system isn't the size of the largest financial institutions. It is, instead, the fact that the current system doesn't limit risky behavior by "shadow banks," institutions—like Lehman Brothers—that carry out banking functions, that are perfectly capable of creating a banking crisis, but, because they issue debt rather than taking deposits, face minimal oversight.
In addition to that observation—which is the basis of calls for a systemic regulator that spans the financial system, and not just specific classes of financial institutions—there is another, very basic, economic question: Why are banks big?
To that question, there seems to be an answer: We have big banks because there are efficiencies associated with getting bigger—economies of scale. David Wheelock and Paul Wilson, of the Federal Reserve Bank of St. Louis and Clemson University, respectively, sum up what they and other economists know about economies of scale in banking:
…our findings are consistent with other recent studies that find evidence of significant scale economies for large bank holding companies, as well as with the view that industry consolidation has been driven, at least in part, by scale economies. Further, our results have implications for policies intended to limit the size of banks to ensure competitive markets, to reduce the number of banks deemed "too-big-to-fail," or for other purposes. Although there may be benefits to imposing limits on the size of banks, our research points out potential costs of such intervention.
Writing at the National Review Online, the Cato Institute's Arnold Kling acknowledges the efficiency angle, and then dismisses it:
There's a long debate to be had about the maximum size to which a bank should be allowed to grow, and about how to go about breaking up banks that become too large. But I want to focus instead on the general objections to large banks.
The question can be examined from three perspectives. First, how much economic efficiency would be sacrificed by limiting the size of financial institutions? Second, how would such a policy affect systemic risk? Third, what would be the political economy of limiting banks' size?
It is the political economy that most concerns me…
If we had a free market in banking, very large banks would constitute evidence that there are commensurate economies of scale in the industry. But the reality is that our present large financial institutions probably owe their scale more to government policy than to economic advantages associated with their vast size.
I added the emphasis to the "probably" qualifier.
The Wheelock-Wilson evidence does not disprove the Kling assertion, as the estimates of scale economies are obtained using banks' cost structures, which certainly are impacted by the nature of government policy. But if economies of scale are in some way intrinsic to at least some aspects of banking—and not just political economy artifacts—the costs of placing restrictions on bank size could introduce risks that go beyond reducing the efficiency of the targeted financial institutions. If some banks are large for good economic reasons, the forces that move them to become big would likely emerge with force in the shadow banking system, exacerbating the very problem noted by Krugman.
I think it bears noting that the argument for something like constraining the size of particular banks implicitly assumes that it is not possible, for reasons that are either technical or political, to actually let failing large institutions fail. Maybe it is so, as Robert Reich asserts in a Huffington Post item today. And maybe it is in fact the case that big is not beautiful when it comes to financial institutions. But in evaluating the benefits of busting up the big guys, we shouldn't lose sight of the possibility that this is also a strategy that could carry very real costs.
By Dave Altig, senior vice president and research director at the Atlanta Fed
March 05, 2010
In the beginning, there was a lender of last resort
Steven Pearlstein, business columnist for the Washington Post, asks and answers the question "should the Fed stay out of the bank supervision business?"
"As the Senate begins to focus on how to fix financial regulation, one of the remaining unresolved issues is what role the Federal Reserve should have in supervising banks.
"The correct answer? None at all."
One of the centerpieces of the Pearlstein argument is this:
"The reality is that the Fed's primary focus is and will always be on monetary policy. Bank supervision will continue, as it has been, as a secondary activity that not only receives less attention from the top but will be sacrificed at those rare but crucial moments when the two missions might conflict. Indeed, by arguing that the Fed needs the insights gleaned from bank supervision to be more effective in making monetary policy, the Fed essentially acknowledges this hierarchy in its priorities. Bank supervision is important enough that it ought to be somebody else's top priority."
If you'll allow me a moment of personal indulgence, there was a time when I had some sympathy with the sentiment that the "Fed's primary focus is and always will be on monetary policy." I, of course, knew the story of the creation of the Fed, motivated by the need to provide an elastic currency to avoid disruptive fluctuations in prices and a lender of last resort to stop liquidity stress from becoming a full-blown financial crisis. But that was a story from the past. The modern world began in 1935 with the statutory creation of the Federal Open Market Committee, which would eventually evolve, with its central bank brethren in the rest of the world, into the institution described by Pearlstein as being primarily focused on monetary policy.
I felt that way until Sept. 11, 2001. On an average day in the week ending Sept. 5 of that year, the Federal Reserve extended $21 million in discount loans to banks, a reasonably representative volume. On Sept. 12, discount loans amounted to over $45 billion. As a result, the U.S. financial system did not collapse.
The horrible circumstances of 9/11 have been thankfully unique, but there is a case to be made for the proposition that the most important role of the central bank in the recent financial crisis was not in the realm of traditional monetary policy but in the exercise of variations on the lender-of-last-resort function. In fact, in times of acute financial stress, this role must always be so. Witness this remark by Alan Greenspan on Oct. 20, 1987:
"… in a crisis environment, I suspect we shouldn't really focus on longer-term policy questions until we get beyond this immediate period of chaos."
Which brings us to the question of the Fed's role in bank supervision. More precisely, it brings us to comments from Atlanta Fed President Dennis Lockhart, who delivered remarks on Wednesday to the New York Association for Business Economics:
"… the Fed must play a central role in a defense structure designed to prevent or manage future crises. My key argument is the indivisibility of monetary authority, the lender-of-last-resort role, and a substantial direct role in bank supervision. Only the Fed can act as lender of last resort because only the monetary authority can print money in an emergency. To make sound decisions, the lender of last resort needs intimate hard and qualitative knowledge of individual financial institutions, their connectedness to counterparties, and the capacity of management.
"There is sentiment in Washington that would separate these tightly linked functions that are so critical in responding to a financial crisis. Removing the central bank from a supervision role designed to provide totally current, firsthand knowledge and information will weaken defenses against recurrence of financial instability. Flawed defenses could be calamitous in a future we cannot see."
If this advice goes unheeded, I fear we might discover its wisdom in the worst possible circumstances.
By Dave Altig, senior vice president and research director at the Atlanta Fed
February 19, 2010
Should the Fed stay in regulation?
One of the central issues in the postcrisis effort to reform our regulatory infrastructure is who should do the regulating. The answer to some in Congress is none of the above:
"… under consideration is a consolidated bank regulator, one aide [to Alabama Senator Richard Shelby] said. The idea is supported by [Connecticut Senator Christopher] Dodd, who proposed eliminating the Office of Thrift Supervision and Office of the Comptroller of the Currency, and moving their powers, along with the bank-supervision powers of the Federal Reserve and the Federal Deposit Insurance Corp., to the new agency.
"Negotiators are still deciding how to monitor firms for systemic risk, including how to define and measure it, what authorities to give a regulator and which agency is best suited to get the power, a Shelby aide said."
As reported in The New York Times:
"The Senate and the Obama administration are nearing agreement on forming a council of regulators, led by the Treasury secretary, to identify systemic risk to the nation's financial system, officials said Wednesday…"
Though the idea of a council to provide regulatory and supervisory oversight is still contentious (the Times article offers multiple opinions from Federal Reserve officials), the formation of a council is not necessarily the same thing as removing the central bank from boots-on-the-ground, or operational, supervisory responsibility. In other words, there is still the question of how to monitor systemic risk and which agency is best suited to get the power.
Earlier this week I made note of a new International Monetary Fund (IMF) paper by Olivier Blanchard, Giovanni Dell'Ariccia, and Paolo Mauro, taking some issue with the proposal that central banks consider raising their long-run inflation objectives. Though that part of the paper seemed to attract almost all of the attention in the media and blogosphere, the discussion in the IMF article expanded well beyond that inflation target issue. Included among the many proposals of Blanchard et al. was this, on systemic risk regulation and the role of the central bank:
"If one accepts the notion that, together, monetary policy and regulation provide a large set of cyclical tools, this raises the issue of how coordination is achieved between the monetary and the regulatory authorities, or whether the central bank should be in charge of both.
"The increasing trend toward separation of the two may well have to be reversed. Central banks are an obvious candidate as macroprudential regulators. They are ideally positioned to monitor macroeconomic developments, and in several countries they already regulate the banks. 'Communication' debacles during the crisis (for example on the occasion of the bailout of Northern Rock) point to the problems involved in coordinating the actions of two separate agencies. And the potential implications of monetary policy decisions for leverage and risk taking also favor the centralization of macroprudential responsibilities within the central bank."
Consistent with the even-handedness of the Blanchard et al. paper, the authors did not come to this conclusion without noting the legitimate issues of those who would separate regulatory authority from the central bank:
"Against this solution, two arguments were given in the past against giving such power to the central bank. The first was that the central bank would take a 'softer' stance against inflation, since interest rate hikes may have a detrimental effect on bank balance sheets. The second was that the central bank would have a more complex mandate, and thus be less easily accountable. Both arguments have merit and, at a minimum, imply a need for further transparency if the central bank is given responsibility for regulation."
But, they conclude:
"The alternative, that is, separate monetary and regulatory authorities, seems worse."
I wonder, then: Would a regulatory council of which the Federal Reserve is a member, combined with operational supervisory responsibilities housed within the central bank, be a tolerably good response to Blanchard's and his colleagues' admonitions?
By Dave Altig, senior vice president and director of research at the Atlanta Fed
December 23, 2009
Change the bathwater, keep the baby
What have we learned from the experience of the last two years? The Wall Street Journal offers up one discouraging conclusion:
"For much of the past century, America has served as the global model for the power of free markets to generate prosperity…
"In the 2000s, though, the U.S. quickly went from being the beacon of capitalism to a showcase for some of its flaws…
"But one thing is certain: America's success or failure over the next decade will go a long way toward defining what the world's next economic model will be."
One of the article's implied alternatives for the world's next economic model seems a bit of a stretch:
"The troubles in the U.S. stand in sharp contrast to the relative success of other countries, notably China. With a system that is at best quasi-capitalist, China's economic output per person grew an inflation-adjusted 141% over the decade, and hardly paused for the global crisis, according to estimates from the International Monetary Fund. That compares with 9% growth in the U.S. over the same period."
Let's put that comparison to rest right away:
The theory of economic growth is rich, interesting, and somewhat unsettled, but it stands to reason that emerging economies, where the fruit hangs low, can for a time grow much faster than advanced, fully developed countries. Furthermore, I find it reasonable to assume that, contrary to representing an alternative economic model, the Chinese experience over the past decade is itself evidence that even incomplete movements in the direction of free markets can pay large dividends. But even if you doubt that interpretation, the gap between the material circumstances of the average American and Chinese citizen is so large as to make comparisons about the success of the respective economic models premature by several decades.
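To see why the decade figures look so lopsided, it helps to convert them into average annual rates. Here is a minimal back-of-the-envelope sketch; the `annualized` helper is purely illustrative, and the 141% and 9% inputs are simply the IMF decade figures quoted above:

```python
# Convert a cumulative growth rate over a period into the implied
# average annual (compound) growth rate.

def annualized(total_growth, years):
    """total_growth is the cumulative fractional gain, e.g. 1.41 for 141%."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

# IMF decade figures quoted above: inflation-adjusted output per person, 2000s.
china_annual = annualized(1.41, 10)  # roughly 9.2% per year
us_annual = annualized(0.09, 10)     # roughly 0.9% per year

print(f"China: {china_annual:.1%} per year, U.S.: {us_annual:.1%} per year")
```

Even annualized, the gap is wide, but these are growth rates from very different starting levels of income, which is exactly the catch-up point at issue.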
In fact, the picture above nicely illustrates what I believe is a more on-the-mark observation in the WSJ article:
"At least twice in the past century, the U.S. has re-emerged from deep crises to reinvent capitalism. In the 1930s, the Depression compelled Franklin Roosevelt to introduce Social Security, deposit insurance and the Securities and Exchange Commission.
"After the brutal stagflation of the 1970s and early 1980s, then-Federal Reserve Chairman Paul Volcker demonstrated the ability of an independent central bank to get prices under control, ushering in an age in which powerful, largely autonomous central banks became the norm throughout the developed world."
So what, then, is the alternative model waiting in the wings to replace the current one? It's not given a name, but the features are clear in the article:
"Policy makers' focus now, though, is on the financial sector that failed so spectacularly. Progress has been slow, and key pieces are missing, but the contours of a new system are taking shape. Banks will face stricter limits on their use of borrowed money, or 'leverage,' to boost returns. The Fed will keep a closer eye on markets during booms, and possibly step in to curb excessive risk-taking—a U-turn from its previous policy of mopping up after bubbles burst.
"Such changes would amount to a grand bargain: Give up some of the growth and dynamism of the U.S. economy for a safer, more equitable brand of capitalism—one that could avoid the kind of busts that turned the 2000s into such a disaster."
OK, but here is the central question: How can we be sure that the "new system" will be an improvement on the one it replaces? Some of the most significant failures of the last couple of years occurred in highly regulated industries. So the absence of regulation is not really at issue, but rather what kind of regulation we will have, and how it will be implemented. And there is the obvious point that regulatory change is not really reform if it undermines a system's existing strength. Some of the reform proposals on the table, for example, have the potential to seriously compromise "the ability of an independent central bank to get prices under control," the very feature of our current system that the article identifies as an historical source of resilience.
I worry about a regulatory change that commences from the proposition that we must "give up some of the growth and dynamism of the U.S. economy for a safer, more equitable brand of capitalism." In their introduction to a comprehensive set of reform proposals from New York University's Stern School of Business, professors Viral Acharya and Matthew Richardson have this to say:
"There are many cracks in the financial system, some of which we now know, others no doubt we will discover down the road.… A common theme of our proposals notes that fixing all the cracks will shore up the financial house but at great cost. Instead, by fixing a few major ones, the foundation can be stabilized, the financial structure rebuilt, and innovation and markets can once again flourish."
One of those major cracks is the "too-big-to-fail" distortion. Is it important to remember that too-big-to-fail is itself a creation of regulation, not markets? I think so.
By David Altig, senior vice president and research director at the Atlanta Fed