The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.
January 04, 2018
Financial Regulation: Fit for New Technologies?
In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.
Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.
As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that AI is capable of very good pattern recognition—one of its major strengths. The ability to recognize patterns has a variety of applications including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.
Conversely, I observed that machine learning (ML), the more popular part of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot prove the existence of causality.
This combination of strengths and weaknesses implies that ML might provide new insights about the working of the financial system to supervisors, who can use other information to evaluate these insights. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.
John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.
On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that the leading digits of naturally occurring numbers follow a known, nonuniform frequency distribution, so digits produced by nonrandom (that is, human) number selection may deviate significantly from the expected frequencies. Thus, if a bank is committing fraud, the accounting numbers it reports may deviate significantly from what would otherwise be expected. Their preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
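The mechanics of a first-digit Benford test are simple enough to sketch in a few lines. The following is a generic illustration, not the O'Keefe–Yom implementation: it computes the expected Benford frequencies and a chi-square statistic measuring how far a set of reported figures deviates from them (function names are mine).

```python
import math
from collections import Counter

def benford_expected():
    # Benford's law: P(first digit = d) = log10(1 + 1/d)
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    # Strip sign, leading zeros, and the decimal point to find
    # the first significant digit of a reported number.
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def chi_square_stat(values):
    """Chi-square statistic comparing observed first-digit
    frequencies against Benford's expected distribution.
    Larger values indicate greater deviation from Benford's law."""
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    observed = Counter(digits)
    expected = benford_expected()
    return sum((observed.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in expected.items())
```

In practice the statistic would be compared against a critical value (8 degrees of freedom for a first-digit test); a large value flags accounts whose digits look unlike naturally occurring data.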
Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.
Potentially, ML also more effectively measures a borrower's credit risk than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest lender for personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.
It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can more accurately price a loan's risk than a simple FICO score-based model would allow. Taken together, the presenters made clear that AI is likely to also transform many aspects of the financial sector.
January 03, 2018
Is Macroprudential Supervision Ready for the Future?
Virtually everyone agrees that systemic financial crises are bad not only for the financial system but even more importantly for the real economy. Where the disagreements arise is how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.
In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.
Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the conference.
The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.
Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from growth to falling gross domestic product (GDP), and vice versa. Their model predicted a very high probability of transitioning to a path of falling GDP in the fourth quarter of 2007, a low probability in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.
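A stylized sketch of this kind of mapping takes an index reading and converts it to a transition probability through a logistic link. The coefficients below are made up for illustration and are not the Brave–Lopez estimates:

```python
import math

def transition_probability(fci, alpha=-2.0, beta=1.5):
    """Map a financial-conditions index reading (fci) to the
    probability of transitioning to a falling-GDP regime.
    alpha and beta are illustrative placeholders, not estimated
    values; higher fci (tighter conditions) raises the probability."""
    return 1.0 / (1.0 + math.exp(-(alpha + beta * fci)))
```

An estimated version would fit `alpha` and `beta` to historical regime transitions; the probability output then feeds a cost-benefit rule for deploying the buffer.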
Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should immediately adopt an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.
The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which has an average of four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.
Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored looking at the recent effectiveness of two such tools in Ireland.
In February 2015, the Irish government watched as housing prices climbed from their postcrisis lows at a potentially unsafe rate. In an attempt to limit the flow of funds into risky mortgage loans, the government imposed limits on the maximum permissible loan-to-value (LTV) ratio and loan-to-income ratio (LTI) for new mortgages. These regulations became effective immediately upon their announcement and prevented the Irish banks from making loans that violated either the LTV or LTI requirements.
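Checking a new mortgage against caps of this kind is a simple screen. The sketch below is illustrative only: the 80 percent LTV and 3.5 LTI caps are placeholder values, not the actual Irish limits, which varied by borrower type.

```python
def conforms(loan, value, income, max_ltv=0.80, max_lti=3.5):
    """Return True if a new mortgage satisfies both the
    loan-to-value and loan-to-income caps.
    The 0.80 and 3.5 defaults are illustrative placeholders."""
    ltv = loan / value    # loan amount relative to property value
    lti = loan / income   # loan amount relative to gross income
    return ltv <= max_ltv and lti <= max_lti
```

A bank subject to the rules would apply such a screen at origination; Crosignani and his coauthors study what happened to the loans that failed it.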
Crosignani and his coauthors were able to measure a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that conformed to the requirements largely offset this drop. Additionally, Crosignani and his coauthors find that the banks that were most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. Their findings suggest that the regulations may have stopped higher-risk mortgage lending but that other changes in their portfolio at least partially undid the effect on banks' risk exposure.
May 11, 2017
Are Small Loans Hard to Find? Evidence from the Federal Reserve Banks' Small Business Survey
The Federal Reserve Banks recently released results from the nationwide 2016 Small Business Survey, which asks firms with 500 or fewer employees about business and financing conditions. One key finding is just how small the financing needs of many businesses are. One-fifth of small businesses that applied for financing in the prior 12 months were seeking $25,000 or less. A further 35 percent were seeking between $25,001 and $100,000.
The data also show that firms seeking relatively small amounts of financing (up to $100,000) receive a significantly smaller fraction of their funding than firms who applied for more than $250,000. Chart 1 shows the weighted average of the share of financing received by the amount the firm was seeking.
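The "weighted average share of financing received" underlying chart 1 is a survey-weighted mean. A minimal sketch (the record layout is mine, not the survey's):

```python
def weighted_avg_share(records):
    """records: iterable of (survey_weight, share_received) pairs,
    where share_received is the fraction of requested financing
    the firm obtained (0.0 to 1.0). Returns the weighted mean."""
    total_weight = sum(w for w, _ in records)
    return sum(w * s for w, s in records) / total_weight
```

Computing this separately for each bucket of amount sought reproduces the kind of comparison shown in the chart.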
So what explains this variation in financing attainment across the amount requested? We've heard reports from small business owners that smaller loans are relatively more difficult to obtain, especially from traditional banks. One often-cited rationale is that the administrative burden associated with originating and managing a small loan is often just not worth the bank's time. However, this notion is not entirely consistent with data on the current holdings of small business loans on the balance sheets of banks. As of June 2015, loans of less than $100,000 made up about 92 percent of the number of business loans under $1 million.
It seems, then, that originating a loan for less than $100,000 is not uncommon for a bank after all. Why, then, do business owners say that smaller loans are more difficult to get? Using data from the 2016 Small Business Survey, we can investigate the reason for this apparent disconnect.
Much can be explained by looking at the characteristics of those who borrow small amounts versus large amounts. Firms seeking $25,000 or less are more likely to be high credit risk and younger, have fewer employees, and have smaller revenues than firms applying for more than $250,000. The table below summarizes the differences:
Of particular importance is the credit risk associated with the firm. Controlling for differences in this factor, it turns out that smaller amounts of financing are not more difficult to obtain. Charts 2 and 3 show the weighted average share of financing received by amount sought for low credit risk firms and for middle to high credit risk firms separately.
As charts 2 and 3 demonstrate, low credit risk firms are able to obtain a similar share of the amount requested, regardless of how much they applied for. The same is true for higher risk firms. We also see that medium and high risk firms get less of their financing needs met than low credit risk firms that apply for similar amounts.
From this evidence, it seems that credit approval has more to do with the attributes of the firm than the amount of financing for which the firm applied. These results also highlight the potential importance of alternatives to traditional bank financing so that riskier entrepreneurs—including important contributors to the dynamism of the economy such as startups—have somewhere to turn. A later macroblog post will explore how low and high credit risk firms use financing differently, including where they apply and where they receive funding.
September 08, 2016
Introducing the Atlanta Fed's Taylor Rule Utility
Simplicity isn't always a virtue, but when it comes to complex decision-making processes—for example, a central bank setting a policy rate—having simple benchmarks is often helpful. As students and observers of monetary policy well know, the common currency in the central banking world is the so-called "Taylor rule."
The Taylor rule is an equation introduced by John Taylor in a seminal 1993 paper that prescribes a value for the federal funds rate—the interest rate targeted by the Federal Open Market Committee (FOMC)—based on readings of inflation and the output gap. The output gap measures the percentage point difference between real gross domestic product (GDP) and an estimate of its trend or potential.
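The original rule fits in a few lines. The sketch below implements the Taylor (1993) prescription with the paper's 2 percent values for the equilibrium real rate and the inflation target as defaults (the parameter names are mine):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                pi_weight=0.5, gap_weight=0.5):
    """Taylor (1993) prescription for the federal funds rate, in percent:
        i = r* + pi + 0.5 * (pi - pi*) + 0.5 * gap
    where pi is inflation, pi* the inflation target, r* the
    equilibrium real rate, and gap the output gap (all in percent)."""
    return (r_star + inflation
            + pi_weight * (inflation - pi_star)
            + gap_weight * output_gap)
```

With inflation at its 2 percent target and a closed output gap, the rule prescribes a 4 percent funds rate. Doubling `gap_weight` to 1.0 and substituting an estimated natural rate for the 2 percent `r_star` default yields the commonly used Taylor (1999)-style variant.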
Since 1993, academics and policymakers have introduced and used many alternative versions of the rule. The alternative forms of the rule can supply policy prescriptions that differ significantly from Taylor's original rule, as the following chart illustrates.
The green line shows the policy prescription from a rule identical to the one in Taylor's paper, apart from some minor changes in the inflation and output gap measures. The red line uses an alternative and commonly used rule that gives the output gap twice the weight used for the Taylor (1993) rule, derived from a 1999 paper by John Taylor. The red line also replaces the 2 percent value used in Taylor's 1993 paper with an estimate of the natural real interest rate, called r*, from a paper by Thomas Laubach, the Federal Reserve Board's director of monetary affairs, and John Williams, president of the San Francisco Fed. Federal Reserve Chair Janet Yellen also considered this alternative estimate of r* in a 2015 speech.
Both rules use real-time data. As early as 2012, the Taylor (1993) rule prescribed a federal funds rate materially above the FOMC's 0 to 0.25 percent target range, which was in place from December 2008 to December 2015. The alternative rule did not prescribe a positive funds rate at any point between the end of the 2007–09 recession and this quarter. The third-quarter prescriptions incorporate nowcasts constructed as described here. Neither the nowcasts nor the Taylor rule prescriptions themselves necessarily reflect the outlook or views of the Federal Reserve Bank of Atlanta or its president.
Additional variables that get plugged into this simple policy rule can influence the rate prescription. To help you sort through the most common variations, we at the Atlanta Fed have created a Taylor Rule Utility. Our Taylor Rule Utility gives you a number of choices for the inflation measure, inflation target, the natural real interest rate, and the resource gap. Besides the Congressional Budget Office–based output gap, alternative resource gap choices include those based on a U-6 labor underutilization gap and the ZPOP ratio. The latter ratio, which Atlanta Fed President Dennis Lockhart mentioned in a November 2015 speech while addressing the Taylor rule, gauges underemployment by measuring the share of the civilian population working their desired number of hours.
Many of the indicator choices use real-time data. The utility also allows you to establish your own weight for the resource gap and whether you want the prescription to put any weight on the previous quarter's federal funds rate. The default choices of the Taylor Rule Utility coincide with the Taylor (1993) rule shown in the above chart. Other organizations have their own versions of the Taylor Rule Utility (one of the nicer ones is available on the Cleveland Fed's Simple Monetary Policy Rules web page). You can find more information about the Cleveland Fed's web page on the Frequently Asked Questions page.
Although the Taylor rule and its alternative versions are only simple benchmarks, they can be useful tools for evaluating the importance of particular indicators. For example, we see that the difference in the prescriptions of the two rules plotted above has narrowed in recent years as slack has diminished. Even if the output gap were completely closed, however, the current prescriptions of the rules would differ by nearly 2 percentage points because of the use of different measures of r*. We hope you find the Taylor Rule Utility a useful tool to provide insight into issues like these. We plan on adding further enhancements to the utility in the near future and welcome any comments or suggestions for improvements.
August 11, 2016
Forecasting Loan Losses for Stress Tests
Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.
The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.
The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that the severely adverse scenario will have an unemployment increase of between 3 and 5 percentage points or a level of 10 percent overall. That statement observes that during the last half century, the United States has seen four severe recessions with that large of an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three of them.
To forecast the losses from such a severe recession, the banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss associated with a portfolio of loans as a function of the variables in the scenario. In estimating these models, banks often have a very large number of loans with which to estimate losses in their various portfolios, especially the consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might have at best data for only the relatively mild recession of 2001–02 and the severe recession of 2007–09.
Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.
With the potential for models to underestimate losses given there are so few downturns to calibrate to, the stress testing process allows humans to make judgmental changes (or overlays) to model estimates when the model estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.
My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty around projected losses because of limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.
One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.
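The flavor of a Bayesian update is easy to convey with the simplest conjugate example. This is a toy beta-binomial update, not Jensen's actual procedure: a Beta prior on a portfolio's default rate is combined with newly observed defaults, which could come from any relevant data source, foreign or domestic.

```python
def update_beta(prior_a, prior_b, defaults, n_loans):
    """Beta-binomial conjugate update: a Beta(a, b) prior on the
    default rate, combined with `defaults` observed among `n_loans`,
    yields a Beta(a + defaults, b + n_loans - defaults) posterior."""
    return prior_a + defaults, prior_b + n_loans - defaults

def posterior_mean(a, b):
    # Mean of a Beta(a, b) distribution: the updated point estimate
    # of the default rate.
    return a / (a + b)
```

Because the full posterior distribution is carried along rather than just a point estimate, the approach also quantifies the uncertainty around the loss forecast, which is Jensen's central point.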
Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III, which was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one in which the historical losses in a loan portfolio provide the parameters of a loss distribution. While Basel III faces the same problem of limited loan loss data—so it almost surely underestimates some risks—those errors are likely to be somewhat different from those produced by the stress tests. Hence, the use of both measures is likely to somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.
Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.
Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.
June 06, 2016
After the Conference, Another Look at Liquidity
When it comes to assessing the impact of central bank asset purchase programs (often called quantitative easing or QE), economists tend to focus their attention on the potential effects on the real economy and inflation. After all, the Federal Reserve's dual mandate for monetary policy is price stability and full employment. But there is another aspect of QE that may also be quite important in assessing its usefulness as a policy tool: the potential effect of asset purchases on financial markets through the collateral channel.
Asset purchase programs involve central bank purchases of large quantities of high-quality, highly liquid assets. Postcrisis, the Fed has purchased more than $3 trillion of U.S. Treasury securities and agency mortgage-backed securities, the European Central Bank (ECB) has purchased roughly 727 billion euros' worth of public-sector bonds (issued by central governments and agencies), and the Bank of Japan is maintaining an annual purchase target of 80 trillion yen. These bonds are not merely assets held by investors to realize a return; they are also securities highly valued for their use as collateral in financial transactions. The Atlanta Fed's 21st annual Financial Markets Conference explored the potential consequences of these asset purchase programs in the context of financial market liquidity.
The collateral channel effect focuses on the role that these low-risk securities play in the plumbing of U.S. financial markets. Financial firms fund a large fraction of their securities holdings in the repurchase (or repo) markets. Repurchase agreements are legally structured as the sale of a security with a promise to repurchase the security at a fixed price at a given point in the future. The economics of this transaction are essentially similar to those of a collateralized loan.
The sold and repurchased securities are often termed "pledged collateral." In these transactions, which are typically overnight, the lender will ordinarily lend cash equal to only a fraction of the security's value, with the remaining unfunded part called the "haircut." The size of the haircut is inversely related to the safety and liquidity of the security, with Treasury securities requiring the smallest haircuts. When the securities are repurchased the following day, the borrower pays back the initial cash plus an additional amount known as the repo rate. The repo rate is essentially an overnight interest rate paid on a collateralized loan.
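The arithmetic of a single repo trade can be sketched as follows. The numbers and the actual/360 money-market day-count convention are illustrative assumptions:

```python
def repo_cash(security_value, haircut):
    """Cash lent against pledged collateral: the security's market
    value reduced by the haircut (e.g., 0.02 for a 2 percent haircut)."""
    return security_value * (1 - haircut)

def repurchase_price(cash, repo_rate, days=1):
    """Amount the borrower repays at maturity: principal plus repo
    interest, using an actual/360 day-count convention (an assumption
    here). repo_rate is an annualized decimal rate."""
    return cash * (1 + repo_rate * days / 360)
```

For example, $100 of Treasuries with a 2 percent haircut raises $98 of overnight cash, repaid the next day with one day of interest at the repo rate.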
Central bank purchases of Treasury securities may have a multiplicative effect on the potential efficiency of the repo market because these securities are often used in a chain of transactions before reaching a final holder for the evening. Here's a great diagram presented by Phil Prince of Pine River Capital Management illustrating the role that bonds and U.S. Treasuries play in facilitating a variety of transactions. In this example, the UST (U.S. Treasury) securities are first used as collateral in an exchange between the UST securities lender and the globally systemically important financial institution (GSIFI bank/broker dealer), then between the GSIFI bank and the cash provider, a money market mutual fund (MMMF), corporation, or sovereign wealth fund (SWF). The reuse of the UST collateral reduces the funding cost of the GSIFI bank and, hence, the cost to the levered investor/hedge fund who is trying to exploit discrepancies in the pricing of a corporate bond and stock.
Just how important or large is this pool of reusable collateral? Manmohan Singh of the International Monetary Fund presented the following charts, depicting the pledged collateral at major U.S. and European financial institutions that can be reused in other transactions.
So how do central bank purchases of high-quality, liquid assets affect the repo market—and why should macroeconomists care? In his presentation, Marvin Goodfriend of Carnegie Mellon University concluded that central bank asset purchases, which he terms "pure monetary policy," lower short-term interest rates (especially bank-to-bank lending) but increase the cost of funding illiquid assets through the repo market. And Singh noted that repo rates are an important part of the constellation of short-term interest rates and directly link overnight markets with the longer-term collateral being pledged. Thus, the interaction between a central bank's interest-rate policy and its balance sheet policy is an important aspect of the transmission of monetary policy to longer-term interest rates and real economic activity.
Ulrich Bindseil, director of general market operations at the ECB, discussed a variety of ways in which central bank actions may affect, or be affected by, bond market liquidity. One way that central banks may mitigate any adverse impact on market liquidity is through their securities lending programs, according to Bindseil. Central banks use such programs to lend particular bonds back out to the market to "provide a secondary and temporary source of securities to the financing market...to promote smooth clearing of Treasury and Agency securities."
On June 2, for example, the New York Fed lent $17.8 billion of UST securities from the Fed's portfolio. These operations are structured as collateral swaps—dealers pledge other U.S. Treasury bonds as collateral with the Fed. During the financial crisis, the Federal Reserve used an expanded version of its securities lending program called the Term Securities Lending Facility to allow firms to replace lower-quality collateral that was difficult to use in repo transactions with Treasury securities.
Finally, the Fed currently releases some bonds to the market each day in return for cash, through its overnight reverse repo operations, a supplementary facility used to support control of the federal funds rate as the Federal Open Market Committee proceeds with normalization. However, this release has an important limitation: these operations are conducted in the triparty repo market, and the bonds released through these operations can be reused only within that market. In contrast, if the Fed were to sell its U.S. Treasuries, the securities could not only be used in the triparty repo market but also as collateral in other transactions including ones in the bilateral repo market (you can read more on these markets here). As long as central bank portfolios remain large and continue to grow as in Europe and Japan, policymakers are integrally linked to the financial plumbing at its most basic level.
To see a video of the full discussion of these issues as well as other conference presentations on bond market liquidity, market infrastructure, and the management of liquidity within financial institutions, please visit Getting a Grip on Liquidity: Markets, Institutions, and Central Banks. My colleague Larry Wall's conference takeaways on the elusive definition of liquidity, along with the impact of innovation and regulation on liquidity, are here.
April 13, 2016
Putting the MetLife Decision into an Economic Context
In a recently released decision, a U.S. district court has ruled that the Financial Stability Oversight Council's (FSOC's) decision to designate MetLife as a potential threat to financial stability was "arbitrary and capricious" and rescinded that designation. This decision raises many questions, among them:
- Why did MetLife sue to end its status as a too-big-to-fail (TBTF) firm?
- How will this decision affect the Federal Reserve's regulation of nonbank financial firms?
- What else can be done to reduce the risk of crisis arising from nonbank financial firms?
Why does MetLife want to end its TBTF status?
An often-expressed concern is that market participants will consider FSOC-designated firms too big to fail, and investors will accord these firms lower risk premiums (see, for example, Peter J. Wallison). The result is that FSOC-designated firms will gain a competitive advantage. If so, why did MetLife sue to have the designation rescinded? And why did the announcement of the court's determination result in an immediate 5 percent increase in MetLife's stock price?
One possible explanation is that the FSOC's designation guarantees the firm will be subject to higher regulatory costs, but it only marginally changes the likelihood it would receive a government bailout. The Dodd-Frank Act (DFA) requires that FSOC-designated firms be subject to consolidated prudential supervision by the Federal Reserve using standards that are more stringent than the requirements for other nonbank financial firms.
Moreover, the argument that such designation automatically conveys a competitive advantage has at least two weaknesses. First, although Title II of the DFA authorizes the Federal Deposit Insurance Corporation (FDIC) to resolve a failing nonbank firm in certain circumstances, DFA does not provide FDIC insurance for any of the nonbank firm's liabilities, nor does it provide the FDIC with funds to undertake a bailout. The FDIC is supposed to recover its costs from the failed firm's assets. Admittedly, DFA does allow for the possibility that the FDIC would need to assess other designated firms for part of the cost of a resolution. However, MetLife could as easily have been assessed to pay for another firm as it could have been the beneficiary of assessments on other systemically important firms.
A second potential weakness in the competitive advantage argument is that the U.S. Treasury Secretary decides to invoke FDIC resolution only after receiving a recommendation from the Federal Reserve Board and one other federal financial regulatory agency (depending on the type of failing firm). Invocation of resolution is not automatic. Moreover, any decision authorizing FDIC resolution must include findings that, at the time of authorization:
- the firm is in default or in danger of default,
- resolution under other applicable law (bankruptcy statutes) would have "serious adverse consequences" on financial stability, and
- those adverse effects could be avoided or mitigated by FDIC resolution.
Although it would seem logical that FSOC-designated firms are more likely to satisfy these criteria than other financial firms, the Title II criteria for FDIC resolution are the same for both types of firms.
How does this affect the Fed's regulation of nonbank firms?
Secretary of the Treasury Jack Lew has indicated his strong disagreement with the district court's decision, and the U.S. Treasury has said it will appeal. Suppose, however, that FSOC designation ultimately does become far more difficult. How significantly would that affect the Federal Reserve's regulatory power over nonbank financial firms?
Although the obvious answer would be that it would greatly reduce the Fed's regulatory power, recent experience casts some doubt on this view. Nonbank financial firms appear to regard FSOC designation as imposing costly burdens that substantially exceed any benefits they receive. Indeed, GE Capital viewed the costs as so significant that it sold off large parts of its operations and recently petitioned the FSOC to rescind its designation. Unless systemically important activities are a core part of the firm's business model, nonbank financial firms may decide to avoid undertaking activities that would risk FSOC designation.
Thus, a plausible set of future scenarios is that the Federal Reserve would be supervising few, if any, nonbank financial firms regardless of the result of the MetLife case. Rather, ultimate resolution of the case may have more of an impact on whether large nonbank financial firms conduct systemically important activities (if designation becomes much harder) or the activities are conducted by some combination of smaller nonbank financial firms and by banks that are already subject to Fed regulation (if the ruling does not prevent future designations).
Regardless of how the courts and the FSOC respond to this recent judicial decision, the financial crisis should have taught us valuable lessons about the importance of the nonbank financial sector to financial stability. However, those lessons should go beyond merely the need to impose prudential supervision on any firms that are systemically important.
The cause of the financial crisis was not the failure of one or two large nonbank financial firms. Rather, the cause was that almost the entire financial system stood on the brink of collapse because almost all the major participants were heavily exposed to the weak credit standards that were pervasive in the residential real estate business. Yet if the real problem was the risk of multiple failures as a result of correlated exposures to a single large market, perhaps we ought to invest more effort in evaluating the riskiness of markets that could have systemic consequences.
In an article in Notes from the Vault and other forums, I have called for systematic end-to-end reviews of major financial markets starting with the origination of the risks and ending with the ultimate holder(s) of the risks. This analysis would involve both quantitative analysis of risk measures and qualitative analysis of the safeguards designed to reduce risk.
The primary goal would be to identify and try to correct weaknesses in the markets. A secondary goal would be to give the authorities a better sense of where problems are likely to arise if a market does encounter problems.
April 11, 2016
The Rise of Shadow Banking in China
China's banking system has suffered significant losses over the past two years, raising concerns about the health of China's financial industry. Such losses are perhaps not all that surprising: commercial banks have been increasing their risk-taking activities in the form of shadow lending. See, for example, here, here, and here for some discussion of the evolution of China's shadow banking system.
The increase in risk taking by banks has occurred despite a rapid decline in money growth since 2009 and despite the People's Bank of China's efforts to limit credit expansion to real estate and other industries that appear to have excess capacity.
One area of expanded activity has been investment in asset-backed "securities" by China's large non-state banks. This investment has created potentially significant risks to the balance sheets of these institutions (see the charts below). Using the micro-transaction-based data on shadow entrusted loans, Chen, Ren, and Zha (2016) have provided theoretical and empirical insights into this important issue (see also this Vox article that summarizes the paper).
Recent regulatory reforms in China have taken a positive step to try to limit such risk-taking behavior, although the success of these efforts remains to be seen. An even more challenging task lies ahead for designing a comprehensive and sustainable macroprudential framework to support the healthy functioning of China's traditional and shadow banking industries.
March 15, 2016
Collateral Requirements and Nonbank Online Lenders: Evidence from the 2015 Small Business Credit Survey
Businesses can secure a bank loan by offering collateral—typically a business asset such as equipment or real estate. However, the recently released 2015 Small Business Credit Survey (SBCS) Report on Employer Firms, conducted by seven regional Reserve Banks, found that 63 percent of business owners who had borrowed also used personal assets or a personal guarantee to secure financing. Surprisingly, the use of personal collateral was common not only among startups: older and relatively larger small firms (see the following chart) also relied heavily on personal assets.
Source: 2015 Small Business Credit Survey
Note: "Unsure", "None", and "Other" were also options but are not shown on the chart.
Alternative lending options also exercised
Not every small business owner has sufficient hard assets, such as real estate or equipment, that can be used as collateral to secure a traditional bank loan or line of credit. For these circumstances, there are options such as credit cards and products offered by nonbank lenders (mostly operating online) that have less stringent underwriting requirements than banks. Many online nonbank lenders advertise unsecured loans or require only a general lien on business assets, without valuing those business assets.
In the 2015 SBCS, 20 percent of small firms seeking loans or lines of credit applied at nonbank online lenders. These lenders have a reputation for quick application turnaround, and their collateral requirements can be looser than those of traditional lenders. But when borrowers were asked about their overall experience, businesses approved at nonbank online lenders gave a net satisfaction score of only 15 percent (40.6 percent were satisfied and 25.3 percent were dissatisfied). In contrast, small banks received a relatively high net satisfaction score of 75 percent (see the chart).
Source: 2015 Small Business Credit Survey Report on Employer Firms
1 Satisfaction score is the share satisfied with lender minus the share dissatisfied.
2 "Online lenders" are defined as alternative and marketplace lenders, including Lending Club, OnDeck, CAN Capital, and PayPal Working Capital.
3 "Other" includes government loan funds and community development financial institutions.
The survey also showed that high interest rates were the primary reason for dissatisfaction at nonbank online lenders (see the chart).
Source: 2015 Small Business Credit Survey Report on Employer Firms
Note: Respondents could select multiple options. Select responses shown due to low observation count.
Merchant cash advances make advances
Most applicants to nonbank online lenders were seeking loans and lines of credit, but some were seeking a product that tends to be particularly expensive relative to other financing options: merchant cash advances (MCAs). MCAs have been around for decades, but their popularity has risen in the wake of the financial crisis. An MCA is typically a lump-sum payment in exchange for a portion of future credit card sales, and its terms can be enticing because repayment seems easier than paying off a structured business loan that requires a fixed monthly payment. Instead, the lender is paid back as the business generates revenue, in theory making cash flow easier to manage.
One potential challenge for users of MCA products is interpreting the repayment terms. Instead of displaying an annual percentage rate (APR), MCAs are usually advertised with a "buy rate" (typically 1.2 to 1.4). For example, a buy rate of 1.3 on $100,000 would require the borrower to pay back $130,000. However, a percentage of the principal is not the same as an APR. The table below compares total interest payments made on a 1.3 MCA versus a 30 percent APR business loan repaid over 12 months and over six months. With a 12-month business loan, a 30 percent APR would equal total interest payments of roughly $17,000. With a six-month business loan, repayment would include about $9,000 in interest.
Because an MCA is structured as a commercial transaction instead of a loan, it is regulated by the Uniform Commercial Code in each state instead of by banking laws such as the Truth in Lending Act. Consequently, the provider does not have to follow all of the regulations and documentation requirements (such as displaying an APR) associated with making loans.
Converting a buy rate into an APR is not straightforward for many potential users, as was made clear in a recent online lending focus group study with small business owners conducted by the Cleveland Fed. When asked what the APR was on a $40,000 MCA that required a repayment of $52,000 (the same as a 1.3 buy rate), their answers were the following: (Product A is the MCA type of product; see the study for exactly how it was presented to respondents.)
Source: Federal Reserve Bank of Cleveland
The correct answer is that "it depends on how long it takes to pay back." For example, if the debt is repaid over six months, the APR would be 110 percent (as this calculator shows).
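The "it depends" answer can be made concrete by solving for the internal rate of return. Below is a minimal sketch, assuming equal daily repayments and a simple bisection solver (`mca_apr` is a hypothetical helper for illustration; an MCA's actual daily remittances vary with sales):

```python
def mca_apr(advance, payback, days):
    """Approximate APR of a merchant cash advance, assuming equal daily
    repayments, by solving for the internal rate of return via bisection."""
    payment = payback / days

    def npv(daily_rate):
        # Present value of the repayment stream minus the cash advanced
        return sum(payment / (1 + daily_rate) ** t for t in range(1, days + 1)) - advance

    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the implied rate is higher
        else:
            hi = mid
    return ((lo + hi) / 2) * 365  # annualize the daily rate (nominal APR)

# $40,000 advance, $52,000 payback (a 1.3 buy rate), repaid daily over six months
apr = mca_apr(40_000, 52_000, 182)  # roughly 1.10, i.e., about 110 percent
```

Shortening the payback period raises the implied APR further, since the same $12,000 fee is paid over less time; for example, `mca_apr(40_000, 52_000, 91)` comes out well above the six-month figure.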
Nonbank online lenders can fill gaps in the borrowing needs of small businesses. But there may also be a role for greater clarity to ensure borrowers understand the terms they are signing up for. In a September 2015 speech, Federal Reserve Governor Lael Brainard highlighted one self-policing movement already well under way:
Some have raised concerns about the high APRs associated with some online alternative lending products. Others have raised concerns about the risk that some small business borrowers may have difficulty fully understanding the terms of the various loan products or the risk of becoming trapped in layered debt that poses risks to the survival of their businesses. Some industry participants have recently proposed that online lenders follow a voluntary set of guidelines designed to standardize best practices and mitigate these risks. It is too soon to determine whether such efforts of industry participants to self-police will be sufficient. Even with these efforts, some have suggested a need for regulators to take a more active role in defining and enforcing standards that apply more broadly in this sector.
Many, but not all, nonbank online lenders have already signed the Small Business Borrower Bill of Rights. Results from the 2015 Small Business Credit Survey Report on Employer Firms can be found on our website.
May 13, 2014
Pondering QE
Today’s news brings another indication that low inflation rates in the euro area have the attention of the European Central Bank. From the Wall Street Journal (Update: via MarketWatch):
Germany's central bank is willing to back an array of stimulus measures from the European Central Bank next month, including a negative rate on bank deposits and purchases of packaged bank loans if needed to keep inflation from staying too low, a person familiar with the matter said...
This marks the clearest signal yet that the Bundesbank, which has for years been defined by its conservative opposition to the ECB's emergency measures to combat the euro zone's debt crisis, is fully engaged in the fight against super-low inflation in the euro zone using monetary policy tools...
Notably, these tools apparently do not include Fed-style quantitative easing:
But the Bundesbank's backing has limits. It remains resistant to large-scale purchases of public and private debt, known as quantitative easing, the person said. The Bundesbank has discussed this option internally but has concluded that with government and corporate bond yields already quite low in Europe, the purchases wouldn't do much good and could instead create financial stability risks.
Should we conclude that there is now a global consensus about the value and wisdom of large-scale asset purchases, a.k.a. QE? We certainly have quite a bit of experience with large-scale purchases now. But I think it is also fair to say that this experience has yet to yield a firm consensus.
You probably don’t need much convincing that QE consensus remains elusive. But just in case, I invite you to consider the panel discussion we titled “Greasing the Skids: Was Quantitative Easing Needed to Unstick Markets? Or Has it Merely Sped Us toward the Next Crisis?” The discussion was organized for last month’s 2014 edition of the annual Atlanta Fed Financial Markets Conference.
Opinions among the panelists were, shall we say, diverse. You can view the entire session via this link. But if you don’t have an hour and 40 minutes to spare, here is the (less than) ten-minute highlight reel, wherein Carnegie Mellon Professor Allan Meltzer opines that Fed QE has become “a foolish program,” Jefferies LLC Chief Market Strategist David Zervos declares himself an unabashed “lover of QE,” and Federal Reserve Governor Jeremy Stein weighs in on some of the financial stability questions associated with very accommodative policy:
You probably detected some differences of opinion there. If that, however, didn’t satisfy your craving for unfiltered debate, click on through to this link to hear Professor Meltzer and Mr. Zervos consider some of Governor Stein’s comments on monitoring debt markets, regulatory approaches to pursuing financial stability objectives, and the efficacy of capital requirements for banks.
By Dave Altig, executive vice president and research director of the Atlanta Fed.