The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.
November 21, 2019
Private and Central Bank Digital Currencies
The Atlanta Fed recently hosted a workshop, "Financial System of the Future," which was cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. This macroblog post covers the workshop's discussion of digital currency, including Bitcoin, Libra, and central bank digital currency (CBDC). A companion Notes from the Vault post provides some highlights from the rest of the workshop.
Bitcoin has sparked considerable interest in cryptocurrencies since its introduction in the 2008 paper "Bitcoin: A Peer-to-Peer Electronic Cash System" by Satoshi Nakamoto. However, for all its success, Bitcoin is not close to becoming a widely accepted electronic cash system. Why it has yet to achieve its original goals is the topic of a paper by New York University professors Franz Hinzen and Kose John, along with McGill University professor Fahad Saleh, titled "Bitcoin's Fatal Flaw: The Limited Adoption Problem."
Their paper suggests that the inability of Bitcoin to achieve wider adoption is the result of the interaction of three features: the need for agreement on ledger contents (in blockchain terminology, "consensus"), free entry for creating new blocks (permissionless or decentralized), and an artificial supply constraint. The supply constraint means that an increase in demand leads to higher Bitcoin prices. Such a valuation increase expands the network seeking to create new blocks (that is, increases the number of Bitcoin "miners"). But an increase in the network size slows the consensus process as it takes time for newly created blocks to reach all of the miners across the internet. The end result is an increase in the time needed to make a payment, reducing the value of Bitcoin as a means of payment—a significant consideration, obviously, for any type of currency.
As an alternative to the Bitcoin consensus protocol, they suggest a public, permissioned blockchain that results in faster transactions because it imposes limits on who can create new blocks. In their system, new blocks would be selected by a weighted vote, with each validator's (in other words, each approved block creator's) vote weighted by its holdings of the blockchain's cryptocurrency. If validators were to approve new and malicious blocks, that would erode the value of their existing cryptocurrency holdings and thus provide an incentive to behave honestly.
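As a back-of-the-envelope illustration (my own sketch, not the authors' specification), stake-weighted block approval might look something like this, with hypothetical validators whose votes count in proportion to their coin holdings:

```python
def weighted_vote(validators, votes):
    """Tally a block-approval vote weighted by coin holdings.

    validators: dict mapping validator name -> coin balance (the vote weight)
    votes: dict mapping validator name -> True (approve) or False (reject)
    Returns True when validators holding a majority of coins approve.
    """
    approving_stake = sum(stake for name, stake in validators.items() if votes.get(name))
    total_stake = sum(validators.values())
    return approving_stake > total_stake / 2

# Hypothetical holdings: validator A controls 60 of the 100 coins
validators = {"A": 60, "B": 25, "C": 15}
print(weighted_vote(validators, {"A": True, "B": False, "C": False}))  # True
print(weighted_vote(validators, {"A": False, "B": True, "C": True}))   # False
```

The incentive logic follows directly: a dishonest vote would devalue the very coins that give a validator its voting weight, so the largest holders have the most to lose from approving malicious blocks.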
Federal Reserve Bank of Atlanta visiting economist Warren Weber presented some work with me on Libra, the new digital coin proposed by Facebook. Weber began by pointing to another problem with using Bitcoin in payments: the cryptocurrency's volatile value. Libra solves this problem by proposing to hold a portfolio of assets denominated in sovereign currencies, such as the U.S. dollar, that will provide one-for-one backing of the value of Libra. This approach is similar to that taken by some other "stablecoins," with the exception that Libra proposes to be stable relative to an index of several currencies whereas other stablecoins are designed to be stable with respect to only one sovereign currency.
Drawing on his background in economic history, Weber observes that introducing a new private currency is hard, but not impossible. For example, he pointed to the Stockholm Bank notes issued in Sweden in the 1660s. These notes worked because they were more convenient than the alternatives used in that country. The fact that existing U.S. payments systems are heavily bank-based might similarly afford an advantage to Libra.
Although no one is certain of the public's interest in using Libra, policymakers around the world have taken considerable interest in the potential implications of Libra for monetary policy and financial regulation. Could Libra significantly reduce the use of the domestic sovereign currencies in some countries, thus reducing the effectiveness of monetary policy? How might financial institutions providing Libra-based services be regulated?
One of the other possible policy responses to Libra is central banks' introduction of digital currency. Economists Itai Agur, Anil Ari, and Giovanni Dell'Ariccia from the International Monetary Fund consider some of the issues in developing a CBDC in their paper "Designing Central Bank Digital Currencies." They start by observing some important differences between cash and bank deposits. Cash is completely anonymous in that it reveals nothing about the identity of the payer. However, lost or stolen cash can't be recovered, so it lacks security. Deposits have the opposite properties—they are not anonymous, but there is a mechanism to recover lost or stolen funds.
The paper develops a model in which CBDC can be designed to operate at multiple points on a continuum between deposits and cash. The key concern from a public policy perspective is that the more CBDC operates like bank deposits, the more it will depress bank credit and output. However, if the CBDC operates too much like paper currency, then it could supplant paper currency and eliminate a payments method that some individuals prefer. The paper proposes that CBDC be designed to look more like currency to minimize the extent to which CBDC replaces bank deposits. The problem then becomes how to avoid CBDC reducing the usage of cash to the point where cash is no longer viable. (For example, merchants could decide to stop accepting cash because they find that the few transactions using cash do not justify the costs of accepting it.) The way the paper proposes to keep CBDC from being too attractive relative to cash is by applying a negative interest rate to the CBDC. The result would be that those who most highly value CBDC will use it, but the negative rate will likely deter enough people so that cash remains a viable payments mechanism.
January 04, 2018
Financial Regulation: Fit for New Technologies?
In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.
Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.
As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that AI is capable of very good pattern recognition—one of its major strengths. The ability to recognize patterns has a variety of applications including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.
Conversely, I observed that machine learning (ML), the more popular part of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot prove the existence of causality.
This combination of strengths and weaknesses implies that ML might provide new insights about the working of the financial system to supervisors, who can use other information to evaluate these insights. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.
John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.
On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that the leading digits of naturally occurring numbers follow a known logarithmic frequency distribution, so digits resulting from nonrandom number selection may deviate significantly from those expected frequencies. Thus, if a bank is committing fraud, the accounting numbers it reports may deviate significantly from what would otherwise be expected. Their preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
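As a rough illustration of the idea (a sketch of the general technique, not the paper's actual method), one can compare observed leading-digit frequencies against the Benford distribution with a chi-square statistic; a large value flags data worth a closer look:

```python
import math
from collections import Counter

def benford_expected():
    """Benford's law: P(leading digit = d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):.10e}"[0])  # scientific notation puts it first

def benford_chi2(values):
    """Chi-square distance between observed leading-digit frequencies
    and the Benford distribution; large values flag suspect data."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in benford_expected().items())

# Powers of 2 famously conform to Benford's law; a fabricated series
# in which every figure starts with 5 does not.
print(benford_chi2([2 ** i for i in range(200)]))  # small
print(benford_chi2([5, 55, 555] * 30))             # very large
```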
Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.
Potentially, ML also more effectively measures a borrower's credit risk than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest lender for personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades has steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.
It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can more accurately price a loan's risk than a simple FICO score-based model would allow. Taken together, the presentations made clear that AI is likely to transform many aspects of the financial sector.
January 03, 2018
Is Macroprudential Supervision Ready for the Future?
Virtually everyone agrees that systemic financial crises are bad not only for the financial system but even more importantly for the real economy. Where the disagreements arise is how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.
In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.
Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the conference.
The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.
Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from growth to falling gross domestic product (GDP), and vice versa. Their model predicted a very high probability of transition to a path of falling GDP in the fourth quarter of 2007, a low probability of transitioning to a falling path in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.
Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should immediately adopt an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.
The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which has an average of four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.
Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored looking at the recent effectiveness of two such tools in Ireland.
In February 2015, the Irish government watched as housing prices climbed from their postcrisis lows at a potentially unsafe rate. In an attempt to limit the flow of funds into risky mortgage loans, the government imposed limits on the maximum permissible loan-to-value (LTV) ratio and loan-to-income ratio (LTI) for new mortgages. These regulations became effective immediately upon their announcement and prevented the Irish banks from making loans that violated either the LTV or LTI requirements.
Crosignani and his coauthors were able to measure a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that conformed to the requirements largely offset this drop. Additionally, Crosignani and his coauthors find that the banks that were most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. Their findings suggest that the regulations may have stopped higher-risk mortgage lending but that other changes in their portfolio at least partially undid the effect on banks' risk exposure.
August 11, 2016
Forecasting Loan Losses for Stress Tests
Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.
The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.
The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that the severely adverse scenario will have an unemployment increase of between 3 and 5 percentage points or a level of 10 percent overall. That statement observes that during the last half century, the United States has seen four severe recessions with that large of an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three severe recessions.
To forecast the losses from such a severe recession, the banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss associated with a portfolio of loans as a function of the variables in the scenario. In estimating these models, banks often have a very large number of loans with which to estimate losses in their various portfolios, especially the consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might have at best data for only the relatively mild recession of 2001–02 and the severe recession of 2007–09.
Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.
With the potential for models to underestimate losses given there are so few downturns to calibrate to, the stress testing process allows humans to make judgmental changes (or overlays) to model estimates when the model estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.
My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty around projected losses because of limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.
One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.
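To make the mechanics concrete, here is a deliberately simple conjugate (Beta-Binomial) sketch of Bayesian updating of a portfolio default rate. The prior and the "foreign" stress data below are invented for illustration and are not from Jensen's paper:

```python
def update_loss_rate(prior_a, prior_b, defaults, loans):
    """Beta-Binomial update: start from a Beta(a, b) prior on the default
    rate, observe `defaults` out of `loans`, return the posterior (a, b)."""
    return prior_a + defaults, prior_b + loans - defaults

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Invented prior belief: severe-recession default rates center near 5 percent
a, b = 1, 19                              # Beta(1, 19) has mean 0.05
# Fold in hypothetical foreign stress-episode data: 40 defaults in 400 loans
a, b = update_loss_rate(a, b, 40, 400)
print(round(posterior_mean(a, b), 4))     # 0.0976, pulled toward the new data
```

The point of the exercise is the shape of the machinery, not the numbers: any relevant evidence, foreign or domestic, enters through the same update and shifts both the forecast and the uncertainty around it.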
Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III that was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one where the historical losses in a loan portfolio provide the parameters to a loss distribution. While Basel III faces the same problem of limited loan loss data—so it almost surely underestimates some risks—those errors are likely to be somewhat different from those produced by the stress tests. Hence, the use of both measures is likely to somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.
Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.
Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.
June 06, 2016
After the Conference, Another Look at Liquidity
When it comes to assessing the impact of central bank asset purchase programs (often called quantitative easing or QE), economists tend to focus their attention on the potential effects on the real economy and inflation. After all, the Federal Reserve's dual mandate for monetary policy is price stability and full employment. But there is another aspect of QE that may also be quite important in assessing its usefulness as a policy tool: the potential effect of asset purchases on financial markets through the collateral channel.
Asset purchase programs involve central bank purchases of large quantities of high-quality, highly liquid assets. Postcrisis, the Fed has purchased more than $3 trillion of U.S. Treasury securities and agency mortgage-backed securities, the European Central Bank (ECB) has purchased roughly 727 billion euros' worth of public-sector bonds (issued by central governments and agencies), and the Bank of Japan is maintaining an annual purchase target of 80 trillion yen. These bonds are not merely assets held by investors to realize a return; they are also securities highly valued for their use as collateral in financial transactions. The Atlanta Fed's 21st annual Financial Markets Conference explored the potential consequences of these asset purchase programs in the context of financial market liquidity.
The collateral channel effect focuses on the role that these low-risk securities play in the plumbing of U.S. financial markets. Financial firms fund a large fraction of their securities holdings in the repurchase (or repo) markets. Repurchase agreements are legally structured as the sale of a security with a promise to repurchase the security at a fixed price at a given point in the future. The economics of this transaction are essentially similar to those of a collateralized loan.
The sold and repurchased securities are often termed "pledged collateral." In these transactions, which are typically overnight, the lender will ordinarily lend cash equal to only a fraction of the security's value, with the remaining unfunded part called the "haircut." The size of the haircut is inversely related to the safety and liquidity of the security, with Treasury securities requiring the smallest haircuts. When the securities are repurchased the following day, the borrower will pay back the initial cash plus an additional amount known as the repo rate. The repo rate is essentially an overnight interest rate paid on a collateralized loan.
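The arithmetic of a single overnight repo is straightforward. The terms below (a $10 million Treasury position, a 2 percent haircut, a 0.5 percent annualized repo rate) are purely illustrative:

```python
def repo_cash(collateral_value, haircut):
    """Cash lent overnight against collateral, after applying the haircut."""
    return collateral_value * (1 - haircut)

def repurchase_price(cash, repo_rate, days=1, day_count=360):
    """Repurchase amount: principal plus interest at the annualized repo rate."""
    return cash * (1 + repo_rate * days / day_count)

cash = repo_cash(10_000_000, 0.02)     # 9,800,000 lent against the collateral
repay = repurchase_price(cash, 0.005)  # one day of interest: about $136
print(f"{cash:,.2f} lent, {repay:,.2f} repaid")
```

The haircut is the lender's buffer: if the borrower fails to repurchase, the lender can sell the collateral and still be made whole even if its price has slipped somewhat overnight.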
Central bank purchases of Treasury securities may have a multiplicative effect on the potential efficiency of the repo market because these securities are often used in a chain of transactions before reaching a final holder for the evening. Here's a great diagram presented by Phil Prince of Pine River Capital Management illustrating the role that bonds and U.S. Treasuries play in facilitating a variety of transactions. In this example, the UST (U.S. Treasury) securities are first used as collateral in an exchange between the UST securities lender and the globally systemically important financial institution (GSIFI bank/broker dealer), then between the GSIFI bank and the cash provider, a money market mutual fund (MMMF), corporation, or sovereign wealth fund (SWF). The reuse of the UST collateral reduces the funding cost of the GSIFI bank and, hence, the cost to the levered investor/hedge fund who is trying to exploit discrepancies in the pricing of a corporate bond and stock.
Just how important or large is this pool of reusable collateral? Manmohan Singh of the International Monetary Fund presented the following charts, depicting the pledged collateral at major U.S. and European financial institutions that can be reused in other transactions.
So how do central bank purchases of high-quality, liquid assets affect the repo market—and why should macroeconomists care? In his presentation, Marvin Goodfriend of Carnegie Mellon University concluded that central bank asset purchases, which he terms "pure monetary policy," lower short-term interest rates (especially bank-to-bank lending) but increase the cost of funding illiquid assets through the repo market. And Singh noted that repo rates are an important part of the constellation of short-term interest rates and directly link overnight markets with the longer-term collateral being pledged. Thus, the interaction between a central bank's interest-rate policy and its balance sheet policy is an important aspect of the transmission of monetary policy to longer-term interest rates and real economic activity.
Ulrich Bindseil, the ECB's director general of market operations, discussed a variety of ways in which central bank actions may affect, or be affected by, bond market liquidity. One way that central banks may mitigate any adverse impact on market liquidity is through their securities lending programs, according to Bindseil. Central banks use such programs to lend particular bonds back out to the market to "provide a secondary and temporary source of securities to the financing market...to promote smooth clearing of Treasury and Agency securities."
On June 2, for example, the New York Fed lent $17.8 billion of UST securities from the Fed's portfolio. These operations are structured as collateral swaps—dealers pledge other U.S. Treasury bonds as collateral with the Fed. During the financial crisis, the Federal Reserve used an expanded version of its securities lending program called the Term Securities Lending Facility to allow firms to replace lower-quality collateral that was difficult to use in repo transactions with Treasury securities.
Finally, the Fed currently releases some bonds to the market each day in return for cash, through its overnight reverse repo operations, a supplementary facility used to support control of the federal funds rate as the Federal Open Market Committee proceeds with normalization. However, this release has an important limitation: these operations are conducted in the triparty repo market, and the bonds released through these operations can be reused only within that market. In contrast, if the Fed were to sell its U.S. Treasuries, the securities could not only be used in the triparty repo market but also as collateral in other transactions including ones in the bilateral repo market (you can read more on these markets here). As long as central bank portfolios remain large and continue to grow as in Europe and Japan, policymakers are integrally linked to the financial plumbing at its most basic level.
To see a video of the full discussion of these issues as well as other conference presentations on bond market liquidity, market infrastructure, and the management of liquidity within financial institutions, please visit Getting a Grip on Liquidity: Markets, Institutions, and Central Banks. My colleague Larry Wall's conference takeaways on the elusive definition of liquidity, along with the impact of innovation and regulation on liquidity, are here.
December 04, 2013
Is (Risk) Sharing Always a Virtue?
The financial system cannot be made completely safe because it exists to allocate funds to inherently risky projects in the real economy. Thus, an important question for policymakers is how best to structure the financial system to absorb these losses while minimizing the risk that financial sector failures will impair the real economy.
Standard theories would predict that one good way of reducing financial sector risk is diversification. For example, the financial system could be structured to facilitate the development of large banks, a point often made by advocates for big banks such as Steve Bartlett. Another, not mutually exclusive, way of enhancing diversification is to create a system that shares risks across banks. An example is the Dodd-Frank Act mandate requiring formerly over-the-counter derivatives transactions to be centrally cleared.
However, do these conclusions based on individual bank stability necessarily imply that risk sharing will make the financial system safer? Is it even relevant to the principal risks facing the financial system? Some of the papers presented at the recent Atlanta Fed conference, "Indices of Riskiness: Management and Regulatory Implications," broadly addressed these questions and others. Other papers discuss the impact of bank distress on local economies, methods of predicting bank failure, and various aspects of incentive compensation paid to bankers (which I discuss in a recent Notes from the Vault).
The stability implications of greater risk sharing across banks are explored in "Systemic Risk and Stability in Financial Networks" by Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. They develop a theoretical model of risk sharing in networks of banks. The most relevant comparison they draw is between what they call a "complete financial network" (maximum possible diversification) and a "weakly connected" network in which there is substantial risk sharing between pairs of banks but very little risk sharing outside the individual pairs. Consistent with the standard view of diversification, the complete networks experience few, if any, failures when individual banks are subject to small shocks, but some pairs of banks do fail in the weakly connected networks. However, at some point the losses become so large that the complete network undergoes a phase transition, spreading the losses in a way that causes the failure of more banks than would have occurred with less risk sharing.
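A deliberately stylized toy model (my own, far simpler than the paper's network model) captures the intuition. Suppose each bank can absorb a loss of 1 unit of capital, and a shock hits one bank. With complete risk sharing the loss is spread equally over all banks; with pairwise sharing it stays within a two-bank pair:

```python
def failures(n_banks, network, shock, capital=1.0):
    """Count bank failures when one bank takes a loss of size `shock`.

    network="complete": the loss is shared equally across all n_banks.
    network="paired":   the loss is shared only within a two-bank pair.
    A bank fails when its share of the loss exceeds its capital buffer.
    """
    if network == "complete":
        share = shock / n_banks
        return n_banks if share > capital else 0
    else:  # paired
        share = shock / 2
        return 2 if share > capital else 0

# Small shock: the complete network absorbs it, but the exposed pair fails
print(failures(10, "complete", 3.0), failures(10, "paired", 3.0))    # 0 2
# Large shock: the complete network undergoes a phase transition
print(failures(10, "complete", 15.0), failures(10, "paired", 15.0))  # 10 2
```

The crossover is the paper's punchline in miniature: diversification that protects the system against small shocks is exactly what transmits a sufficiently large shock to every bank at once.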
Extrapolating from this paper, one could imagine that risk sharing could induce a false sense of security that would ultimately make a financial system substantially less stable. At first a more interconnected system shrugs off smaller shocks with seemingly no adverse impact. This leads bankers and policymakers to believe that the system can handle even more risk because it has become more stable. However, at some point the increased risk taking leads to losses sufficiently large to trigger a phase transition, and the system proves to be even less stable than it was with weaker interconnections.
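The intuition behind this phase transition can be illustrated with a toy calculation. To be clear, this is a stylized sketch under assumed numbers, not the Acemoglu, Ozdaglar, and Tahbaz-Salehi model itself: each bank holds a loss-absorbing buffer, a shock to one bank's assets is shared equally across all banks in a "complete" network but only across the two banks of a pair in a "paired" network, and a bank fails when its share of the loss exceeds its buffer.

```python
# Toy illustration (not the Acemoglu-Ozdaglar-Tahbaz-Salehi model itself):
# a shock is split across `group_size` banks, and each bank in the group
# fails if its share of the loss exceeds its loss-absorbing buffer.

def failures(shock, group_size, buffer):
    """Number of failing banks when a shock is shared by `group_size` banks."""
    loss_per_bank = shock / group_size
    return group_size if loss_per_bank > buffer else 0

n, buffer = 10, 1.0  # ten banks, each able to absorb a loss of 1.0

# Small shock: full diversification absorbs it; a pair of banks does not.
small = 3.0
print(failures(small, group_size=n, buffer=buffer))  # complete network: 0 fail
print(failures(small, group_size=2, buffer=buffer))  # paired network: 2 fail

# Large shock: full diversification drags every bank under -- the "phase
# transition" -- while the paired network confines the damage to one pair.
large = 12.0
print(failures(large, group_size=n, buffer=buffer))  # complete network: 10 fail
print(failures(large, group_size=2, buffer=buffer))  # paired network: 2 fail
```

The numbers (ten banks, buffers of 1.0, shocks of 3.0 and 12.0) are arbitrary; the point is only that the ranking of the two network structures reverses as the shock grows.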
While interconnections between financial firms are a theoretically important determinant of contagion, how important are these connections in practice? "Financial Firm Bankruptcy and Contagion," by Jean Helwege and Gaiyan Zhang, analyzes the spillovers from distressed and failing financial firms from 1980 to 2010. Looking at the financial firms that failed, they find that counterparty risk exposures (the interconnections) tend to be small, with no single exposure above $2 billion and the average a mere $53.4 million. They note that these small exposures are consistent with regulations that limit banks' exposure to any single counterparty. They then look at information contagion, in which the disclosure of distress at one financial firm may signal adverse information about the quality of a rival's assets. They find that the effect of these signals is comparable to that found for direct credit exposure.
Helwege and Zhang's results suggest that we should be at least as concerned about separate banks' exposure to an adverse shock that hits all of their assets as we should be about losses that are shared through bank networks. One possible common shock is the likely increase in the level and slope of the term structure as the Federal Reserve begins tapering its asset purchases and starts a process ultimately leading to the normalization of short-term interest rate setting. Although historical data cannot directly address banks' current exposure to such shocks, such data can provide evidence on banks' past exposure. William B. English, Skander J. Van den Heuvel, and Egon Zakrajšek presented evidence on this exposure in the paper "Interest Rate Risk and Bank Equity Valuations." They find a significant decrease in bank stock prices in response to an unexpected increase in the level or slope of the term structure. The response to slope increases (likely the primary effect of tapering) is somewhat attenuated at banks with large maturity gaps. One explanation for this finding is that these banks may partially recover their current losses with gains they will accrue when booking new assets (funded by shorter-term liabilities).
Overall, the papers presented in this part of the conference suggest that more risk sharing among financial institutions is not necessarily always better. Although greater interconnection may provide the appearance of increased stability in response to small shocks, it can leave the system less robust to larger shocks. The papers also suggest that shared exposures to a common risk are likely to present at least as important a threat to financial stability as interconnections among financial firms, especially as the term structure and the overall economy respond to the eventual return to normal monetary policy. Along these lines, I recently offered some thoughts on how to reduce the risk of large widespread losses due to exposures to a common (credit) risk factor.
By Larry Wall, director of the Atlanta Fed's Center for Financial Innovation and Stability
Note: The conference "Indices of Riskiness: Management and Regulatory Implications" was organized by Glenn Harrison (Georgia State University's Center for the Economic Analysis of Risk), Jean-Charles Rochet (University of Zurich), Markus Sticker, Dirk Tasche (Bank of England, Prudential Regulation Authority), and Larry Wall (the Atlanta Fed's Center for Financial Innovation and Stability).
April 22, 2013
Too Big to Fail: Not Easily Resolved
As Fed Chairman Ben Bernanke has indicated, too-big-to-fail (TBTF) remains a major issue that is not solved, but “there’s a lot of work in train.” In particular, he pointed to efforts to institute Basel III capital standards and the orderly liquidation authority in Dodd-Frank. The capital standards seek to lower the probability of insolvency in times of financial stress, while the liquidation authority attempts to create a credible mechanism to wind down large institutions if necessary. The Atlanta Fed’s flagship Financial Markets Conference (FMC) recently addressed various issues related to both of these regulatory efforts.
The Basel capital standards are a series of international agreements on capital requirements reached by the Basel Committee on Banking Supervision. These requirements are referred to as “risk-weighted” because they tie the required amount of bank capital to an estimate of the overall riskiness of each bank’s portfolio. Put simply, riskier banks need to hold more capital under this system.
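The arithmetic behind "riskier banks need to hold more capital" can be sketched in a few lines. The sketch below uses illustrative Basel I-style risk weights (0 percent for government claims, 20 percent for interbank claims, 50 percent for residential mortgages, 100 percent for corporate loans) and the 8 percent minimum ratio; the portfolios themselves are made-up numbers, not data from any actual bank.

```python
# Sketch of risk-weighted capital arithmetic with illustrative Basel I-style
# weights. Portfolio figures are hypothetical, chosen only to show that a
# riskier asset mix produces a larger capital requirement.

RISK_WEIGHTS = {"government": 0.00, "interbank": 0.20,
                "mortgage": 0.50, "corporate": 1.00}
MIN_RATIO = 0.08  # minimum capital as a share of risk-weighted assets

def required_capital(portfolio):
    """Minimum capital for a portfolio mapping asset class -> exposure."""
    rwa = sum(exposure * RISK_WEIGHTS[asset_class]
              for asset_class, exposure in portfolio.items())
    return MIN_RATIO * rwa

safe_bank  = {"government": 80, "mortgage": 20}   # risk-weighted assets = 10
risky_bank = {"corporate": 80, "mortgage": 20}    # risk-weighted assets = 90

print(required_capital(safe_bank))   # 0.8 units of capital
print(required_capital(risky_bank))  # 7.2 units of capital
```

Two banks of identical total size thus face very different requirements, which is exactly the property that gave banks the incentive to "optimize" portfolios against the risk weights, as discussed below.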
The first iteration of the Basel requirements, known as Basel I, required only 30 pages of regulation. But over time, banks adjusted their portfolios in response to the relatively simple risk measures in Basel I, and these measures became insufficient to characterize bank risk. The Basel Committee then shifted to a more complex system called Basel II, which allows the most sophisticated banks to estimate their own internal risk models subject to supervisory approval and use these models to calculate their required capital. After the financial crisis, supervisors concluded that Basel II did not require enough capital for certain types of transactions and agreed that a revised version called Basel III should be implemented.
At the FMC, Andrew Haldane from the Bank of England gave a fascinating recap of the Basel capital standards as a part of a broader discussion on the merits of complex regulation. His calculations show that the Basel accords have become vastly more complex, with the number of risk weights applied to bank positions increasing from only five in the Basel I standards to more than 200,000 in the current Basel III standards.
Haldane argued that this increase in complexity and reliance on banks’ internal risk models has unfortunately not resulted in a fair or credible system of capital regulation. He pointed to supervisory studies revealing wide disparities across banks in their estimated capital requirements for a hypothetical common portfolio. Further, Haldane pointed to a survey of investors by Barclays Capital in 2012 showing, not surprisingly, that investors do not put a great deal of trust in the Basel weightings.
So is the problem merely that the Basel accords have taken the wrong technical approach to risk measurement? The conclusion of an FMC panel on risk measurement is: not necessarily. The real problem is that estimating a bank’s losses in unlikely but not implausible circumstances is at least as much an art as it is a science.
Til Schuermann of Oliver Wyman gave several answers to the question “Why is risk management so hard?” including the fact that we (fortunately) don’t observe enough bad events to be able to make good estimates of how big the losses could become. As a result, he said, much of what we think we know from observations in good times is wrong when big problems hit: we estimate the wrong model parameters, use the wrong statistical distributions, and don’t take account of deteriorating relationships and negative feedback loops.
David Rowe of David M. Rowe Risk Advisory gave an example of why crisis times are different. He argued that the large financial firms can absorb some of the volatility in asset prices and trading volumes in normal times, making the financial system appear more stable. However, during crises, the large movements in asset prices can swamp even these large players. Without their shock absorption, all of the volatility passes through to the rest of the financial system.
The problems with risk measurement and management, however, go beyond the technical and statistical problems. The continued existence of TBTF means that the people and institutions that are best placed to measure risk—banks and their investors—have far less incentive to get it right than they should. Indeed, with TBTF, risk-based capital requirements can be little more than costly constraints to be avoided to the maximum extent possible, such as by “optimizing” model estimates and portfolios to reduce measured risk under Basel II and III. However, if a credible resolution mechanism existed and failure was a realistic threat, then following the intent of bank regulations would become more consistent with the banks’ self-interest, less costly, and sometimes even nonbinding.
Progress on creating such a mechanism under Dodd-Frank has been steady, if slow. Arthur Murton of the Federal Deposit Insurance Corporation (FDIC) presented, as a part of a TBTF panel, a comprehensive update on the FDIC's planning process for making the agency's new Orderly Liquidation Authority functional. The FDIC's plan for resolving systemically important nonbank financial firms (including the parent holding company of large banks) is to write off the parent company's equity holders and then use its senior and subordinated debt to absorb any remaining losses and recapitalize the parent. The solvent operating subsidiaries of the failed firm would continue in normal operation.
Importantly, though, the FDIC may exercise its new power only if both the Treasury and Federal Reserve agree that putting a firm that is in default or in danger of default into judicial bankruptcy would have seriously adverse effects on U.S. financial stability. And this raises a key question: why isn’t bankruptcy a reasonable option for these firms?
Keynote speaker John Taylor and TBTF session panelist Kenneth Scott—both Stanford professors—argued that in fact bankruptcy is a reasonable option, or could be, with some changes. They maintain that creditors could better predict the outcome of judicial bankruptcy than FDIC-administered resolution. And predictability of outcomes is key for any mechanism that seeks to resolve financial firms with as little damage as possible to the broader financial system.
Unfortunately, some of the discussion during the TBTF panel also made it apparent that Chairman Bernanke is right: TBTF has not been solved. The TBTF panel discussed several major unresolved obstacles, including the complications of resolving globally active financial firms with substantial operations outside the United States (and hence outside both the FDIC and the U.S. bankruptcy court’s control) and the problem of dealing with many failing systemically important financial institutions at the same time, as is likely to occur in a crisis period. (A further commentary on these two obstacles is available in an earlier edition of the Atlanta Fed’s Notes from the Vault.)
Thus, the Atlanta Fed’s recent FMC highlighted both the importance of ending TBTF and the difficulty of doing so. The Federal Reserve continues to work with the FDIC to address the remaining problems. But until TBTF is a “solved” problem, what to do about these financial firms should and will remain a front-burner issue in policy circles.
By Paula Tkac, vice president and senior economist, and
Larry Wall, director of the Center for Financial Innovation and Stability, both in the Atlanta Fed’s research department
May 31, 2012
What is shadow banking?
"Shadow banking is a market-funded, credit intermediation system involving maturity and/or liquidity transformation through securitization and secured-funding mechanisms. It exists at least partly outside of the traditional banking system and does not have government guarantees in the form of insurance or access to the central bank."
As the Deloitte study makes clear, this definition is fairly narrow—it doesn't, for example, include hedge funds. Though Deloitte puts the size of the shadow banking sector at $10 trillion in 2010, other well-known measures range from $15 trillion to $24 trillion. (One of those alternative estimates comes from an important study by Zoltan Pozsar, Tobias Adrian, Adam Ashcraft, and Hayley Boesky from the New York Fed.)
What definition of shadow banking you prefer probably depends on the questions you are trying to answer. Since the interest in shadow banking today is clearly motivated by the financial crisis and its regulatory aftermath, a definition that focuses on systemically risky institutions has a lot of appeal. And not all entities that might be reasonably put in the shadow banking bucket fall into the systemically risky category. Former PIMCO Senior Partner Paul McCulley offered this perspective at the Atlanta Fed's recent annual Financial Markets Conference (video link here):
"...clearly, the money market mutual fund, that 2a-7 fund as it's known here in the United States, is the bedrock of the shadow banking system...
"The money market mutual fund industry is a huge industry and poses massive systemic risk to the system because it's subject to runs, because it's not just as good as an FDIC bank deposit. We found out that in spades in 2008...
"In fact, I can come up with an example of shadow banking that really didn't have a deleterious effect in 2008, and that was hedge funds with very long lockups on their liability. So hedge funds are shadow banks that are levered up intermediaries, but by having long lockups on their liabilities, then they weren't part and parcel of a run because they were locked up."
The narrower Deloitte definition is thus very much in the spirit of the systemic risk definition. But even though this measure does not cover all the shadow banking activities with which policymakers might be concerned, other measures of the trend in the size of the sector look pretty much like the one below, which is from the Deloitte report:
The Deloitte report makes this sensible observation regarding the decline in the size of the shadow banking sector:
"Does this mean that the significance of the shadow banking system is overrated? No. The growth of shadow banking was fueled historically by financial innovation. A new activity not previously created could be categorized as shadow banking and could creep back into the system quickly. That new innovation might be but a distant notion at best in someone's mind today, but could pose a systemic risk concern in the future."
Ed Kane, another participant in our recent conference, went one step further with a familiar theme of his: new shadows are guaranteed to emerge, as part of the "regulatory dialectic"—an endless cycle of regulation and market innovation.
In getting to the essence of what the future of shadow banking will (or should) be, I think it is instructive to consider a set of questions that were posed at the conference by Washington University professor Phil Dybvig. I'm highlighting three of his five questions here:
"1. Is creation of liquidity by banks surplus liquidity in the economy or does it serve a useful economic purpose?
"2. How about creation of liquidity by the shadow banking sector? Was it surplus? Did it represent liquidity banks could have provided?...
"5. If there was too much liquidity in the economy, why? Some people have argued that it was because of too much stimulus and the government kept interest rates too low (and perhaps the Chinese government had a role as well as the US government). I don't want to take a side on these claims, but it is an important empirical question whether the explosion of the huge shadow banking sector was a distortion that was an unintended side effect of policy or whether it is an essential feature of a healthy economy."
Virtually all regulatory reforms will entail costs (some of them unintended), as well as benefits. Sensible people may come to quite different conclusions about how the scales tip in this regard. A good example is provided by the debate from another session at our conference on reform of money market mutual funds between Eric Rosengren, president of the Boston Fed, and Karen Dunn Kelley of Invesco. And we could see proposals by the Securities and Exchange Commission in the future to enact further reforms to the money market mutual fund industry. But whether any of these efforts are durable solutions to the systemic risk profile of the shadow banking sector must surely depend on the answers to Phil Dybvig's important questions.
By Dave Altig, executive vice president and research director at the Atlanta Fed
December 16, 2011
Maybe this time was at least a little different?
Earlier this week, Derek Thompson, a senior editor at The Atlantic, began his article "The Graph That Proves Economic Forecasters Are Almost Always Wrong" with some observations that don't really require a graph:
"As the saying goes: 'It's hard to make predictions. Especially about the future.' Thirty years ago, it was obvious to everybody that oil prices would keep going up forever. Twenty years ago, it was obvious that Japan would own the 21st century. Ten years ago, it was obvious that our economic stewards had mastered a kind of thermostatic control over business cycles to prevent great recessions. We were wrong, wrong, and wrong."
In a recent speech, Dennis Lockhart—whom most of you recognize as president here at the Atlanta Fed—offered his own thoughts on why forecasts can go so wrong:
"… you may wonder why forecasters, the Fed included, don't do a better job. To answer this question, let me suggest three reasons why forecasts may be off.
"While it's relatively trivial in my view, the first reason involves missing the timing of economic activity. An example of that was mentioned earlier when I explained that GDP for the third quarter had been revised down while the fourth quarter is expected to compensate.
"A second reason that forecasts miss the mark is, in everyday language, stuff happens.
"To be a little more precise, unforeseen developments are a fact of life. In my view, the energy and commodity shocks early in the year had a significant impact on growth in the first half of 2011. The tsunami-related supply disruptions, though temporary, were an exacerbating factor. In fact, a lot of shocks or disruptions are quite temporary and don't cause one to rethink the narrative about where the economy is likely going.
"Which brings me to the third reason why economic prognostications go off track: we, as forecasters, simply get the bigger story wrong.
"What I mean by getting the bigger story wrong is failing to understand the fundamentals at work in the economy."
"Getting the bigger story wrong" is Simon Potter's theme in the New York Fed's Liberty Street Economics blog post, "The Failure to Forecast the Great Recession":
"Looking through our briefing materials and other sources such as New York Fed staff reports reveals that the Bank's economic research staff, like most other economists, were behind the curve as the financial crisis developed, even though many of our economists made important contributions to the understanding of the crisis. Three main failures in our real-time forecasting stand out:
"1. Misunderstanding of the housing boom …
"2. A lack of analysis of the rapid growth of new forms of mortgage finance …
"3. Insufficient weight given to the powerful adverse feedback loops between the financial system and the real economy …
"However, the biggest failure was the complacency resulting from the apparent ease of maintaining financial and economic stability during the Great Moderation."
Potter does not implicate any of his Federal Reserve brethren, but you can add me to the roll call of those having made each of the mistakes on the list.
Should we have known? A powerful narrative that we should have has taken hold. The boom-bust cycle associated with large bouts of asset appreciation and debt accumulation has a long history in economics, and the theme has been pressed home in its most recent incarnation by the work of Carmen Reinhart and coauthors, including the highly influential book written with Kenneth Rogoff, This Time is Different: Eight Centuries of Financial Folly.
Unfortunately, even seemingly compelling historical evidence is not always so clear cut. An illustration of this, relevant to the failure to forecast the Great Recession, was provided in a paper by Enrique Mendoza and Marco Terrones (from the University of Maryland and the International Monetary Fund, respectively), presented last month at a Central Bank of Chile conference, "Capital Mobility and Monetary Policy." What the paper puts forward is described by Mendoza and Terrones as follows:
"… in Mendoza and Terrones (2008) we proposed a new methodology for measuring and identifying credit booms and showed that it was successful in identifying credit boom events with a clear cyclical pattern in both macro and micro data.
"The method we proposed is a 'thresholds method.' This method works by first splitting real credit per capita in each country into its cyclical and trend components, and then identifying a credit boom as an episode in which credit exceeds its long-run trend by more than a given 'boom' threshold, defined in terms of a tail probability event… The key defining feature of this method is that the thresholds are proportional to each country's standard deviation of credit over the business cycle. Hence, credit booms reflect 'unusually large' cyclical credit expansions."
And here is what they find:
"In this paper, we apply this method to data for 61 countries (21 industrialized countries, ICs, and 40 emerging market economies, EMs), over the 1960-2010 period. We found a total of 70 credit booms, 35 in ICs and 35 in EMs, including 16 credit booms that peaked in the critical period surrounding the recent financial crisis between 2007 and 2010 (again with about half of these recent booms in ICs and EMs each)…
"The results show that credit booms are associated with periods of economic expansion, rising equity and housing prices, real appreciation and widening external deficits in the upswing of the booms, followed by the opposite dynamics in the downswing."
That certainly sounds familiar, and supports the "we should have known" meme. But the full facts are a little trickier. Mendoza and Terrones continue:
"A major deviation in the evidence reported here relative to our previous findings in Mendoza and Terrones (2008) is that adding the data from the recent credit booms and crisis we find that in fact credit booms in ICs and EMs are more similar than different. In contrast, in our earlier work we found differences in the magnitudes of credit booms, the size of the macroeconomic fluctuations associated with credit booms, and the likelihood that they are followed by banking or currency crises.
"… while not all credit booms end in crisis, the peaks of credit booms are often followed by banking crises, currency crises, or Sudden Stops, and the frequency with which this happens is about the same for EMs and ICs (20 to 25 percent for banking and currency crises, 14 percent for Sudden Stops)."
Their notion still supports the case of the "we should have known" camp, but here's the rub (emphasis mine):
"This is a critical change from our previous findings, because lacking substantial evidence from all the recent booms and crises, we had found only 9 percent frequency of banking crises after credit booms for EMs and zero for ICs, and 14 percent frequency of currency crises after credit booms for EMs v. 31 percent for ICs."
In other words, based on this particular evidence, we should have been looking for a run on the dollar, not a banking crisis. What we got, of course, was pretty much the opposite.
No excuses here. Speaking only for myself, I had the story wrong. But the conclusion to that story is a lot clearer now than it was in the middle of the tale.
By Dave Altig, senior vice president and research director at the Atlanta Fed
December 02, 2011
The ongoing lender of last resort debate
Two days do not a policy success make, and it is a fool's game to tie the merits of a policy action to a short-term stock market cycle. But at first blush it does certainly appear that Wednesday's announcement of coordinated central bank actions to provide liquidity support to the global financial system had a positive effect. The policy is described in the Board of Governors press release:
"The Bank of Canada, the Bank of England, the Bank of Japan, the European Central Bank, the Federal Reserve, and the Swiss National Bank… have agreed to lower the pricing on the existing temporary U.S. dollar liquidity swap arrangements by 50 basis points so that the new rate will be the U.S. dollar overnight index swap (OIS) rate plus 50 basis points. This pricing will be applied to all operations conducted from December 5, 2011. The authorization of these swap arrangements has been extended to February 1, 2013."
"Under the program, the Fed lends dollars to other central banks, which in turn make the dollars available to banks under their jurisdiction. The action Wednesday made these emergency Fed loans cheaper, lowering their cost by half a percentage point.
"When the Fed launched the swap lines, it saw them as critical to its efforts to tame the financial storm sweeping the globe. Banks in Europe and elsewhere hold U.S. mortgage securities and other U.S. dollar securities. They get U.S. dollars in short-term lending markets to pay for these holdings. In 2008, when dollar loans became scarce, foreign banks were forced to dump their holdings of U.S. mortgages and other loans, which in turn pushed up the cost of credit for Americans.
"The latest action was at least in part an attempt to head off a repeat of such a spiral."
It is at least interesting that this most recent Fed action occurs as criticism of its past actions to address the financial crisis has once again arisen. The immediate driver is another installment in a series of Bloomberg reports that parse recently released details from Fed lending programs during the period from 2007 to 2009.
I have in the past objected to the somewhat conspiratorial tone in which the Bloomberg folks have chosen to cast the conversation. I certainly do not, however, think it objectionable to have a cool-headed conversation on what we can learn from the Fed's actions during the financial crisis and how it might inform policy going forward. Following on the latest Bloomberg article, Felix Salmon and Brad DeLong have taken up that cause.
It may be useful to start with my institution's official answers to the question: Why did the Federal Reserve lend to banks and other financial institutions during the financial crisis?
"Intense strains in financial markets during the financial crisis severely disrupted the flow of credit to U.S. households and businesses and led to a deep downturn in economic activity and a sharp increase in unemployment. Consistent with its statutory mandate to foster maximum employment and stable prices, the Federal Reserve established lending programs during the crisis to address the strains in financial markets, support the flow of credit to American families and firms, and foster economic recovery."
Neither Salmon nor DeLong argues with this assertion, and even the Bloomberg article includes commentary broadly supporting Fed actions, even if not all details of the implementation. More controversial is this observation from the Fed's frequently asked questions (FAQs):
"The Federal Reserve followed sound risk-management practices under all of its liquidity and credit programs. Credit provided under these programs was fully collateralized to protect the Fed—and ultimately the taxpayer—from loss."
Here is where opinions start to diverge. From DeLong:
"In the fall of 2008, counting the Fed and the Treasury together, a peak of 90% of Morgan Stanley's equity—the capital of the firm genuinely at risk—was U.S. government money. That money was genuinely at risk: had Morgan Stanley's assets taken another dive in value and blown through the private-sector's minimal equity cushion, it would have been taxpayers whose money would have been used to pay off the firm's more senior liabilities. 'Fully collateralized' the loans may have been, but had anything impaired that collateral there was no way on God's Green Earth Morgan Stanley—or any of the other banks—could have come up with the money to make the government whole."
And from Salmon:
"The Fed likes to say that it wasn't taking much if any credit risk here: that all its lending was fully collateralized, etc etc. But it's really hard to look at that red line and have a huge amount of confidence that the Fed was always certain to get its money back. Still, this is what lenders of last resort do. And this is what the ECB is most emphatically not doing."
As Salmon's comment makes clear, he does not view these risks as a repudiation of the appropriateness of what the Fed did during the crisis. And if I read Brad DeLong correctly, his main complaint is not about the programs per se, but on the pricing of the support provided to banks:
"When you contribute equity capital, and when things turn out well, you deserve an equity return. When you don't take equity—when you accept the risks but give the return to somebody else—you aren't acting as a good agent for your principals, the taxpayers.
"Thus I do not understand why officials from the Fed and the Treasury keep telling me that the U.S. couldn't or shouldn't have profited immensely from its TARP and other loans to banks. Somebody owns that equity value right now. It's not the government. But when the chips were down it was the government that bore the risk. That's what a lender of last resort does."
I wish that we could stop commingling TARP and the Fed's liquidity programs. At the very least, the legal authorities for the programs were completely distinct, and the Federal Reserve did not have any direct authority for the implementation of the TARP program. But that is probably beside the point for the current discussion. What is germane is the observation that the TARP funds did come with equity warrants issued to the Treasury. So in that case, there was the equity stake that DeLong urges.
As for the Fed programs, here again is a response taken from Fed FAQs:
"As verified by our independent auditors, the Federal Reserve did not incur any losses in connection with its lending programs. In fact, the Federal Reserve has generated very substantial net income since 2007 that has been remitted to the U.S. Treasury."
This observation does not, of course, repudiate Felix Salmon's point that losses may have been incurred, or the DeLong argument that the rates paid for loans from the Fed were not high enough by some metric. Nor should turning a profit be seen as proof that lending policies were sound (just as incurring losses would not be proof that the policies were foolhardy). But doesn't the record at least provide some support for a case that the Fed used reasonable judgment with respect to its lending decisions and acted as a prudent steward of taxpayer funds even as it took extraordinary measures to address the worst financial crisis since the Great Depression?
In fact, the main point raised by Felix Salmon is not that risks were taken, but that those risks were not communicated in a transparent way:
"And it's frankly ridiculous that it's taken this long for this information to be made public. We're now fully ten months past the point at which the Financial Crisis Inquiry Commission's final report was published; this data would have been extremely useful to them and to all of the rest of us trying to get a grip on what was going on at the height of the crisis. The Fed's argument against publishing the data was that it 'would create a stigma,' and make it less likely that banks would tap similar facilities in future. But I can assure you that at the height of the crisis, the last thing on Morgan Stanley's mind was the worry that its borrowings might be made public three years later. When you need the money, and the Fed is throwing its windows wide open, you don't look that kind of gift horse in the mouth."
One point I want to keep stressing is that we should be clear about what Bloomberg refers to as "secret loans." One last time, from the Fed FAQs:
"All of the Federal Reserve's lending programs were announced prior to implementation and the amounts of support provided were easily tracked in weekly and monthly reports on the Federal Reserve Board's website."
So the missing information was not the sums of money being lent but the exact details of who was receiving those loans. In most cases, these loans were not targeted to specific institutions but were obtained through open funding facilities such as the Term Auction Facility. And, though you can argue the point, stigma was a real concern, as Chairman Bernanke has testified:
"Many banks, however, were evidently concerned that if they borrowed from the discount window, and that fact somehow became known to market participants, they would be perceived as weak and, consequently, might come under further pressure from creditors. To address this so-called stigma problem, the Federal Reserve created a new discount window program, the Term Auction Facility (TAF). Under the TAF, the Federal Reserve has regularly auctioned large blocks of credit to depository institutions. For various reasons, including the competitive format of the auctions, the TAF has not suffered the stigma of conventional discount window lending and has proved effective for injecting liquidity into the financial system."
Salmon argues that this resolution to the stigma problem would not have been undermined by the current rules, which require reporting the lending specifics only with a lag. It is a reasonable argument (in what is, as an aside, a balanced and well-reasoned article by Salmon), and reasonable people can disagree. In any event, lagged reporting of the details on the recipients of Fed loans is now the law. As a consequence, if such liquidity programs are needed again, we can only hope that Felix Salmon's belief turns out to be true.
UPDATE: The Board of Governors has posted a response to recent reports on the Federal Reserve's lending policies.
By Dave Altig, senior vice president and research director at the Atlanta Fed