About


The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.



January 18, 2018


How Low Is the Unemployment Rate, Really?

In 2017, the unemployment rate averaged 4.4 percent. That's quite low by historical standards. In fact, it's the lowest level since 2000, when unemployment averaged 4.0 percent. But does that mean the labor market is only 0.4 percentage points away from being as strong as it was in 2000? Probably not. Let's talk about why.

As observed by economist George Perry in 1970, although movement in the aggregate unemployment rate is mostly the result of changes in unemployment rates within demographic groups, demographic shifts can also change the overall unemployment rate even if unemployment within demographic groups has not changed. Adjusting for demographic changes makes for a better apples-to-apples comparison of unemployment today with past rates.

Three large demographic shifts have been underway since the early 2000s: the average age of the labor force has risen, its educational attainment has increased, and the share that is white and non-Hispanic has declined. These changes are potentially important because older workers tend to have lower unemployment rates than younger workers, workers with more education tend to have lower unemployment rates than those with less, and white non-Hispanics tend to have lower unemployment rates than other racial and ethnic groups.

The following chart shows the results of a demographic adjustment that jointly controls for year-to-year changes in the labor force shares of two sex, three education, four race/ethnicity, and six age groups (see here for more details). Relative to the year 2000, the change in the demographic composition of the labor force has by itself lowered the 2017 unemployment rate by about 0.6 percentage points (depicted by the blue line in the chart).
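
To make the mechanics concrete, here is a minimal sketch of a fixed-weight adjustment of this general flavor (not the exact procedure behind the chart): hold each demographic cell's labor force share at its base-year level and reaggregate the cell-level unemployment rates. The data layout and column names are assumptions.

```python
import pandas as pd

def adjusted_unemployment(df: pd.DataFrame, base_year: int = 2000) -> pd.Series:
    """Fixed-weight unemployment rate by year.

    Assumes one row per year and demographic cell (sex x education x
    race/ethnicity x age) with hypothetical columns: year, cell,
    lf_share (the cell's labor force share), and unemp_rate.
    """
    # Base-year labor force shares serve as fixed weights.
    weights = df.loc[df["year"] == base_year].set_index("cell")["lf_share"]
    weights = weights / weights.sum()  # normalize, just in case

    def weighted_rate(group: pd.DataFrame) -> float:
        rates = group.set_index("cell")["unemp_rate"]
        return float((rates * weights).sum())

    return df.groupby("year").apply(weighted_rate)

# The gap between the published aggregate rate and this fixed-weight
# series isolates the effect of compositional change.
```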

In other words, even though the 2017 unemployment rate is only 0.4 percentage points higher than in 2000, the demographically adjusted unemployment rate (the green line in the chart) is 1.0 percentage points higher. In terms of unemployment, after adjusting for changes in the composition of the labor force, we are not as close to the 2000 level as you might have thought.

The demographic discrepancy is even larger for the broader U6 measure of unemployment, which includes marginally attached and involuntarily part-time workers. The 2017 demographically adjusted U6 rate is 2.5 percentage points higher than in 2000, whereas the unadjusted U6 rate is only 1.5 percentage points higher. That is, on a demographically adjusted basis, the economy had an even larger share of marginally attached and involuntarily part-time workers in 2017 than in 2000.
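
For readers unfamiliar with the alphabet soup: U6 adds marginally attached workers and those working part-time for economic reasons to the numerator (and the marginally attached to the denominator). A quick back-of-the-envelope calculation with purely illustrative numbers:

```python
# Illustrative magnitudes only -- not actual 2017 averages.
unemployed = 7.0            # millions
marginally_attached = 1.6   # millions
part_time_econ = 5.2        # millions, part-time for economic reasons
labor_force = 160.0         # millions

u3 = unemployed / labor_force
u6 = (unemployed + marginally_attached + part_time_econ) / (
    labor_force + marginally_attached)

print(f"U3: {u3:.1%}, U6: {u6:.1%}")  # U3: 4.4%, U6: 8.5%
```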

The point here is that when comparing unemployment rates over long periods, it's advisable to use a measure that is reasonably insulated from demographic changes. However, you should also keep in mind that demographics are only one of several factors that can move the unemployment rate over time. Changes in labor market and social policies, in the mix of industries, and in the technology people use to find work can also alter how labor markets function. This is one reason why estimates of the so-called natural rate of unemployment are quite uncertain and subject to revision. For example, participants at the December 2012 Federal Open Market Committee meeting had estimates for the unemployment rate that would prevail over the longer run ranging from 5.2 to 6.0 percent. At the December 2017 meeting, the range of estimates was almost a full percentage point lower, at 4.3 to 5.0 percent.

January 18, 2018 in Business Cycles, Economic conditions, Labor Markets, Unemployment | Permalink


January 17, 2018


What Businesses Said about Tax Reform

Many folks are wondering what impact the Tax Cuts and Jobs Act—which was introduced in the House on November 2, 2017, and signed into law a few days before Christmas—will have on the U.S. economy. Well, in a recent speech, Atlanta Fed president Raphael Bostic had this to say: "I'm marking in a positive, but modest, boost to my near-term GDP [gross domestic product] growth profile for the coming year."

Why the measured approach? That might be our fault. As part of President Bostic's research team, we've been curious about the potential impact of this legislation for a while now, especially on how firms were responding to expected policy changes. Back in November 2016 (the week of the election, actually), we started asking firms in our Sixth District Business Inflation Expectations (BIE) survey how optimistic they were (on a 0–100 scale) about the prospects for the U.S. economy and their own firm's financial prospects. We've repeated this special question in three subsequent surveys. For a cleaner, apples-to-apples approach, the charts below show only the results for firms that responded in each survey (though the overall picture is very similar).
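
As a rough sketch of what that filter looks like in practice (the column names here are hypothetical, not the BIE's actual data layout), restricting to firms present in every wave takes only a few lines:

```python
import pandas as pd

def balanced_panel_means(responses: pd.DataFrame) -> pd.Series:
    """Mean optimism per survey wave, restricted to repeat respondents.

    Assumes long-format data with hypothetical columns: firm_id,
    survey_date, and optimism (the 0-100 score).
    """
    n_waves = responses["survey_date"].nunique()
    # Keep only firms that answered in every survey wave.
    waves_per_firm = responses.groupby("firm_id")["survey_date"].nunique()
    repeat_firms = waves_per_firm[waves_per_firm == n_waves].index
    panel = responses[responses["firm_id"].isin(repeat_firms)]
    return panel.groupby("survey_date")["optimism"].mean()
```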

As the charts show, firms have become more optimistic about the prospects for the U.S. economy since November 2016, but not since February 2017, and we didn't detect much of a difference in December 2017, after the details of the tax plan became clearer. But optimism is a vague concept and may not necessarily translate into actions that firms could take that would boost overall GDP—namely, increasing capital investment and hiring.

In November, we had two surveys in the field—our BIE survey (undertaken at the beginning of the month) and a national survey conducted jointly by the Atlanta Fed, Nick Bloom of Stanford University, and Steven Davis of the University of Chicago. (That survey was in the field November 13–24.) In both of these surveys, we asked firms how the pending legislation would affect their capital expenditure plans for 2018. In the BIE survey, we also asked how tax reform would affect hiring plans.

The upshot? The typical firm isn't planning on a whole lot of additional capital spending or hiring.

In our national survey, roughly two-thirds of respondents indicated that the tax reform hasn't enticed them into changing their investment plans for 2018, as the following chart shows.

The chart below also makes it apparent that small firms (those with fewer than 100 employees) are more likely than midsize and larger firms to significantly ramp up capital investment in 2018.

For our regional BIE survey, the capital investment results were similar (you can see them here). And as for hiring, the typical firm doesn't appear to be changing its plans. Interestingly, here too, smaller firms were more likely to say they'd ramp up hiring. Among larger firms (more than 100 employees), nearly 70 percent indicated that they'd leave their hiring plans unchanged.
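
A simple cross-tabulation captures the kind of size breakdown described here. Again, the column names are assumptions, not the surveys' actual field names:

```python
import pandas as pd

def response_shares(df: pd.DataFrame) -> pd.DataFrame:
    """Share of firms in each size class giving each hiring response.

    Assumes hypothetical columns: size_class (e.g., 'small', 'midsize',
    'large') and hiring_response (e.g., 'increase', 'no change').
    """
    return pd.crosstab(df["size_class"], df["hiring_response"],
                       normalize="index")  # each row sums to 1
```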

One interpretation of these survey results is that the potential for a sharp acceleration in GDP growth is limited. And that's also how President Bostic described things in his January 8 speech: "For now, I am treating a more substantial breakout of tax-reform-related growth as an upside risk to my outlook."



January 17, 2018 in Business Cycles, Data Releases, Economic conditions, Economic Growth and Development, Economics, Taxes | Permalink

Comments

A key question is how will you allocate the funds from the lower tax. Many are giving their employees bonuses and raises. If this trend continues it will put more money into circulation which should result in improving the economy of the US. Do you agree?

Posted by: Sebastian Carta Jr. | January 19, 2018 at 05:18 PM


January 04, 2018


Financial Regulation: Fit for New Technologies?

In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.

Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.

As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that AI is capable of very good pattern recognition—one of its major strengths. The ability to recognize patterns has a variety of applications including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.

However, I also observed that machine learning (ML), the most widely used branch of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot establish causality.

This combination of strengths and weaknesses implies that ML might provide supervisors with new insights about the workings of the financial system, which they can evaluate against other information. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.

John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.

On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that the digits of nonrandomly selected numbers may differ significantly from the expected frequency distribution. Thus, if a bank is committing fraud, the accounting numbers it reports may deviate significantly from what would otherwise be expected. O'Keefe and Yom's preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
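
For the curious, here is a minimal sketch of a first-digit Benford screen using a simple chi-square comparison (the authors' actual implementation may differ). Under Benford's law, the leading digit d occurs with probability log10(1 + 1/d):

```python
import math
from collections import Counter

# Expected leading-digit frequencies under Benford's law.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x < 1:
        x *= 10
    return int(str(x)[0])

def benford_chi_square(values) -> float:
    """Chi-square statistic comparing leading digits to Benford's law."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())

# A large statistic (relative to a chi-square distribution with 8
# degrees of freedom) flags reported figures whose digits deviate
# from the expected distribution and may merit closer inspection.
```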

Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.

Potentially, ML also measures a borrower's credit risk more effectively than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest lender for personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades has steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.
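
A hypothetical sketch of that calculation, assuming loan-level data with made-up column names (orig_year, fico_score, and grade_num, the rating grade encoded numerically with A=1 through G=7):

```python
import pandas as pd

def fico_grade_correlation(loans: pd.DataFrame) -> pd.Series:
    """Correlation between FICO score and rating grade, by origination year.

    Better grades carry lower grade_num values, so the raw correlation
    is negative; the sign is flipped for readability.
    """
    return loans.groupby("orig_year").apply(
        lambda g: -g["fico_score"].corr(g["grade_num"]))
```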

It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can price a loan's risk more accurately than a simple FICO score-based model would allow. Taken together, the presentations made clear that AI is likely to transform many aspects of the financial sector as well.

January 4, 2018 in Banking, Financial System, Regulation | Permalink


January 03, 2018


Is Macroprudential Supervision Ready for the Future?

Virtually everyone agrees that systemic financial crises are bad not only for the financial system but, even more importantly, for the real economy. Disagreements arise over how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.

In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.

Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the workshop.

The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.

Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from growth to falling gross domestic product (GDP), and vice versa. Their model assigned a very high probability to a transition onto a path of falling GDP in the fourth quarter of 2007, a low probability in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.
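
In the same spirit (though not Brave and Lopez's actual model), a transition probability of this kind could be estimated with a simple logistic regression of next quarter's growth-to-contraction switches on a financial conditions index. Everything below, including the variable names, is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_transition_model(fci: np.ndarray, gdp_growth: np.ndarray):
    """Model P(growth this quarter -> falling GDP next quarter | FCI).

    fci and gdp_growth are aligned 1-D arrays of quarterly observations.
    """
    growing = gdp_growth > 0
    # Condition on quarters with growth; outcome is whether GDP
    # falls in the following quarter.
    mask = growing[:-1]
    y = ~growing[1:][mask]
    X = fci[:-1][mask].reshape(-1, 1)
    return LogisticRegression().fit(X, y)

# model.predict_proba(...) then gives the estimated transition
# probability at any level of the financial conditions index.
```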

Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should have immediately adopted an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.

The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which has an average of four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.

Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored looking at the recent effectiveness of two such tools in Ireland.

In February 2015, with housing prices climbing from their postcrisis lows at a potentially unsafe rate, Irish authorities moved to limit the flow of funds into risky mortgage loans by imposing caps on the maximum permissible loan-to-value (LTV) and loan-to-income (LTI) ratios for new mortgages. These regulations took effect immediately upon their announcement and prevented Irish banks from making loans that violated either the LTV or the LTI requirement.
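
The conformity test such caps impose on a new mortgage is mechanically simple. In the sketch below, the specific limits are placeholders, not figures taken from the paper (the actual Irish caps varied by borrower type):

```python
# Placeholder caps -- illustrative only, not the actual Irish limits.
MAX_LTV = 0.80   # loan-to-value cap (assumed)
MAX_LTI = 3.5    # loan-to-income cap (assumed)

def conforms(loan_amount: float, property_value: float,
             gross_income: float) -> bool:
    """True if a new mortgage satisfies both the LTV and LTI caps."""
    ltv = loan_amount / property_value
    lti = loan_amount / gross_income
    return ltv <= MAX_LTV and lti <= MAX_LTI

# Example: a 250,000 loan on a 300,000 home for a borrower earning
# 80,000 has LTV of about 0.83 and LTI of about 3.1, so the LTV cap
# binds and the loan does not conform.
print(conforms(250_000, 300_000, 80_000))  # False
```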

Crosignani and his coauthors measure a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that did conform largely offset this drop. Additionally, they find that the banks most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. These findings suggest that the regulations may have stopped higher-risk mortgage lending but that other portfolio changes at least partially undid the effect on banks' risk exposure.

January 3, 2018 in Banking, Financial System, Regulation | Permalink

