
FDIC Banking Review

Differentiating Among Critically Undercapitalized Banks and Thrifts*
by Lynn Shibut, Tim Critchfield, and Sarah Bohn**

The Prompt Corrective Action (PCA) provisions1 in the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) require that regulators set a threshold for critically undercapitalized institutions, and that regulators promptly close institutions that breach the threshold unless they quickly recapitalize or merge with a healthier institution.  Many economists expected these provisions to result in dramatically reduced loss rates, or even zero loss rates, for bank failures.

Bank regulators set the threshold for critically undercapitalized institutions to 2 percent tangible capital.  There are a number of reasons why a threshold above 0 percent is appropriate.  Since the value of many types of bank assets is opaque and difficult to estimate, and since troubled banks have an incentive to overstate asset values, it is not unusual for the capital levels of troubled banks to be overstated.1  Thus a threshold slightly above 0 percent may better approximate insolvency on a market-value basis.  In addition, a higher threshold may increase the likelihood that a private-sector solution can be found for a failing institution.

Critics have complained that the 2 percent capital threshold set by regulators is too low.  For example, Benston and Kaufman (1997, p. 154) argued that it appears to be “much too low” and should be increased, citing as evidence the likelihood that most banks with 2 percent tangible capital already have negative market-value capital, the ability of troubled banks to change risk exposure quickly by using derivatives, and the loss rates of post-FDICIA failures.2

Setting the thresholds involves making trade-offs.  Peek and Rosengren (1996, p. 50) summarized them as follows:

It is easy to identify a problem bank at the time of its failure.  The challenge is to identify a problem bank in time to prevent its failure or at least in time to alter its behavior in order to limit the losses to the deposit insurance fund.  Thus an appropriate slogan for early intervention might be “the earlier the better.”  However, such an approach must be tempered by giving appropriate weight to the costs associated with supervisory intervention in banks that are incorrectly identified as “troubled.”

In this article, we studied institutions insured by the Federal Deposit Insurance Corporation (FDIC) that crossed the 2 percent tangible capital threshold or failed between 1994 and 2000.  We separated these banks3 into four groups (low-cost failures, high-cost failures, near-failures that survived, and near-failures that were purchased), and we analyzed differences among the groups.  If there are consistent differences that separate failed banks (and particularly high-cost failed banks) from other seriously troubled banks, there may be opportunities to improve the regulatory treatment of troubled banks—either through a change in the PCA threshold for a critically undercapitalized bank or by other means.

This article begins by providing background information, including a discussion of related literature and the tradeoffs associated with setting the threshold for critically undercapitalized banks.  The article then discusses the data and methodology and reports the results of various comparisons across groups.  The final sections provide concluding remarks and make recommendations.

Background and Literature Review

The PCA provisions in FDICIA were motivated by a desire to reduce supervisory forbearance and failure costs in the banking industry.  Many people, including members of Congress, believed that regulators should have supervised banks and thrifts differently in the 1980s.  Appendix 1 provides a summary of these provisions.

Carnell (1997b) concisely described the overarching goal of PCA:  “to resolve the problems of insured depository institutions at the least-possible long-term loss to the deposit insurance fund (i.e., to avoid or minimize loss to the fund).”  The means for achieving the goal center on incentives.  For banks, PCA was designed to reduce the “moral hazard” inherent in federal deposit insurance by giving the owners and managers of troubled banks an incentive to avoid taking excessive risks by encouraging them to maintain enough capital and by limiting their discretion if capital is impaired.  For regulators, PCA was designed to encourage aggressive action against troubled banks by limiting their ability to practice forbearance and by requiring audits after failures.  PCA also clarified the rules of the game for both bankers and regulators.

Technically, the goal of PCA can be accomplished by reducing either the loss rate of failed banks or the failure rate of banks (or both).  The limits triggered by the thresholds for an undercapitalized bank focus largely on avoiding failure and thus reducing the failure rate.  In contrast, the closure rules triggered by the threshold for a critically undercapitalized bank focus more heavily on reducing the loss rate by ensuring prompt closure of nonviable banks.  But the closure rules could also reduce the failure rate by encouraging banks to seek capital earlier than they would if closure occurred when banks become insolvent on a book-value basis.4

We found no empirical studies that concentrated specifically on the 2 percent PCA threshold.  However, several studies have focused on early intervention and the likelihood that the new thresholds reduced regulatory forbearance.  Peek and Rosengren (1996) studied commercial banks and savings banks in New England from 1988 through 1994, and they found that more than two-thirds of banks downgraded to a CAMEL rating of 4 had a tangible capital ratio indicating they were adequately capitalized under PCA.  Peek and Rosengren concluded that “examiners usually identify problems before PCA guidelines are triggered.”5  The U.S. General Accounting Office (GAO, 1996) came to a similar conclusion in its review of activities from 1992 through 1995.  Peek and Rosengren (1997) also studied the pattern of formal and informal enforcement actions imposed on commercial banks and savings banks in New England from 1989 through 1992.  They found that “formal regulatory actions tend to occur well before most banks become undercapitalized according to PCA capital thresholds, and they include restrictions on bank behavior that tend to be more comprehensive than those in the PCA provisions.”  Jones and King (1995) studied commercial banks operating from 1984 through 1989, and they concluded that the risk-based capital (RBC) thresholds for undercapitalized banks do little to limit supervisory forbearance.6  Thus several authors suggest that the PCA capital thresholds do little to force earlier or stronger intervention by regulators at the stage when the probability of failure is most likely to be reduced by such intervention.7

Two studies investigated changes to the RBC threshold for undercapitalized institutions.  Berger et al. (1991) examined RBC thresholds as part of an analysis of the problems associated with market-value accounting proposals.  Noting that the GAO (1990) had found that some banks in poor condition underreport their loan-loss reserves, they explored several alternative RBC standards that incorporated adjustments to the loan-loss reserve based on nonperforming loan data.  Using Call Report data for all banks from 1982 through 1989, they developed statistical methods for estimating alternative loan-loss reserves.  They used the revised loan-loss reserve figures to adjust the RBC ratios for all banks as of year-end 1989, and compared the adjusted results with the actual RBC ratios.  All of their alternative measures resulted in substantial increases in the number of banks that would have been classified as undercapitalized.8  They concluded that the revision would expand and probably improve the distribution of regulatory scrutiny.

Jones and King (1995) developed an alternative RBC threshold by using data on classified assets and noncurrent loans to enhance the current threshold.  To measure the effectiveness of their alternative, they estimated the prediction error of the actual and revised RBC standard, assuming that all troubled institutions should be classified as undercapitalized under the RBC standard.9  Their revised RBC standard resulted in a significant reduction in the prediction error.

These studies suggest that an investigation of the tradeoffs associated with the 2 percent threshold might bear fruit.  If one can identify the tradeoffs related to the threshold, consider the costs inherent in each, and consider the effects of changing the threshold, one can make a well-informed judgment about the threshold that provides the highest net benefit to society.10  In this article, we are most interested in the potential costs and benefits associated with a marginal change in the 2 percent threshold for critically undercapitalized institutions.11  We examine the consequences of thresholds that identify failure candidates either before or after the optimum time.

We first review the costs associated with the delay of closure.  Theoretically, this includes the operating costs of running an insolvent institution, plus any additional costs associated with risks taken by the banks in hope of surviving.  Gilbert (1992) analyzed commercial banks and concluded that neither of these costs may be very high.  Based on a review of commercial bank failures from 1985 through 1990, he found no statistically significant differences in the loss rates of BIF-member banks that were undercapitalized for different lengths of time before closure.  Thus, whether PCA encourages regulators to close undercapitalized banks earlier or later, Gilbert’s results predict that loss rates would not change significantly.  However, the U.S. savings-and-loan crisis and Japan’s banking crisis provide ample evidence that these costs can be extremely high—particularly if regulators do not carefully monitor undercapitalized banks and limit their activities, or if the delay lasts a long time.

One might argue that the costs of delay can be calculated as the cost of failure (for both the FDIC and other creditors of the receivership), since regulators have the authority to close banks when their market value is zero.12  However, a large part of these costs is probably attributable to the difficulties of measuring the market value of a troubled bank, large shifts in market value, or fraud, rather than the capital threshold used for PCA.13  It is very difficult to separate losses attributable to measurement errors inherent in certain assets or market shifts from losses attributable to a delay in closure caused by a sub-optimal PCA threshold—especially since regulators have the authority to define capital (effectively changing accounting standards) as well as choose the capital threshold.14  The cost associated with fraud may be somewhat easier to ascertain, but one should not automatically assume that there were no delays in closure for bank failures that involved fraud.

Next, we look at the consequences of a PCA threshold that flags failure candidates too soon.  The associated costs differ depending on the outcome of the PCA action.  Possible outcomes include

  • The bank is closed.
  • The bank survives, either with or without a capital infusion.
  • The bank is quickly sold to another institution.

    If the bank is closed, the costs are relatively high.  The owners suffer a loss of the freedom to control their assets and the stigma of failure.  They also bear costs that would have been avoided if the bank had stayed open.15  In some circumstances, the bank’s customers or the local community could suffer.16

    If the bank survives intact, then the costs to the owners may be very low.17  If the bank reaches the 2 percent threshold and the owners sell the bank to another institution, then the costs are probably somewhere between those of the other outcomes.18

    The dynamic effects of the PCA threshold should also be considered.  For some banks, the PCA threshold for critically undercapitalized institutions increases the bank’s resolve to act decisively or accelerates a search for new capital or potential acquirers.  If the PCA threshold triggers a more effective response by the bank, and the improvement in the bank’s response causes the bank to avoid failure, then the PCA threshold would yield a net benefit to the bank (and, most likely, to the FDIC).19

    Under the current PCA threshold, we see little evidence that viable banks have been closed; thus to date the costs of unnecessary closure have probably been negligible or zero.20  It is more difficult to gauge the net cost for the other two outcomes (survival or a quick merger) because they involve predicting the relationship between the threshold level and outcomes of troubled banks, estimating the associated economic effects, and balancing the economic effects against the lost freedom experienced by the bank owners.

The optimum threshold for critically undercapitalized banks may well vary across the business cycle.  Because banks that breach the PCA threshold have such a short time to recapitalize, it appears likely that the costs imposed by a threshold that was set too high would be heavier during times when the market for bank franchises is weak, and lower during times when markets are strong.21  The cost of lengthy delays can be extremely high, although the cost of brief delays (if accompanied by close supervision) may be low.  In some unusual circumstances, the cost of identifying banks too early could be substantial.  For example, the application of PCA thresholds to money-center banks during the less-developed-country crisis in the early 1980s might have caused serious damage to the economy.22  Thus the GAO (1996, pp. 56–57) states that “we see the issue as one of striking a proper balance between the need for sufficient regulatory discretion to respond to circumstances at a particular institution and the need for certainty for the banking industry about what constitutes an unsafe and unsound condition and what supervisory actions would be expected to result from those conditions.”  Mishkin (1997) discusses the need for discretion in unexpected situations that involve systemic risk.23  In more normal circumstances, the cost of identifying banks too early would be much lower.

    Few studies have attempted to quantify the net cost or benefit of changing the capital thresholds for closure.  The FDIC (1997, chapter 12) made an admittedly very rough estimate of the benefits if PCA had been in effect from 1980 through 1992 by assuming that a failing bank’s operating losses (excluding loan-loss provisions) would have been avoided if closure had occurred according to the PCA rules.  The authors of the FDIC study found that 343 failed banks would have been closed earlier, yielding a cost savings of about $825 million.  They identified 143 banks that breached the 2 percent PCA threshold but did not fail, but they made no attempt to estimate the associated costs.

    Barakova and Carey (2001) shed some light on these tradeoffs in their analysis of the characteristics of banks that recover from distress and the pace of their recovery.24  Using data from FDIC-insured commercial banks from 1984 through 1999, they identified 345 banks whose equity ratios fell below 2 percent but did not fail.  Only 51 percent of those banks recovered in one year, 71 percent recovered in two years, and 91 percent in four years.25  They found that the typical sources of bank recapitalization differed depending on the recovery time frame.  Equity infusions played a substantial role regardless of the recovery time frame, but for banks that recovered more slowly, the role of equity infusions was smaller and the role of income larger.  They concluded that regulators should insist on the rapid issuance of sufficiently large amounts of capital if they want a rapid recovery of a seriously undercapitalized bank.  This analysis indicates that if an increase in the PCA threshold causes viable banks to be classified as critically undercapitalized, it may well force some banks to merge (or possibly even fail) that would otherwise survive intact.

    Barakova and Carey also found that most distressed banks did not issue new equity until after loan losses began to decline—both before and after FDICIA, perhaps because the cost of equity is prohibitive when significant loan losses are being reported.  This indicates that viable banks that breach the threshold while their loan losses have not clearly declined may be most likely to suffer from the costs associated with a PCA threshold above the optimum level.

    By analyzing groups of critically undercapitalized banks since FDICIA, we provide insight into the effects of the 2 percent threshold to date, and we highlight areas where changes in the threshold could provide a net benefit to society.  Note, however, that the best regulatory change may be something other than a change in the PCA threshold—particularly if the differences between the outcome groups are unrelated to capital.  

Data and Methodology

    This section discusses the data and methods used for analysis.

Sample Selection

    For 1994 to 2000, we analyzed the bank failures plus any bank that fell below the 2 percent capital requirement established in FDICIA.  We began in 1994 because the FDICIA provisions aimed at reducing the FDIC’s losses would have been fully implemented by then.  For failed institutions, we segregated those with relatively low resolution costs from other failures.  For each institution that fell below 2 percent tangible capital without failing, we looked to see what happened to the institution after getting into capital trouble.  We found that about one-half of this group was absorbed by other organizations and one-half survived independently. 

    There were 48 failures from 1994 through 2000.  Three notable failures were eliminated from our analysis because of massive fraud or other extraordinary circumstances.26  We labeled 16 failed institutions with a resolution cost of less than 12 percent of assets as low-cost failures.  This left 29 institutions with an estimated resolution cost of over 12 percent of assets.  We selected 12 percent as the cutoff for low-cost failures because it was well below the average loss rate experienced by the FDIC and also provided a reasonable sample size.  We used two different reference periods to analyze the data:  for the 32 institutions that became critically undercapitalized before failure, we used the date when they breached the threshold; for the remaining 13 institutions, we used the final Call Report date.27  For the remainder of the article, the analysis date refers to the date an institution fell below the PCA capital limit (although in fact for a few institutions it refers to the failure date).28
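The four-way grouping described above can be sketched as a simple classification rule.  This is an illustrative sketch only; the function and argument names are ours, not drawn from FDIC systems, and the 12 percent cutoff is the one chosen in this study.

```python
# Hypothetical sketch of the study's four outcome groups: failures are
# split at a 12 percent resolution-cost cutoff, and near-failures by
# whether they were absorbed into another organization within one year.

def classify(failed, resolution_cost_ratio=None, absorbed_within_year=None):
    """Assign an institution to one of the four study groups."""
    if failed:
        return ("low-cost failure" if resolution_cost_ratio < 0.12
                else "high-cost failure")
    return ("near-failure, purchased" if absorbed_within_year
            else "near-failure, survived")
```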

    We found 44 institutions that fell below 2 percent tangible capital from 1994 to 2000 and that did not fail as of the fourth quarter of 2001.  Five institutions in this group were eliminated because of special circumstances.29  The remaining 39 institutions consisted of 21 institutions that were absorbed into another organization within one year of breaching the threshold and 18 institutions that survived intact for at least one year.30  We refer to the total group of 39 as near-failures.

Pre-PCA Condition Data

    We collected a large volume of data from Call Reports and Thrift Financial Reports (TFR) on all the institutions selected.  We constructed variables from both reports that correspond to consistent measures of financial performance for the two-year period before an institution failed or fell below 2 percent tangible capital; for institutions that survived after reaching the threshold, we also collected data for the year after they fell below 2 percent tangible capital.  We developed annualized performance ratios from a rolling set of the previous four quarters.  This calculation allowed us to create aggregate data from any ending quarter that would contain annual ratios that covered all four seasons, and this tended to smooth the effects of sudden changes in accounting or performance.
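The rolling four-quarter annualization can be sketched as follows.  The variable names are ours, not Call Report field names, and the denominator convention (mean of the trailing four quarters' average assets) is an assumption.

```python
def annualized_ratio(quarterly_flows, quarterly_avg_assets, end_quarter):
    """Annualize a flow item (e.g., net income) as of end_quarter by
    summing the trailing four quarters and dividing by the mean of the
    trailing four quarters' average assets.  Requires end_quarter >= 3."""
    flows = quarterly_flows[end_quarter - 3:end_quarter + 1]
    assets = quarterly_avg_assets[end_quarter - 3:end_quarter + 1]
    return sum(flows) / (sum(assets) / 4)
```

Because every window spans four consecutive quarters, each ratio covers all four seasons, which is what smooths seasonal and one-time accounting effects.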

    Examination data were collected for each institution from any full examination over the three years immediately before failure or the three years before an institution fell below 2 percent tangible capital.  Classified loans were taken from the last full examination before the institution fell below the PCA capital limit.  These loans were measured against assets reported in the quarter after the examination was completed or, if a failed institution did not file its last report, the previous quarter.31

    To measure local bank distress, we created a problem bank index based on CAMELS 4- and 5-rated banks in each metropolitan statistical area (MSA) or county.  The index represents the percentage of deposits in each bank’s location (or locations) that were held by problem banks.32
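Under our reading of that description, the index is a deposit-weighted average of each market's problem-bank deposit share.  A minimal sketch (names are illustrative):

```python
def problem_bank_index(bank_deposits_by_market, market_problem_share):
    """bank_deposits_by_market maps each MSA or county to the deposits
    the bank holds there; market_problem_share maps each market to the
    fraction of that market's total deposits held by CAMELS 4- and
    5-rated banks.  Returns the bank's deposit-weighted exposure to
    problem-bank markets."""
    total = sum(bank_deposits_by_market.values())
    return sum((dep / total) * market_problem_share[m]
               for m, dep in bank_deposits_by_market.items())
```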

We also used branch-level data to create a geographic distribution index, similar to the Herfindahl-Hirschman Index (HHI).  Instead of measuring market concentration, our geographic index measures an institution’s exposure to each market in which it operates (from 0, complete diversification, to 1, no diversification).33
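An HHI-style index over an institution's own deposit distribution can be sketched in a few lines (an illustration of the concept, not the FDIC's implementation):

```python
def geographic_index(deposits_by_market):
    """Sum of squared shares of the institution's own deposits across
    its markets: 1.0 when all deposits sit in one market (no
    diversification), approaching 0 as deposits spread evenly over
    many markets."""
    total = sum(deposits_by_market.values())
    return sum((d / total) ** 2 for d in deposits_by_market.values())
```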

    To test for understated reserve levels, we followed the technique adopted by Jones and King (1995).  Jones and King calculated an adjusted reserve level based on a pooled time-series regression, where charge-offs over a two-year period were regressed against the amount of classified loans as of the beginning of the two-year period.  The resulting equation is as follows:34

Two-year charge-offs = 0.006 + (0.95 × Loss) + (0.57 × Doubtful) + (0.15 × Substandard) + (0.04 × Special Mention)

    Jones and King estimated the appropriate reserves by applying these parameter estimates to the classified assets held by each bank in the sample (implicitly assuming that reserves should cover about two years of charge-offs).35  We compared the total from this calculation with the reserves reported in the financial reports filed in the quarter closest to the examination date.
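Applying the quoted Jones and King coefficients is mechanical.  In this sketch we assume all classified amounts, and therefore the 0.006 intercept, are expressed as fractions of total assets; that scaling is our assumption, not stated in the source.

```python
def estimated_two_year_chargeoffs(loss, doubtful, substandard, special_mention):
    """Estimated two-year charge-offs (the proxy for needed reserves)
    from the pooled regression above; inputs are classified amounts
    expressed as fractions of total assets."""
    return (0.006 + 0.95 * loss + 0.57 * doubtful
            + 0.15 * substandard + 0.04 * special_mention)

def reserve_shortfall(reported_reserves, loss, doubtful, substandard,
                      special_mention):
    """Positive when reported reserves fall short of the estimate,
    i.e., when reserves appear understated."""
    return (estimated_two_year_chargeoffs(loss, doubtful, substandard,
                                          special_mention)
            - reported_reserves)
```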

Resolution and Receivership Data

    For each failed institution, we collected data about the resolution and the receivership.  The resolution data came from FDIC Board cases and associated working papers; the data included the number of bidders, the premiums for the top two bids, and the FDIC’s estimated loss at the time of failure.  The receivership data came primarily from the FDIC’s general ledger, the FDIC Failed Bank Cost Analysis, and pro forma financial statements prepared at bank closing.  For the most part, we reported results over the life of the receivership; for receiverships that had not terminated as of year-end 2000, results through year-end 2000 were used.36  To improve comparability across banks, we reported many items as a percentage of total assets at failure.  The FDIC general ledger was the source for asset loss rates, assets sold to acquirers, balance sheet composition (including adjustments to the balance sheet made during the receivership), and receivership income and expense items.  The primary source for the FDIC loss rates used in this study was the Failed Bank Cost Analysis, which relies on FDIC general ledger data.  Appendix 2 includes a discussion of the method we used to calculate FDIC loss rates.

    We also collected receivership data from the pro forma statements prepared during the institution closings.  These provided the banks’ balance sheets as of the date of failure, adjustments made to the balance sheet to switch from Generally Accepted Accounting Principles (GAAP) accounting to the cash-basis accounting used by the FDIC’s receiverships, and the initial balance sheet recorded on the FDIC’s general ledger. 

    The accounting differences between ongoing institutions and receiverships are substantial; thus any comparison of pre-failure and post-failure results must be done carefully.  Appendix 2 discusses the major accounting differences, as well as certain adjustments we made to improve the comparability of the data across institutions and across time.

Statistical Analysis Methods

    With the wealth of data collected for the institutions chosen for this study, we required a statistical test for finding significant differences in measures of central tendency for any two of the institution groupings.  We were limited by the relatively low number of institutions in each group, since the study period was only 1994–2000.  The groups ranged in size from 16 to 45.  In addition, many of the financial ratios in this study tended not to be distributed normally. Although a two-sample t-test is robust against minor normality departures, this study’s data contain extreme outliers and are highly skewed.  Thus, the reliability of a simple parametric test is suspect.37  In this study, a nonparametric analysis appears more appropriate since it does not involve assumptions of normality and is not sensitive to the presence of outliers.

    We used the Wilcoxon rank sum test to compare the location of two given distributions with the same general shape, which is essentially a difference of medians test.  Given a financial or other ratio and two groups of institutions, the Wilcoxon statistic tests the null hypothesis that the two groups have the same probability distribution versus the alternative hypothesis of a location shift (that is, a difference in medians).  The test provides a p-value for use in determining the conclusion of the test, given a level of significance.  We are particularly interested in financial ratios for the compared groups that are statistically different at the 1 percent, 5 percent, and 10 percent significance levels.
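The mechanics of the rank-sum test can be made concrete with a small hand-rolled version using the large-sample normal approximation.  This is a sketch that assumes no tied values; a production analysis would use a statistics package with exact small-sample tables and tie handling.

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Assumes no tied values.  Returns (W, p), where W is the rank sum
    of the first sample in the pooled ordering."""
    nx, ny = len(x), len(y)
    pooled = sorted(x + y)
    w = sum(pooled.index(v) + 1 for v in x)      # 1-based ranks
    mean = nx * (nx + ny + 1) / 2.0              # E[W] under the null
    var = nx * ny * (nx + ny + 1) / 12.0         # Var[W] under the null
    z = (w - mean) / math.sqrt(var)
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w, p
```

With groups as small as those in this study (16 to 45 institutions), the exact distribution of W would be preferable to the normal approximation; the sketch is shown only to make the test's logic concrete.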

    In addition to the Wilcoxon test, we applied the Kendall’s tau test (test of independence) to the failed bank data.  As explained above, failures were split into two groups—high-cost and low-cost—for the purpose of refining the differentiation between undercapitalized institutions.  Dividing the failures into the two groups involved an educated but somewhat arbitrary selection of a loss rate threshold.  Although the group-by-group analysis is valuable especially for comparing failures with near-failures and low-cost failures with near-failures, we also chose to study the correlation of failure loss rate with other variables for the entire set of institutions in the study.  The Kendall’s tau test involves computing a correlation statistic based on paired-sign statistics.  Like the Wilcoxon test, this non-parametric test is robust to outliers and does not involve assumptions of normality.  The test provides a sample correlation coefficient (ranging from –1 to 1) and a p-value testing the null hypothesis of independence of two variables versus the alternative hypothesis that the two variables are correlated. 
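Kendall's tau itself is simple to compute from pairwise comparisons.  The sketch below implements tau-a and assumes no tied values; the article does not say which tie-handling variant was used, so this is illustrative only.

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs divided by the
    total number of pairs; assumes no tied values."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:          # the pair moves in the same direction
                concordant += 1
            elif s < 0:        # the pair moves in opposite directions
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```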


    We begin with a comparison of failures and near-failures.  This is followed by comparisons of low-cost and high-cost failures, low-cost failures and near-failures, near-failures by outcome (survived versus purchased), and surviving near-failures over time (PCA date versus one year later).  Most of the comparisons were based on the PCA date, defined as the quarter-end date when the institution breached the 2 percent threshold; for failed institutions that never breached the threshold, the final Call Report date was treated as the PCA date.

Failures versus Near-failures

    This section discusses failures compared with near-failures that breached the PCA threshold.  Table 1 provides the comparison of failures and near-failures as of the quarter that the institutions breached the PCA threshold; table 2 provides the same comparison one year earlier.  The 45 failures in our study held $4.1 billion in assets as of the PCA date, and the near-failures held $9.0 billion in assets at the PCA date.  The average size for the failures was just $92 million, while the average size for the near-failures was more than twice as large ($230 million).  However, the difference in the median size was much smaller and statistically insignificant.

Table 1

    As of the last examination before failure, 93 percent of the failures and 85 percent of the near-failures had a CAMELS rating of 4 or 5.  None of the institutions had a CAMELS rating of 1, but six institutions were rated 2; only one of the six failed.

The median ratios calculated for condition and performance of each group revealed a few differences, but for many items there was no meaningful difference.  At the quarter when the failures fell below the PCA threshold, their condition was predictably worse than that of the near-failures in most respects except capital.  The failures had higher median equity capital ratios than the near-failures (1.39 percent compared with 0.97 percent), and significantly higher core capital (leverage) ratios (1.28 percent versus 0.87 percent).  The 13 failures that had capital above the PCA threshold appear to have lifted the core capital ratio for the entire group of failed institutions, which would explain the significant differences in capital ratios.38

    For both groups, median capital declined precipitously—over 400 basis points—during the year before they breached the PCA threshold or failed.  This provides evidence that a modest increase in the PCA threshold may not result in a substantial change in the timing of the associated regulatory actions for institutions that breach the threshold.

Table 2

The portfolio composition measures showed median values that were generally worse for the institutions that eventually failed, but the differences in medians were statistically insignificant.  On the balance sheet, the failed institutions held a median of just 9 percent of assets in securities while the near-failures held 12 percent.  For the failed institutions this ratio held at 9 percent during the year before they fell below the PCA threshold, but the near-failures actually increased their median securities holdings from 10 percent of assets one year before the PCA date to 12 percent.  The composition of the loan portfolios for the failed institutions appeared slightly more risky than for the near-failures, but again none of these differences were significant.  The median value for the percentage of assets held in 1- to 4-family mortgages was 11 percent for the failed institutions while the near-failures held 16 percent in this typically less risky loan type.  For commercial real estate (excluding multifamily residential property), a generally riskier loan type that caused many failures in the 1980s, the median value was 16 percent for the failures while the near-failures held just 11 percent.  For the failed institutions, the 16 percent actually constituted an increase in the median value for commercial real estate (from 14 percent) during the year before falling below the PCA threshold.  This increase may relate to the higher median shrinkage in total assets experienced by the failures in the year preceding the PCA date (12.61 percent versus 8.52 percent), as they sold off more marketable assets to improve their capital ratios.

The patterns in troubled loans showed more pronounced differences between the two groups.  The level of nonperforming assets was significantly higher for failed institutions than for near-failures.  For failed institutions, the median value for noncurrent loans plus other real estate owned (OREO) as a percentage of assets was 9.93 percent at the PCA date.  The near-failures reported a median level of 6.69 percent of assets, 3.24 percentage points less than the failed institutions reported.  One year earlier the failures registered a median of 8.56 percent while the near-failures were at 4.62 percent, but the difference was not statistically significant.  The source of higher nonperforming assets was real estate loans:  at the PCA date, the median ratio of noncurrent real estate loans to total real estate loans was 7.97 percent for failures—significantly higher than the 3.54 percent median value for near-failures.  Moreover, the failures held significantly more OREO (2.47 percent of assets for the failures versus 0.80 percent for the near-failures, as of the PCA date).  For commercial and industrial (C&I) loans the differences were neither as dramatic nor as significant, but the problems in these loans arose earlier for the failures.  The failed institutions increased to a median noncurrent rate of 8.73 percent for C&I loans when they hit the PCA threshold, from 5.46 percent one year earlier.  The near-failures experienced a sharper change, from 1.70 percent one year before hitting the PCA threshold to a median noncurrent rate of 7.23 percent at the PCA date.  However, in neither period was the median growth in noncurrent loans significantly different between the groups.

    The differences in nonperforming assets across the groups were not accompanied by significant differences in loan provisions and charge-offs.  The charge-off rates were very similar for the two groups, with the near-failures recording slightly higher rates than the failures in both years.  In the last four quarters the median net charge-off rate on loans more than doubled from one year earlier for both groups.  Loan-loss provisions (as a percentage of average assets) showed a similar increase over time.  In both periods, the ratio of loss provisions to average loans was also comparable across groups.  The ratio of the loss allowance to total loans was higher for the failures one year before the PCA date (2.62 percent versus 2.07 percent).  As of the PCA date, the ratio had increased substantially for both groups, but the increase was larger for near-failures.  Thus at the PCA date, the ratio of loss allowance to total loans was 4.37 percent for failures and 4.50 percent for near-failures.

    At the PCA date, the relationship between reserves and noncurrent loans was significantly weaker for the failed institutions than for the near-failures.  The coverage ratio, by measuring the dollar amount of reserves set aside for each dollar of noncurrent loans, gives a relative measure of the protection available for charge-offs before earnings and capital must suffer.  The median coverage ratio was slightly higher for the failures than the near-failures one year before the PCA date (44.74 percent versus 43.47 percent).  While the near-failures improved their ratios in the following year (to 63.98 percent), the coverage ratio for the median failure deteriorated to 43.00 percent as of the PCA date.  Although the near-failures’ capital levels were slightly lower, their reserve levels were much better than those of the institutions that eventually failed.  Figure 1 shows the distribution of coverage ratios across the groups as of the PCA date:  more than one-half of the high-cost failures had coverage ratios below 40 percent, but less than 30 percent of the low-cost failures and near-failures had coverage ratios below 40 percent.
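    The coverage ratio itself is simple arithmetic.  A minimal sketch in Python (the dollar amounts below are hypothetical, not figures from the study):

```python
# Illustrative sketch of the coverage ratio described in the text:
# reserves set aside for each dollar of noncurrent loans, expressed in percent.
# The dollar amounts used below are hypothetical, not data from the study.

def coverage_ratio(loan_loss_reserves: float, noncurrent_loans: float) -> float:
    """Loan-loss reserves as a percentage of noncurrent loans."""
    if noncurrent_loans <= 0:
        raise ValueError("undefined when noncurrent loans are not positive")
    return 100.0 * loan_loss_reserves / noncurrent_loans

# A hypothetical failing bank: $4.3 million of reserves against
# $10.0 million of noncurrent loans.
print(round(coverage_ratio(4.3, 10.0), 2))  # 43.0, near the failures' median
```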

    The failures had a higher level of classified assets (13.00 percent versus 10.47 percent), but the difference was smaller than that for nonperforming assets and was statistically insignificant.  Surprisingly, the Jones and King estimate of reserves was lower than actual reserves for the failures but higher than actual reserves for the near-failures; the difference between the groups, however, was insignificant.  Two phenomena might explain this result.  First, when examiners find that a bank is insolvent, they stop identifying losses and instead shift to preparing for failure.  Therefore the classified-asset figures are probably incomplete (and thus understated) for some, perhaps most, of the failures.  Second, the formula relied on examination data from the last full examination before an institution fell below the PCA threshold.  Because we had to reach back in time for these data, the formula would not have captured changes to the balance sheet during the intervening period.  Since many of the institutions experienced increases in noncurrent loans during the year before they breached the PCA threshold, and may also have experienced deterioration in classification levels,39 we think that the classified loan data were too distant from the date these institutions fell below the PCA threshold.

    When capital plus reserves as a percentage of assets is compared as of the date these institutions fell below the PCA limit, the differences between the failed institutions and the near-failures virtually disappear.  The median ratio of equity capital plus reserves to assets was 4.22 percent for the failed institutions, while the near-failures were not significantly different at 4.03 percent.  Even one year before, the failures had a median ratio of 8.14 percent while the near-failures had a median ratio of 7.77 percent.  Thus, once noncurrent loan levels are set aside, the differences between the groups were insignificant.

    Figure 1

    Performance up to the quarter when these institutions fell below the PCA threshold was remarkably similar.  The return on assets (ROA) for the four quarters leading up to the breach of the PCA threshold was poor for both groups.  The failures had a median ROA of –5.62 percent, a slightly smaller loss than the near-failures’ median of –5.76 percent.  Compared with the near-failures, the failures had somewhat lower earnings the year before (–1.20 percent ROA for failures, –0.72 percent ROA for near-failures).  The failures began reporting quarterly losses in their last seven quarters before falling below the PCA threshold; the near-failures showed median quarterly losses for their last five quarters.40

    Some performance ratios showed significant differences.  The failed institutions generated a median ratio of noninterest expenses to earning assets of 8.19 percent in their last four quarters, up from 6.10 percent one year earlier.  The near-failures generated a noninterest expense ratio of just 6.66 percent, up from 6.21 percent one year earlier.  Although the two groups started at nearly the same level of noninterest expenses, the failing institutions generated more losses from noninterest expenses than the near-failures.  This may relate to the higher levels of troubled assets, since asset workouts are resource intensive.

    The only statistically significant difference in performance between the groups in the year before these institutions fell below the PCA threshold was net interest margin.  The failing institutions reported a net interest margin of 4.93 percent, while the near-failures reported 4.68 percent, a statistically significant 25 basis point difference.  The yield on assets was a statistically insignificant 30 basis points higher for the failures, and the cost of funding earning assets was a statistically insignificant 10 basis points lower.  Although it is difficult to discern from the financial reports, the net-interest-margin advantage enjoyed by the failures may have stemmed from interest accruals on loans that became past due as these institutions came closer to failing.

    In summary, the most important differences between failures and near-failures related to reserve levels and nonperforming assets.   On balance sheet composition, capital, and performance, these two groups were not very different.

    Low-cost Failures versus High-cost Failures

    A comparison of low-cost and high-cost failures gives insight into whether the current capital threshold provides the optimal intervention timing for banks and regulators, and whether additional items might assist regulators to reduce losses to the insurance funds.

    Comparisons of Pre-failure Financial Condition and Performance

    Table 3 compares low-cost and high-cost failures as of the PCA date, and table 4 makes the same comparison one year earlier.  Table 5 provides Kendall’s tau correlations with the FDIC loss rate for both periods.41  There were 16 low-cost failures, holding $1.4 billion in assets as of the PCA date.42  There were 29 high-cost failures, holding $2.7 billion in assets as of the PCA date.  There were no significant differences related to asset size.
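    Kendall’s tau, used for the correlations in table 5, is a rank correlation based on concordant and discordant pairs of observations.  A minimal sketch of the simplest variant (tau-a, which ignores ties; the article does not state which variant was used), with made-up data:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Note: tie-corrected variants (e.g., tau-b) exist; the article's choice
    is not stated, so this is only an illustration of the statistic."""
    if len(x) != len(y) or len(x) < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical data: securities share of assets vs. FDIC loss rate.
securities = [10, 15, 8, 12, 9]
loss_rate  = [12, 3, 18, 6, 14]
print(kendall_tau_a(securities, loss_rate))  # -1.0: every pair is discordant
```

A negative tau, as here, matches the article’s finding that higher securities holdings went with lower FDIC losses.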

    As expected, most of the median condition ratios were worse for the high-cost failures than for the low-cost failures.  The median capital ratio of the low-cost failures was 2.09 percent, down from 6.68 percent one year before the institutions fell below the PCA threshold.  The median for the high-cost failures was 0.78 percent, down from 5.65 percent one year earlier.  The median capital held by the low-cost failures was significantly higher than that held by the high-cost failures when they fell below the PCA threshold, but the difference one year earlier was insignificant.  Similarly, the median core capital (leverage) ratio was 2.03 percent for the low-cost failures and 0.98 percent for the high-cost failures (a significant difference).  However, the correlations between the capital measures and the FDIC loss rate were insignificant.

    A related difference appeared in the level of deposits funding assets.  The low-cost failures ended up with deposits at 95 percent of assets, while the high-cost failures ended up with deposits at 97 percent of assets.  This significant difference may have been caused simply by the lower level of capital backing the assets of the high-cost failures.

    The low-cost failures had a higher proportion of low-risk assets than the high-cost failures.  The low-cost failures had a median ratio of securities to assets of 10 percent, while the high-cost failures had 9 percent.  One year earlier the low-cost failures had held a median 15 percent in securities, while the high-cost failures had held even less—8 percent.  In both periods, there was a statistically significant correlation between securities holdings and FDIC losses (–0.27 as of the PCA date; –0.28 one year earlier).  The median for single-family residential mortgages was 16 percent for the low-cost failures and 9 percent for the high-cost failures.  The year before falling below the PCA threshold, both groups had held slightly more: 17 percent and 10 percent, respectively.  In both periods the differences in medians and the correlations with the FDIC loss rate were significant.  The two groups showed a slight, statistically insignificant difference in commercial real estate (excluding multifamily residential properties) as a percentage of assets.  The low-cost failures had a median 16 percent of assets in commercial real estate, while the high-cost failures had 18 percent; both medians had increased by 2 percentage points from the year before.  These mortgages are much more difficult to dispose of when an institution tries to shrink as capital becomes scarce.

    Table 3

    Nonperforming assets, including noncurrent loans and OREO, were dramatically different for the two groups of failures.  The median percentage of nonperforming assets to total assets for the low-cost failures was 6.44 percent, up from 4.32 percent one year earlier.  The median percentage of nonperforming assets for the high-cost failures was 12.92 percent, up from 9.79 percent one year earlier.  The high-cost failures started at a higher level and remained about double the median rate of the low-cost failures.  There was a 0.33 correlation between nonperforming assets and the FDIC loss rate as of the PCA date, and a 0.26 correlation one year earlier.  The differences in medians and the correlations were statistically significant in both periods. 

    Table 4

    A primary cause of these differences was the level of noncurrent commercial real estate, which was significantly higher for the high-cost failures one year before they fell below the PCA limit.  The median low-cost failure had no noncurrent commercial real estate, while the high-cost failures had a median noncurrent rate on commercial real estate of 3.93 percent.  Commercial real estate probably influenced the OREO levels, which were significantly higher for the high-cost failures in both periods.  As of the PCA date, the high-cost failures had a median 4.17 percent of assets in OREO, whereas the low-cost failures had only 0.46 percent.  Total noncurrent loans exhibited a pattern similar to that of all nonperforming assets, except that in the year before the institutions fell below the PCA threshold the difference was insignificant.  At the PCA date, the median level of noncurrent loans as a percentage of total loans was 5.67 percent for the low-cost failures and 11.37 percent for the high-cost failures.  The median net charge-off rate on loans during the last four quarters of operation was 3.85 percent for the low-cost failures and 3.09 percent for the high-cost failures—not a significant difference.  Despite similar net charge-offs, the high-cost failures left behind higher levels of noncurrent loans.

    At failure, reserves and capital serve to cover losses in the loan portfolio.  In this respect, the groups looked similar.  The median ratio of equity capital plus reserves to assets was 5.79 percent for the low-cost failures and a lower 3.95 percent for the high-cost failures (an insignificant difference).  One year earlier these ratios were within 5 basis points of each other, at 8.19 percent and 8.14 percent, respectively.  However, the high-cost failures needed more reserves to cover their higher level of noncurrent loans.  The coverage ratio, reserves to noncurrent loans, was much lower for the high-cost failures.  The median coverage ratio for the low-cost failures was 69.03 percent, while the high-cost failures had just 37.85 percent—a statistically significant difference of 31.18 percentage points.  One year earlier, the low-cost failures had a less favorable median of 58.67 percent, while the high-cost failures recorded virtually the same level, 37.83 percent; the difference between the groups was insignificant.  Surprisingly, the correlation between the coverage ratio and the FDIC loss rate was insignificant as of the PCA date.  One year earlier, it was –0.19 and significant at the 10 percent level.

    Table 5

    During the year leading up to the PCA date, the correlation between ROA and the FDIC loss rate was –0.26 (negative, as expected) and significant; it was insignificant one year earlier.  The low-cost failures had a median loss of 5.23 percent of assets in the year leading up to the PCA date, while the high-cost failures had a loss of 5.92 percent of assets.  The difference in medians was relatively small (0.69 percentage points) but significant at the 10 percent level.  The net interest margin and the yield on earning assets favored the high-cost failures in both periods, and some of the differences were significant.  One possible explanation for this result is that the high-cost institutions might have accrued more interest that was never subsequently collected, but this is hard to determine from the data we collected.  The low-cost failures had a lower cost of funds in both periods:  the difference in medians was 47 basis points (3.69 percent versus 4.16 percent) and significant in the institutions’ last year of operations before falling below the PCA limit.  During that period, the median ratio of noninterest income to earning assets favored the high-cost failures by 66 basis points but was statistically insignificant.  The median ratio of noninterest expenses to earning assets favored the low-cost failures by 136 basis points.  Although the difference in medians was insignificant, the correlation was 0.21 and significant.

    The low-cost failures operated in markets with much lower levels of problem banks than the high-cost failures.  For each market in which the failing institutions operated, we measured the level of deposits held by problem banks as of the June before the institutions fell below the PCA limit.  The low-cost failures registered a median index of just 0.01, while the high-cost failures showed an index of 0.11 for their markets.  This significant difference indicates that the markets in which the high-cost failures operated were troubled enough to stress other institutions as well as their own resources.43  This relates to an important characteristic of the sample.  While the near-failures and low-cost failures were more or less evenly distributed over the sample period, most of the high-cost failures occurred in 1994 and 1995.  Figure 2 demonstrates this phenomenon.
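    The construction of the problem bank index is described in footnote 43, which is not reproduced here.  The sketch below shows one plausible construction (a deposit-weighted share of market deposits held by problem banks), using hypothetical numbers; it illustrates the idea and is not the authors’ method:

```python
# Hedged sketch: one plausible reading of the problem bank index described in
# the text.  For each market a bank operates in, compute the share of that
# market's deposits held by problem banks, then weight those shares by the
# bank's own deposits in each market.  Figures below are hypothetical.

def problem_bank_index(markets):
    """markets: list of (own_deposits, problem_bank_deposits, market_deposits)
    tuples, one per market.  Returns the bank's deposit-weighted share of
    market deposits held by problem banks."""
    own_total = sum(m[0] for m in markets)
    return sum((own / own_total) * (problem / market)
               for own, problem, market in markets)

# A hypothetical bank split evenly across two troubled markets.
bank = [
    (50.0, 200.0, 2000.0),   # market A: problem banks hold 10% of deposits
    (50.0, 240.0, 2000.0),   # market B: problem banks hold 12% of deposits
]
print(round(problem_bank_index(bank), 2))  # 0.11
```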

    Figure 2

    Therefore, some of the differences between the high-cost failures and the other institutions were probably related to the economy, industry conditions, and/or regime changes rather than to the characteristics of the banks themselves.  As confirmed by the problem bank index, many of the high-cost failures occurred when a substantial number of nearby banks were experiencing difficulties and some markets (notably commercial real estate, particularly in New England) were still suffering from excess supply.  When our sample period began (January 1994), over 10 percent of BIF member banks had CAMEL ratings of 3, 4, or 5 (indicating that the banks had significant problems); by year-end 1995, only 5 percent of BIF members had such ratings.

    These results indicate that failures may well be more expensive during periods of stress—regardless of PCA.44  Periods of stress are often characterized by large, and sometimes sudden, shifts in market values.  For example, fixed-rate mortgage loan values plummeted in the late 1970s and early 1980s, farm prices fell sharply during the agricultural crisis in the mid-1980s, and commercial real estate prices plunged in Texas and New England during regional recessions.  Some of the high-cost failures may have become market-value insolvent before the markets bottomed out.45  Because markets are often thin during periods of stress, asset valuation becomes more difficult as well.  Therefore, it may not be feasible or desirable to create a regulatory regime that results in near-zero losses to the insurance funds during periods of stress.46

    We briefly looked at the relationship between coverage ratios and industry stress.  Coverage ratios were not correlated with the problem bank index.  As shown in figure 3, coverage ratios varied widely over the sample (both for failed banks and near-failures) throughout the sample period. 

    In addition, some of the costs experienced by the high-cost failures might be attributable to changes in the regulatory regime.  Most banks that crossed the 2 percent PCA threshold after 1995 would have experienced their entire period of decline after FDICIA had been passed and the enabling regulations were in place.  Those banks that crossed the threshold in 1994 (and perhaps also in 1995) might have had a less rigorous incentive structure in place during the early phases of their decline.   

    Figure 3

    We also investigated the level of classified loans in each group, based on the last full examination before the institutions fell below the PCA limit, but none of our measures showed a significant difference between the groups.  The median low-cost failure had 12.20 percent of its loans classified by examiners at its last full examination, while the median high-cost failure had 13.77 percent.  Based on classified loans, we estimated appropriate levels of reserves and compared these with the actual reserves reported.  The median low-cost failure reported reserves that were 114 percent of the estimated level, while the median high-cost failure reported reserves of 97 percent of the estimated level.  This difference was not significant, but it is consistent with the higher ratio of reserves to noncurrent loans held by the low-cost failures.

    As in the comparison between failures and near-failures, the coverage ratio and the level of nonperforming assets were important characteristics that distinguished between low-cost and high-cost failures.  These items appear to be the most fruitful ones to consider for any policy changes that might reduce loss rates for failed banks.  There were also important differences related to industry stress and the timing of the failures.  We found evidence that failure costs were influenced by industry conditions.  However, changes in the regulatory treatment of seriously troubled banks may not influence the level of industry stress.  Some differences in asset composition and performance were also statistically significant and may prove useful. 

    Comparisons at Resolution

    We examined the number of institutions that submitted bids at failure.  The median number of bidders was four for both the low-cost and high-cost failures, and the distribution around the median was similar.  Thus, it appears that the market exposure was sufficient for both groups.47  The bid results (shown in tables 6 and 7) were somewhat different.  As anticipated, low-cost failures generally had deposit franchises that were worth more than those of high-cost failures.  At 2.88 percent, the median bid-to-deposit ratio for low-cost failures was 51 percent higher than the median for high-cost failures.  Although the differences in medians were insignificant, we found a relatively strong (–0.22) and significant correlation between this ratio and the FDIC loss rate.  Even so, the differences explain only a relatively small portion of the overall cost differences.

    Comparisons of Receivership Performance

    Table 6 provides comparisons of receivership performance and table 7 provides correlation statistics to loss rates.  As anticipated, there was a strong relationship between asset loss rates and the FDIC loss rate.  The asset type with the largest losses was OREO: even the low-cost failures suffered a 21 percent median loss rate on sales of those assets.  The asset types with the largest differences between the two groups were OREO (21.03 percentage points), C&I loans (13.11 percentage points), and other assets (10.14 percentage points).  The asset type with the strongest correlation with the FDIC loss rate was C&I loans (0.44).  The correlations were significant for most asset types.  Thus it appears that asset quality at the high-cost institutions was worse across the board.

    Table 6

    The median loss rate on total assets for high-cost institutions was more than 2 1/2 times the loss rate for low-cost institutions.  The difference exceeded the differences for most asset types, indicating that the high-cost failures held more of their portfolios in the types of assets that experienced significant losses.48  This comports with our comparisons of low-cost and high-cost failures before failure (presented above).49

    Table 7

    The receivership data showed that reserve ratios were about the same for the two groups (3 percent of total assets).  Although the reserve ratios were much higher than industry averages (as a percentage of loans), reserves were much lower than the asset losses experienced by the receiverships.  Reserves covered only 29 percent of the losses for the low-cost failures and a mere 14 percent for the high-cost failures.50  Just as the coverage ratio was significantly higher for the low-cost failures, so were the reserves relative to actual losses.  Many of these banks, and particularly the high-cost banks, may have had inadequate reserves; however, we do not have enough information to determine this with certainty.51
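    The shortfall is straightforward to quantify.  In the sketch below, the loss rates are hypothetical values chosen only to reproduce coverage percentages like those reported; they are not figures from the receivership data:

```python
# Illustrative arithmetic only: pre-failure reserves of about 3 percent of
# assets absorb only a fraction of realized receivership losses.  The loss
# rates below are hypothetical, chosen to illustrate coverage percentages
# similar to those in the text.

def share_of_losses_covered(reserves_pct_assets: float,
                            losses_pct_assets: float) -> float:
    """Fraction of realized asset losses absorbed by pre-failure reserves."""
    return reserves_pct_assets / losses_pct_assets

print(round(share_of_losses_covered(3.0, 10.5), 2))  # 0.29
print(round(share_of_losses_covered(3.0, 21.5), 2))  # 0.14
```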

    Receivership income, receivership expenses, and holding costs were all much higher for the high-cost institutions, and both the correlations and the differences in medians were highly significant.52  There was almost no receivership income, and there were almost no holding costs, for the low-cost institutions.  For the high-cost institutions, median receivership income over the life of the receivership was 4.58 percent of total assets at failure, and median holding costs over the life of the receivership were 3.34 percent of total assets at failure.53  The difference in median receivership expenses to total assets at failure was even larger:  2.97 percent for low-cost failures versus 11.24 percent for high-cost failures.  At 0.48, the correlation between receivership expenses and the FDIC loss rate was quite strong.  Even with the extra income as an offset, it is clear that the net effect of these items made up a large portion of the cost differential between these two groups of institutions.54

    These differences were closely tied to the enormous divergence in the portion of assets passed (that is, sold) to the acquirer.55  The FDIC passed 62.99 percent of assets to acquirers for the median low-cost institution, but only 24.84 percent for the median high-cost institution (figure 4).56  For 75 percent of the high-cost institutions, the FDIC retained over one-half of the assets in the receivership.  On the surface, the cause of these immense differences appears to be the FDIC’s sales methods.  The true cause is not as simple.

    One likely reason for the difference in assets sold to the acquirer is the quality of the assets held by the banks:  assets of the high-cost banks were generally riskier and of lower quality.  The markets for the riskier assets are typically thinner, and it may be harder to estimate the market price.  The due-diligence effort required for lower-quality assets is more extensive as well.  Thus there might have been fewer interested buyers, and the odds of finding a buyer to purchase the deposits and the riskier assets simultaneously—and on a relatively tight time line—would probably have been much smaller.

    Figure 4

    The marketplace differences discussed above would almost certainly have affected the asset pass rates and the loss rates for some types of assets.  As noted above, a relatively large proportion of the high-cost failures occurred when the industry was experiencing distress and the markets were still absorbing large amounts of troubled assets.  Our analysis suggests that real estate loans were a large problem for the high-cost failures.  Vacancy rates for office space dropped precipitously during the early years of our sample period, and the volume of sales transactions for office space increased dramatically in 1998.57  Therefore, some of the variance in receivership performance across the groups almost certainly was related to differences in market conditions.  The marketing process works best when the economy and the industry are performing well.58

    Policy changes made at the FDIC may also have contributed to this result.  The FDIC combined its Division of Resolutions and its Division of Depositor and Asset Services in 1996.  The merger facilitated a more cohesive sales approach that starts before failure and continues past resolution with few disruptions from changes in staff or strategy at resolution.  In 1997, the FDIC adopted a new asset sales procedure called “joint asset marketing.”  It emphasizes marketing homogeneous pools of assets to a large number of potential buyers quickly (preferably at resolution).  The new procedures, combined with the strong economy, have considerably shortened the pace of asset sales from FDIC receiverships since the mid-1990s.

    In summary, the results of the comparison of resolution and receivership outcomes are consistent with the results from the pre-failure period.  For the most part, these results merely demonstrate the way losses are realized when banks fail.  However, the differences in asset pass rates and receivership performance may provide some evidence that FDIC losses are influenced by market conditions.  To the extent that this is the case, it may be difficult or impossible to develop changes in regulatory treatment that yield near-zero losses to the insurance funds during times of industry stress.

    Low-cost Failures versus Near-failures

    A comparison between low-cost failures and near-failures allows us to search, at the margin, for differences that distinguish failing banks from banks that can survive serious problems.  This comparison investigates the robustness of the earlier comparison between all failures and near-failures.  Any marginal distinctions may provide useful insights when considering marginal changes in the PCA threshold, or they might be helpful to examiners of similar banks.  Our comparisons, shown in tables 8 and 9, reveal only a few significant differences between the low-cost failures and the near-failures.  Some of these differences, such as capital levels, may relate to the fact that whereas some failures never fell below the PCA limit, all near-failures fell below the limit. 

    Median capital levels of the low-cost failures were significantly higher than those of the near-failures.  The median equity capital ratio for the low-cost failures was 2.09 percent at our analysis date, while the near-failures reported 0.97 percent when they fell below the PCA limit.  The core capital (leverage) ratio also was significantly higher for the low-cost failures, at 2.03 percent compared with 0.87 percent for the near-failures.  And deposits of the two groups were significantly different: deposits as a percentage of assets were a median 95 percent for low-cost failures but 98 percent for the near-failures.  This difference was probably caused by the difference in capital. 

    In terms of performance, the overall ROA of the two groups was not significantly different, but the operating ROA was.  The difference between overall ROA and operating ROA by our measures stemmed from gains on the sales of securities and extraordinary items.  The low-cost failures reported a loss on assets of 5.23 percent, but the loss was smaller for operating earnings—4.85 percent.  The near-failures reported a median loss of 5.76 percent of assets, but a median operating loss of 6.04 percent of assets.  Although we did not calculate a ratio for the level of gains on the sales of securities or extraordinary items, the operating results imply that non-operating items added to the low-cost failures’ overall loss, while the near-failures reported non-operating gains that reduced theirs.  The low-cost failures also reported significantly lower median costs of funding earning assets, at 3.69 percent, while the near-failures reported 4.37 percent.

    Table 8

    The levels of nonperforming assets reported by these groups did not differ very much, but the low-cost failures reported significantly lower net charge-offs on real estate loans.  Their net charge-offs were just 0.05 percent of real estate loans, while the near-failures had net charge-offs that were 1.26 percent of real estate loans over the last four quarters before the institutions fell below the PCA limit.  Both at the PCA date and one year earlier, the median coverage ratio was higher for the low-cost failures than for the near-failures.  The differences in medians were relatively large (69.03 versus 63.98 at the PCA date; 58.67 versus 43.47 one year earlier), but not statistically significant.

    Because the results for low-cost failures were remarkably similar—and in many ways superior—to the results for near-failures, one might conclude that the failed institutions could have survived, given the chance.  However, only two of these failures resulted in no costs to the FDIC; our review of these two extraordinary cases indicates that failure was appropriate.59  The remaining low-cost failures had loss rates that ranged from 3 percent to 12 percent of assets.  On the other hand, without the sale to another organization or an infusion of capital, some of the near-failures could easily have joined our list of failures.  All in all, this comparison highlights the difficulty of predicting failure.  Given the lack of quantitative differences between these two groups, it is hard to identify changes in the PCA threshold that would consistently improve the cost tradeoffs.

    Table 9

    Near-failures Purchased versus Near-failures That Survived

    We split the near-failures into groups based on the way they survived.  Twenty-one institutions were absorbed into other organizations within one year after they fell below the PCA limit, and eighteen near-failures survived their problems and remained independent for at least one year.  We expected that the institutions that survived might have been in slightly better condition than the institutions absorbed by other organizations, but very few differences were significant (see tables 10 and 11).

    Both the median and the average asset size of the survivors were much greater than those of the institutions purchased, but the differences in asset size were not statistically significant.  As of the PCA date, the 18 survivors had a median asset size of $91 million; the 21 institutions purchased had a median asset size of $38 million.

    The survivors had significantly higher noninterest income (which is common for larger institutions) in the year before they fell below the PCA limit.  They reported a median 2.19 percent of earning assets in noninterest income, while the institutions that were purchased reported 0.83 percent.  This difference dwindled as the survivors approached the PCA threshold, and it was less significant by the time they fell below the PCA limit.  At that time they reported 1.49 percent, while the purchased institutions reported 0.83 percent.

    Table 10

    The near-failures that were purchased held a higher median percentage of commercial real estate and C&I loans, but this difference was not significant.  Capital plus reserves were higher for the purchased institutions, but again, the difference was not significant.  The level of nonperforming assets was lower for the purchased institutions, but not significantly.  The performance of the purchased institutions seemed worse because of higher losses, higher net charge-offs, and higher provisions for loan losses, but none of these differences was significant.  Buyers may have been attracted to the acquired institutions' higher levels of capital and reserves and lower nonperforming assets, but the weaker performance might have given a buyer pause. 

    Tracking Near-failures That Survived One Year Later

    For the 18 near-failures that survived, we compared their results during the year they crossed the PCA threshold to the following year; the results are in table 12.  This group showed many significant differences across the two periods, mainly related to capital and performance. 

    Table 11

    All measures of capital were significantly higher, largely because of capital infusions.  Equity capital increased from a median of 1.00 percent as of the PCA date to 5.28 percent one year later.  These results provide some evidence of the importance of capital infusions that are large enough to ensure survival.  Barakova and Carey (2001) found that both failed banks and near-failures had capital infusions, but the infusions were larger for the near-failures.  We found that some of the failed institutions in our sample also had capital infusions, but the amounts were insufficient.  Regulators might be able to use this evidence to encourage troubled banks to seek enough capital to materially improve their survival chances.

    The capital position of the eighteen survivors improved much faster than nonperforming assets.  Nonperforming assets as a percentage of total assets improved from a median level of 7.89 percent to 4.80 percent, but this improvement was not statistically significant.  Behind these numbers was a significant improvement in noncurrent C&I loans, down from 7.62 percent to 3.56 percent. 

    Over this one-year time span performance improved significantly, but these 18 institutions still reported a median loss on assets of 0.73 percent, down from a loss of 4.20 percent one year earlier.  The lower provision expense was a driving force in reducing losses.  Provision expenses fell significantly from a median of 1.96 percent of average assets to 0.60 percent.  Net charge-offs declined, but not significantly.  Provisions declined significantly from a median of 119 percent of net charge-offs to 54 percent one year later.

    Table 12

    Concluding Remarks

    Contrary to the expectations of many economists, PCA has not resulted in the FDIC’s experiencing little or no loss when a depository institution fails.  From 1994 to 2000, most failures imposed significant costs (as a percentage of assets) on the insurance funds.  However, 55 percent of the FDIC-insured institutions that fell below the PCA threshold for critically undercapitalized institutions avoided failure, and almost 30 percent of the failed institutions never breached the PCA threshold.

    We explored the tradeoffs associated with the PCA threshold for critically undercapitalized institutions.  When market-value solvent institutions breach the threshold, both financial costs and nonfinancial costs (loss of freedom) are imposed on many of the bank owners and, to a lesser extent, supervisors.  These costs are probably offset by savings associated with prompt closure (for failed institutions) and with higher survival rates (for troubled institutions, if the threshold successfully aids in recapitalizing some institutions that would otherwise fail).  Ideally, the PCA threshold would be set at the level that balances these tradeoffs to yield the highest net benefit to society.  Because the tradeoffs are difficult to measure, we do not know the optimum level.  However, the decidedly mixed outcomes of the institutions that have breached the threshold to date provide us with some assurance that the current threshold is not too far off the mark.  If almost all of the institutions that breached the threshold had survived or if almost all of them had failed, then it would be more likely that an adjustment to the threshold level would yield substantial benefits. 

    We investigated the differences between the critically undercapitalized institutions that did and did not fail, in hopes of finding information that could be used to improve the regulatory treatment of seriously troubled banks.  The most meaningful differences across outcomes are related to nonperforming assets, coverage ratios (calculated as reserves divided by noncurrent loans), and the local economy.  The failed institutions had median nonperforming ratios that were significantly higher than those of the near-failures, particularly for real estate loans.  The high-cost failures had nonperforming levels that were roughly double those of the low-cost failures.  The nonperforming levels of the low-cost failures and of the near-failures were roughly the same.

    Of all the measures we tested, the coverage ratio appears to be the most useful indicator that serious losses may await the FDIC.  At the PCA date, the median coverage ratio was 43 percent for the failures and 64 percent for the near-failures.  However, the median coverage ratio was higher for the low-cost failures than for the near-failures (69 percent versus 64 percent).  The median coverage ratio for high-cost failures was only 38 percent.
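    The coverage ratio used throughout this article is simply reserves divided by noncurrent loans, expressed as a percentage.  A minimal sketch (this is not FDIC code, and the dollar figures are hypothetical):

```python
def coverage_ratio(reserves: float, noncurrent_loans: float) -> float:
    """Loan-loss reserves as a percentage of noncurrent loans."""
    if noncurrent_loans <= 0:
        raise ValueError("noncurrent loans must be positive")
    return 100.0 * reserves / noncurrent_loans

# A hypothetical bank holding $4.3 million in reserves against
# $10 million in noncurrent loans has a 43 percent coverage ratio,
# comparable to the median for the failures at the PCA date.
print(round(coverage_ratio(4.3, 10.0), 1))
```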

    The high-cost failures were much more likely to be located in areas where a relatively large number of banks were experiencing problems.  Unlike the low-cost failures and near-failures, they were also more likely to have occurred in 1994 and 1995—a period at the end of the banking crisis, when some markets for troubled assets (particularly commercial real estate markets) were sluggish.  Thus some of the differences in outcome are probably related to the marketplace rather than to individual bank characteristics.  During periods of stress, asset values sometimes experience steep declines, and slow-moving markets increase the difficulty of measuring asset values.  Because of these phenomena, the FDIC may experience higher loss rates during periods of stress than during good times, even with PCA.

    The differences in other items were smaller.  The failed banks had somewhat riskier asset portfolios than the near-failures; likewise, the high-cost failures had riskier asset portfolios than the low-cost failures.  The performance measures were similar across groups.  There were few differences between the low-cost failures and near-failures, and—surprisingly—the differences tended to favor the low-cost failures.  The near-failures that were purchased were also similar to those that survived, except that the surviving institutions were larger. 

    There are several reasons why these results may not be robust in the future.  First, our sample period did not include a full business cycle, and we have found evidence, albeit limited, that the results vary across the business cycle.  Second, a disproportionate number of the high-cost failures occurred in 1994–1995; thus, regime changes (which we were unable to isolate) may have influenced the results.  Third, we did not test for intangible items such as the quality of bank management, which could be important.  Finally, historical results do not always provide a good indicator of future performance.


    The PCA regulations emphasize capital and not reserves.  Because we found that loan-loss reserves differentiate relatively strong and weak institutions that have already fallen below the PCA threshold, we think the level of reserves should be studied more closely.  Instead of trying to find a better threshold capital level for critically undercapitalized institutions, regulators may want to refine the rules governing reserves or limit the discretion of seriously troubled banks to set their own reserve levels.60  If troubled banks consistently adjust their reserves so they are always adequate to absorb the estimated credit losses associated with the banks’ loan portfolios, then capital would become a better measure of condition. 

    We recommend that regulators attempt to develop a formula for minimum reserve levels that could potentially be used to improve the supervision of seriously troubled institutions.61  Our results give us hope that such a formula would be feasible.62  If such a formula were developed, regulators could require that seriously troubled banks set reserve levels by using the higher of their normal reserving procedures or the formulaic approach—at least for calculating regulatory capital.63  Alternatively, seriously troubled banks could be allowed to record reserve levels that fell below the formula only if approved by an examiner or the FDIC.64  Because many troubled institutions are slow to adjust reserve levels for deteriorating conditions, this approach could potentially reduce insurance fund costs by hastening the closure of non-viable banks.  This approach might also improve the tradeoffs associated with the 2 percent PCA threshold, since it appears that the high-cost failures would be more seriously affected by such a change than the low-cost failures or the near-failures.  Alternatively, regulators could adopt other, less prescriptive ways to use this information in the supervisory process.
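    The "higher of" rule proposed above reduces to taking a maximum.  The sketch below assumes the existence of a supervisory formula that produces a floor reserve level; no such formula exists today, so the floor argument is purely hypothetical:

```python
def regulatory_reserves(bank_reserves: float, formula_floor: float) -> float:
    """For a seriously troubled bank, use the larger of the bank's own
    reserve estimate and the formulaic minimum when computing regulatory
    capital.  `formula_floor` stands in for a supervisory formula that
    has not yet been developed."""
    return max(bank_reserves, formula_floor)
```

Under this rule, a bank that under-reserves relative to the floor would see its regulatory capital reduced accordingly, while a bank that reserves adequately would be unaffected.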

    Appendix 1

    Summary of the PCA Provisions in FDICIA

    The PCA provisions in FDICIA require that banking regulators take prespecified actions whenever bank capital levels fall below established levels.  Table A1-1 summarizes the requirements for each capital level, except for critically undercapitalized institutions (defined as institutions with a leverage ratio below 2 percent).

    Table A1-1

    Table A1-2 provides a summary of the required actions and limits set by FDICIA.  The limits are additive.  For example, the restrictions for a significantly undercapitalized bank include those for an undercapitalized institution as well.

    Table A1-2

    Appendix 2

    Details on Selected Calculations

    This appendix discusses selected calculations and accounting policies that influence the data sources used for this article.  Some of the accounting policies inhibit the comparability of failed-bank data across time.  The more material items are discussed here.

    Calculation of FDIC Loss

    At resolution, the FDIC generally bases its loss estimate on an Asset Valuation Review (AVR).  The AVR estimate of loss is calculated as the difference between the FDIC’s anticipated outlay and the net present value of the funds recovered from the receivership.65  As the receivership progresses, the loss calculation (as published by the FDIC in the Failed Bank Cost Analysis (FBCA)) changes somewhat:  it is essentially calculated as the FDIC’s resolution outlays minus the funds recovered from the receivership and the estimated funds to be recovered from the receivership in the future.66  Both cost figures exclude pre-closing expenses associated with preparing the bank for resolution and determining which deposit accounts are insured.67  These items tend to be a relatively small component of FDIC losses.
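    The two loss measures described above differ mainly in discounting and timing.  The sketch below restates them as arithmetic; the function and argument names are ours, not the FDIC's, and the figures in the example are hypothetical:

```python
def avr_loss(outlay: float, pv_recoveries: float) -> float:
    """AVR estimate at resolution: the FDIC's anticipated outlay minus
    the net present value of funds recovered from the receivership."""
    return outlay - pv_recoveries

def fbca_loss(outlays: float, recovered: float, est_future: float) -> float:
    """FBCA figure as the receivership progresses: resolution outlays
    minus funds already recovered and estimated future recoveries
    (undiscounted)."""
    return outlays - (recovered + est_future)

# Hypothetical: a $100M outlay, $60M recovered so far, $25M expected.
print(fbca_loss(100.0, 60.0, 25.0))
```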

    The FDIC is required to fund the insured deposits at the time of failure; however, the receivership pays dividends to the FDIC and other creditors as assets are sold (after meeting expenses).  Thus, the FDIC has working capital requirements during the interim period between the failure date and the dates when dividends are paid by the receivership.

    Because funding costs are a real cost to the FDIC (in the form of lost interest income to the insurance fund) but are largely excluded from the FBCA, we estimated the funding cost for each bank where the FBCA figure was the most up-to-date published figure.68  To make the estimate, we collected the FDIC claim and dividend payments made through year-end 2000.  For open receiverships, we assumed that the remaining asset value (based on discounted cash flow, net of expenses) would be paid to the FDIC on December 31, 2000.  Then, we calculated the interest that the receivership owed to the FDIC on the portion of its claim that either had been paid or that we assumed would have been paid on December 31, 2000.  We used the FDIC’s average yield on its investments as the interest rate.  We also treated two items included in the FBCA figure as holding costs:  interest earned on the receivership’s cash balances (which reduced holding costs) and interest paid to other creditors by the receivership (which increased holding costs).  This allows for a more accurate comparison of the economic costs across receiverships, and of the initial cost estimate and the latest available cost estimate.69  Across the full sample of banks, the median difference between the FBCA cost and the cost used in this article was 3.75 percent of total assets as of the quarter-end date before failure. 
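    As a rough illustration of the funding-cost estimate described above, interest accrues on each portion of the FDIC's claim from the failure date until the receivership pays the corresponding dividend, at the FDIC's average investment yield.  This sketch uses simple interest, hypothetical dates and amounts, and our own function names; the actual calculation is more involved:

```python
from datetime import date

def funding_cost(failure: date,
                 dividends: list[tuple[date, float]],
                 avg_yield: float) -> float:
    """Simple-interest approximation of the FDIC's funding cost: interest
    on each dividend amount for the period it was outstanding."""
    cost = 0.0
    for paid_on, amount in dividends:
        years = (paid_on - failure).days / 365.25
        cost += amount * avg_yield * years
    return cost

# Hypothetical receivership: $50M paid after one year and $30M after
# two years, with a 5 percent average yield on FDIC investments.
cost = funding_cost(date(1994, 6, 30),
                    [(date(1995, 6, 30), 50.0), (date(1996, 6, 30), 30.0)],
                    0.05)
```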

    Differences between GAAP and the FDIC’s Receivership Accounting Policies

    Whereas open banks normally prepare financial statements and Call Reports using accounting principles that are predicated on their being going concerns, receiverships use cash-basis accounting.  The primary difference between bank and FDIC receivership accounting policies relates to the treatment of accrued items, reserves for troubled assets, and intangible assets.  Receiverships do not typically record accrued items or intangible assets, except in cases where certain intangible assets are recorded on the books at one dollar for control purposes.70  Receiverships record loans at gross book value, thereby reversing partial charge-offs or reserves recorded by the failed bank.71  These differences sometimes result in large changes in the equity ratio of a bank when the basis of accounting shifts at failure, at times resulting in a receivership’s initially showing more equity than the failed bank.  As assets are sold, receivership equity inevitably declines, reflecting the asset recovery received. 

    At closing, the FDIC calculates the institution’s closing balance sheet and then makes adjustments to conform to the receivership accounting policies.  Although we collected both initial balances, we relied primarily on the institution’s closing balance sheet for analysis because it is more comparable to the Call Report.  Thus, all references to the bank’s balance sheet at closing exclude the adjustments made to conform to receivership accounting policies unless an exception is cited.

    The data on asset composition as of the failure date and during the receivership should be interpreted with care, partly because of differences in the accounting basis (discussed above) and partly because of different asset categories.72  It appears likely that many—perhaps even all—of the failed banks used asset categories for their general ledgers that did not align closely with the asset categories used on the Call Report.73  Differences in asset category may be quite small for securities and other real estate owned (OREO) but appear much larger for loans.  Other differences may occur because of limits in data availability and time.  Therefore, the results should be interpreted with these caveats in mind.

    The financial statements of a receivership differ substantially from those of an ongoing bank or thrift in other ways as well.  For example, the liabilities are grouped according to claims that have been proven (or remain unproven), and the income statement does not include interest expenses for most classes of creditors.74

    Asset Losses, Charge-offs, and Reserves

    Asset losses are a primary factor that determines the FDIC loss.  There are substantial disparities in the information collected about asset losses of a failed bank over its life cycle.  This section discusses certain adjustments made to the receivership loss figures to improve the comparability across the bank’s life cycle and summarizes differences in policies and practices between the asset losses estimated at resolution and the asset losses recorded by the receivership.

    In preparation for resolution, the FDIC prepares an Asset Valuation Review (AVR) that estimates asset losses for the entire portfolio.  These losses are based on total assets—gross of reserves—as per the Call Report.75  To prepare the estimates, analysts review the available documentation, project all cash flows (including associated income and expenses), and discount the cash flows using a market-based discount rate.  These estimates are made for various pools of assets, which are typically packaged to facilitate the marketing process.  Because these packages do not always align with the asset categories recorded on the Call Report, we have largely omitted such comparisons.

    Receiverships record asset losses as the difference between the gross asset balance (after reversing reserves and, to some extent, charge-offs) and the sales price.  The receivership asset-loss figures are not discounted, and they exclude sales expenses and net income (loss) received prior to the sale.76  Because both the definition of asset loss and the asset categories differ between the AVR and the receivership, we made no attempt to analyze changes in loss estimates between resolution and year-end 2000 by asset category.

    Receiverships record judgments (awards made in a court) and certain deficiency balances (charged-off assets) at gross book value, whereas they are typically not recorded at all on the Call Reports.  For this analysis, we exclude judgments and deficiencies (both balances and losses) recorded by the receiverships to improve the comparability of losses over the life of the failed bank.

    Some of the receiverships had unsold assets as of year-end 2000.  To facilitate comparisons across the full sample of failed institutions, we estimated future losses by asset type and incorporated these estimates into the asset loss rates for the receiverships.  To prepare the estimates, we relied upon supporting documents for the FDIC’s year-end financial statements for 2000. 

    In addition to differences in practices and policies, the asset-loss data are difficult to interpret because of activity that occurs between the last Call Report and the failure date.  Because no financial statements are filed during this period, we have no information about charge-offs, loss provisions, asset sales, or realized losses.  If a bank sells a substantial amount of assets during the period, we have no record of the transaction.  We can merely make inferences based on balance sheet changes between these dates.  Comparisons also become difficult when interest rates or the health of the economy change during the course of the resolution and the receivership.

    In summary, comparisons of financial data over time and across the stages of a failed bank (pre-failure, failure, receivership) are difficult to interpret because of differences in accounting policy and data collection, changes in the economy, and missing data for a brief period.  For a typical failed bank, the initial equity balance of the receivership is markedly higher than the closing equity found on the Call Report.  Because many receiverships begin with positive equity balances, losses recorded on the income statement of the receiverships usually exceed the FDIC’s loss on its receivership claim.77  Asset losses recorded by the receivership may be either higher or lower than the reserve levels recorded by the bank and the original AVR asset-loss estimates because of different calculation methods—even in cases where original expectations are met exactly.  Therefore, one must be careful when interpreting comparisons of results over the life of a failed bank.

    Appendix 3

    Comparison of Asset Composition:  Call Report vs. Receivership

    We collected high-level balance sheet data as recorded both by the failed bank’s general ledger on the date of failure, and by the initial balance sheet of the receivership (that is, at failure), after adjustments were made over the life of the receivership.78  We also compared these to the final Call Report data filed by the institution.  The comparison is found in table A3-1.  All figures are shown as a percentage of total assets.

    The data on asset composition at receivership are difficult to interpret, partly because of accounting differences and partly because of different asset categories.  Appendix 2 discusses these obstacles to straightforward comparison.  The results should be interpreted with these caveats in mind.

    Both the low-cost and high-cost failures shrank between the final Call Report date and failure.  The median shrinkage was 7.10 percent for low-cost failures and 9.34 percent for high-cost failures.  Between failure and receivership, both groups experienced a small increase in assets (attributable largely to the reversal of reserves). 

    From the final Call Report date to failure, the median level of securities increased slightly for low-cost institutions and decreased slightly for high-cost institutions.  A review of the results for individual institutions indicates that a few of the low-cost institutions apparently sold a material amount of loans during the intervening period.

    The mortgage results were puzzling.  From the final Call Report date to failure, the median percentage of mortgages to total assets increased:  from 23.00 percent to 27.53 percent for low-cost failures, and from 17.58 percent to 22.09 percent for high-cost failures.  The increase continued in the receivership.  The change in median levels between the failure date and the receivership is negligible for the low-cost failures but large (22.09 percent at failure; 30.25 percent in receivership) for the high-cost failures.  Reviewing the results by institution, one infers that the largest factor is differences in asset category definitions.79  Some portion of the increase between the Call Report date and failure is probably attributable to reductions in other types of assets (thereby increasing the proportion of mortgages).

    Like the mortgage results, the C&I loan results were characterized by substantial swings that frequently appear to be changes in asset type definitions at failure.  The median percentages dropped for the low-cost failures.  For the high-cost failures, they increased between the Call Report date and failure but decreased during the receivership.

    The OREO results appear to be untainted by differences in asset category definitions.  Between the final Call Report and failure, there was a substantial change in the median ratio for high-cost failures (3.64 percent on the final Call Report, 5.69 percent at failure) but little change for the low-cost failures (0.69 percent on the final Call Report, 0.91 percent at failure).  The median increase in OREO during the receivership, calculated in percentage points of total assets at failure, was similar (1.96 percent for low-cost failures, 2.19 percent for high-cost failures).  These figures include foreclosure activity.

    There were few changes in reserves between the final Call Report date and failure.  Reserves are reversed in receivership.

    There were substantive changes in the median levels of other assets (including cash and fed funds).  From the final Call Report date to failure, the median level of other assets increased slightly for low-cost institutions (16.21 percent to 18.92 percent) and decreased slightly for high-cost institutions (19.54 percent to 14.24 percent).  Most institutions reduced their balances of other assets between the Call Report and failure, although a few institutions showed substantive increases because of apparent asset sales.  We were unable to ascertain the underlying reasons for the persistent reductions shortly before failure.  There were also large and persistent reductions in the receivership:  the most likely cause was write-offs of intangible assets, accrued interest, and prepaid expenses.

    Table A3-1


    References

    Aggarwal, Raj, and Kevin Jacques.  2000.  The Impact of FDICIA and Prompt Corrective Action on Bank Capital and Risk:  Estimates Using a Simultaneous Equations Model.  Journal of Banking and Finance 25:1139–60.

    Barakova, Irina, and Mark Carey.  2001.  How Quickly do Troubled U.S. Banks Recapitalize?  With Implications for Portfolio VaR Credit Loss Horizons.  Working paper presented at the 2001 Financial Management Association meetings.

    Barth, Mary E., Wayne R. Landsman, and James M. Wahlen.  1995.  Fair Value Accounting:  Effects on Banks’ Earning Volatility, Regulatory Capital, and Value of Contractual Cash Flows.  Journal of Banking and Finance 19:577–605.

    Benston, George J., and George G. Kaufman.  1997.  FDICIA after Five Years.  Journal of Economic Perspectives 11, no. 3:139–58.

    Berger, Alan N., K. K. King, and J. M. O’Brien.  1991.  The Limitations of Market Value Accounting and a More Realistic Alternative.  Journal of Banking and Finance 15:753–83.

    Carnell, Richard Scott.  1997a.  FDICIA After Five Years:  What has Worked and What has Not? in FDICIA: Bank Reform Five Years Later and Five Years Ahead, edited by George G. Kaufman, 11–16, JAI Press.

    ———.  1997b.  A Partial Antidote to Perverse Incentives:  The FDIC Improvement Act of 1991 in FDICIA: Bank Reform Five Years Later and Five Years Ahead, edited by George G. Kaufman, 199–233, JAI Press.

    Dahl, Drew, John P. O’Keefe, and Gerald A. Hanweck.  1998.  The Influence of Examiners and Auditors on Loan-Loss Recognition.  FDIC Banking Review 11, no. 4:10–25.

    Eisenbeis, Robert A., and Larry D. Wall.  2002.  Reforming Deposit Insurance and FDICIA.  Federal Reserve Bank of Atlanta Economic Review 87, no. 1:1–16.

    Federal Deposit Insurance Corporation (FDIC).  1997.  History of the Eighties—Lessons for the Future: An Examination of the Banking Crises of the 1980s and Early 1990s.  2 vols.  FDIC.

    Gilbert, R. Alton.  1992.  The Effects of Legislating Prompt Corrective Action on the Bank Insurance Fund.  Federal Reserve Bank of St. Louis Review 42, no. 4:3–22.

    Gilbert, R. Alton, and Levis A. Kochin.  1989.  Local Economic Effects of Bank Failures, Journal of Financial Services Research 3:333–45.

    Gunther, Jeffrey W., and Robert R. Moore.  2000.  Financial Statements and Reality:  Do Troubled Banks Tell All?  Federal Reserve Bank of Dallas Economic and Financial Review (Third Quarter), 30–35.

    Jones, David S., and Kathleen Kuester King.  1995.  The Implementation of Prompt Corrective Action: An Assessment.  Journal of Banking and Finance 19:491–510.

    Kaufman, George G.  1997.  FDICIA After Five Years:  What has Worked and What has Not?, in FDICIA: Bank Reform Five Years Later and Five Years Ahead, edited by George G. Kaufman, 35–43, JAI Press.

    Mailath, George J., and Loretta J. Mester.  1993.  When Do Regulators Close Banks?  When Should They?  Working Paper no. 93–10.  Federal Reserve Bank of Philadelphia.

    McDill, Kathleen.  2002.  Federal Deposit Insurance Corporation memo on the loss rates of bank failures over the business cycle (January 28).

    Mishkin, Frederick S.  1997.  Evaluating FDICIA in FDICIA:  Bank Reform Five Years Later and Five Years Ahead, edited by George G. Kaufman, 17–32, JAI Press.

    Peek, Joe, and Eric S. Rosengren.  1996.  The Use of Capital Ratios to Trigger Intervention in Problem Banks:  Too Little, Too Late.  Federal Reserve Bank of Boston New England Economic Review (September/October), 49–58.

    ———.  1997.  Will Legislated Early Intervention Prevent the Next Banking Crisis?  Southern Economic Journal (July), 268–80.

    Shibut, Lynn, and Timothy Critchfield.  2000.  An Analysis of Low Cost Failures.  Unpublished manuscript  (December 11).  FDIC.

    U.S. General Accounting Office (GAO).  1990.  Bank Insurance Fund: Additional Reserves and Reforms Needed to Strengthen the Fund.  GAO/AFMD-90-100. 

    ———.  1992.  Depository Institutions:  Flexible Accounting Rules Lead to Inflated Financial Reports.  GAO/AFMD-92-52. 

    ———.  1996.  Bank and Thrift Regulation:  Implementation of FDICIA’s Prompt Regulatory Action Provisions.  GAO/GGD-97-18. 

    U.S. Department of the Treasury.  1991.  Modernizing the Financial System:  Recommendations for Safer, More Competitive Banks.  Publication L, no. 101–73.

    Last Updated 7/25/2003