Monday, June 15, 2009

Mapping Credit Models to Actual Defaults, Key Issues and Implications

Posted on RiskCenter.com by Donald R. van Deventer:

Credit models fall into two broad classes. In the first class, a statistical technique like logistic regression is used to fit explanatory variables to a “default flag” that is 1 if the counterparty defaults in that period and 0 if there is no default. The second class involves a two-stage process. In stage 1, a theory of credit risk is used to derive a default probability for each counterparty. In stage 2, this theoretical default probability is benchmarked to actual default rates, which may be substantially different from what the theory implies. The first class of models doesn’t require benchmarking to actual defaults because the statistical technique accomplishes that automatically. In this blog post, we talk about key issues and implications of benchmarking the second class of models to actual default experience.
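To make the distinction concrete, here is a minimal sketch of the first class of models, assuming Python with scikit-learn and entirely synthetic data; the explanatory variables (leverage and return on assets) are hypothetical illustrations, not the variables of any particular commercial model.

```python
# Minimal sketch of the first model class: fit the default flag directly
# with logistic regression. All data below is synthetic, and the
# explanatory variables (leverage, return on assets) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
leverage = rng.uniform(0.0, 1.0, n)   # hypothetical covariate
roa = rng.normal(0.05, 0.10, n)       # hypothetical covariate
X = np.column_stack([leverage, roa])

# Default flag: 1 if the counterparty defaults in the period, 0 otherwise.
true_logit = -4.0 + 3.0 * leverage - 5.0 * roa
default_flag = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, default_flag)
fitted_pd = model.predict_proba(X)[:, 1]  # default probabilities, already
                                          # calibrated to the default flags
```

Because the regression is fit directly to the observed default flags, the fitted probabilities need no separate benchmarking step.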

There are a number of model types that require mapping to actual default experience:

  • The Merton model of risky debt, typically used for public firms
  • Internal and third party ratings
  • Retail credit scores

In the case of ratings (either internal or external) and credit scores, the rater or model builder may cite historical default rates associated with each rating or credit score level. Benchmarking allows a mapping from the rating or credit score to actual default experience over a time period that may differ from the period over which the ratings methodology or credit scoring technique was developed. Benchmarking is also essential due diligence, required by the Basel II guidelines, because it confirms that the ratings or credit scores perform with the accuracy asserted by those doing the ratings or model construction.
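A simple form of this due diligence is to compare the asserted default probability for each rating grade against the realized default rate over the benchmarking period. A hedged sketch, assuming pandas; the grades, observation counts, and asserted PDs below are invented for illustration:

```python
# Hedged sketch of benchmarking due diligence: compare asserted PDs by
# rating grade with realized default rates. Grades, counts, and PDs are
# invented for illustration.
import pandas as pd

grades = pd.DataFrame({
    "rating":      ["AA",   "A",    "BBB",  "BB",   "B"],
    "n_obs":       [4000,   6000,   8000,   5000,   2000],
    "defaults":    [1,      6,      24,     75,     100],
    "asserted_pd": [0.0002, 0.0010, 0.0030, 0.0150, 0.0500],
})
grades["realized_pd"] = grades["defaults"] / grades["n_obs"]
print(grades[["rating", "asserted_pd", "realized_pd"]])
```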

Most analysts do the mapping in such a way that the ordinal ranking of companies or individuals by riskiness is not changed. The default probabilities are simply adjusted up or down to better fit history: if A is rated more risky than B before the mapping to actual experience, this is still true after the mapping. A monotonic mapping of theoretical to actual default experience leaves unchanged accuracy measures like the receiver operating characteristic (ROC) accuracy ratio or Jorge Sobehart’s cumulative accuracy profile. Because these measures are a function only of the order in which counterparties are ranked, they are not changed by the mapping to actual default rates as long as that mapping is monotonic. Many credit market participants erroneously believe that the mapping process changes these accuracy measures. What the mapping does change is accuracy in another sense: the consistency of actual defaults with expected defaults, both over time and by default probability level.
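This invariance is easy to check numerically. A sketch assuming numpy and scikit-learn’s roc_auc_score, with simulated data: any strictly increasing recalibration of the default probabilities produces exactly the same ROC measure.

```python
# Numerical check: a strictly monotonic recalibration of the default
# probabilities leaves the ROC accuracy measure unchanged, because ROC
# depends only on how counterparties are ranked. Data is simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
default_flag = rng.integers(0, 2, n)
score = rng.normal(0.0, 1.0, n) + 0.8 * default_flag  # noisy risk score
theoretical_pd = 1.0 / (1.0 + np.exp(-score))

# Any strictly increasing map, e.g. a logistic recalibration of the score.
mapped_pd = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * score)))

print(roc_auc_score(default_flag, theoretical_pd))  # identical AUC values
print(roc_auc_score(default_flag, mapped_pd))
```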

Another nuance in mapping the theoretical default rate to the actual default rate is very important. If the underlying theoretical model is highly accurate, as the theoretical default probability rises the mapped default probability will rise sharply toward 100%. If the underlying theoretical model is very inaccurate, as the theoretical default probability rises, the mapped default probability will either rise very modestly or remain flat, and the highest level it will achieve will be very low.

We illustrate this phenomenon with a deliberately silly model that postulates that the default probability of a public firm is lowest when the CEO has very gray hair and highest when the CEO has no gray hair at all. The model builder asserts that default is highly correlated with the degree of grayness, as shown in the following table, where 10 indicates hair that is completely gray and 1 indicates hair that is not gray at all:

Case 1: No Accuracy

Grayness Scale   Model Default Rate   Actual Default Rate
      1                99.00%                1.00%
      2                50.00%                1.00%
      3                30.00%                1.00%
      4                10.00%                1.00%
      5                 3.00%                1.00%
      6                 1.00%                1.00%
      7                 0.20%                1.00%
      8                 0.10%                1.00%
      9                 0.05%                1.00%
     10                 0.01%                1.00%

In Case 1, the actual default rate for each of the ten “gray scales” is 1% everywhere, in spite of the analyst’s assertions that the model works. The correct mapping to actual defaults is f(gray scale) = 1.00%. Instead of inserting the gray scale into this mapping function, we could have input the theoretical default probability. In actual practice, the Kamakura Risk Information Services mapping is done at the individual observation level (such as the 1.4 million observations in the KRIS version 4.1 Merton model) using logistic regression. For exposition purposes, we assume that the mapping is a linear function of the gray scale. The implications of this mapping are very strong: as the risk index (the gray scale or the theoretical default probability) increases, the default probability stays constant and never rises above 1%. This is not indicative of bad mapping; it is indicative of a model with zero accuracy.
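As one illustration of observation-level mapping (a sketch only, not the KRIS implementation), we can simulate the Case 1 data and fit a logistic regression of the actual default flag on the gray scale. Because the actual default rate is 1% at every gray scale level, the fitted mapping comes out flat at roughly 1%:

```python
# Sketch (not the KRIS implementation): simulate Case 1 data and map at
# the individual observation level with logistic regression. Because the
# actual default rate is 1% at every gray scale, the fitted mapping is
# flat at roughly 1%.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
gray = np.repeat(np.arange(1, 11), 10_000)                # 10,000 firms per level
default_flag = (rng.uniform(size=gray.size) < 0.01).astype(int)

mapping = LogisticRegression().fit(gray.reshape(-1, 1), default_flag)
mapped_pd = mapping.predict_proba(np.arange(1, 11).reshape(-1, 1))[:, 1]
print(np.round(mapped_pd, 4))   # close to 0.01 at every gray scale level
```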

Consider Case 2, where the gray scale is more consistent with the probability of default:

Case 2: Moderate Correlation

Grayness Scale   Model Default Rate   Actual Default Rate
      1                99.00%               10.00%
      2                50.00%                9.00%
      3                30.00%                8.00%
      4                10.00%                7.00%
      5                 3.00%                6.00%
      6                 1.00%                5.00%
      7                 0.20%                4.00%
      8                 0.10%                3.00%
      9                 0.05%                2.00%
     10                 0.01%                1.00%

In this case, the mapping function that most accurately converts the gray scale to the actual default rates is f(gray scale) = 11% - 1% × (gray scale). The mapping function that uses the “model” default rates as input (instead of the gray scale) is not linear, but it produces the same mapped default rates. Note something very important in this example: even as the theoretical default rates approach 99%, the mapped or expected default rates never exceed 10%. This is because of the fundamental lack of accuracy of the gray scale model; even at the riskiest default probability or gray scale level, the actual default rate never exceeds 10%. The mapping process should preserve this implication. Mapping the theoretical default rate of 99% or the gray scale of 1 to any default probability higher than 10% is a serious modeling error that violates the requirements of good corporate governance and the Basel II guidelines.
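The Case 2 mapping function can be verified directly from the table with an ordinary least-squares line; a sketch assuming numpy (the fit is exact here because the tabulated data is exactly linear):

```python
# Verifying the Case 2 mapping with an ordinary least-squares line
# through the table above (the fit is exact because the data is linear).
import numpy as np

gray = np.arange(1, 11)
actual = np.array([0.10, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01])

slope, intercept = np.polyfit(gray, actual, 1)
print(f"f(gray scale) = {intercept:.2%} {slope:+.2%} × (gray scale)")
# f(gray scale) = 11.00% -1.00% × (gray scale)
```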

Let’s look at one more case, Case 3, where the gray scale model is much more consistent with actual default rates:

Case 3: High Correlation

Grayness Scale   Model Default Rate   Actual Default Rate
      1                99.00%               37.00%
      2                50.00%               33.00%
      3                30.00%               29.00%
      4                10.00%               25.00%
      5                 3.00%               21.00%
      6                 1.00%               17.00%
      7                 0.20%               13.00%
      8                 0.10%                9.00%
      9                 0.05%                5.00%
     10                 0.01%                1.00%

On the Case 3 data set, the gray scale model works much better. The mapping function is f(gray scale) = 41% - 4% × (gray scale). The mapped gray scale model shows no default probability lower than 1% or higher than 37%, and the mapped default probability rises by 36 percentage points, from 1% at a gray scale of 10 to 37% at a gray scale of 1.
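The same least-squares check, applied to the Case 3 table, recovers the fit and the 1%-to-37% range:

```python
# The same least-squares check on the Case 3 table recovers
# f(gray scale) = 41% - 4% × (gray scale), with mapped PDs from 1% to 37%.
import numpy as np

gray = np.arange(1, 11)
actual = np.array([0.37, 0.33, 0.29, 0.25, 0.21, 0.17, 0.13, 0.09, 0.05, 0.01])

slope, intercept = np.polyfit(gray, actual, 1)
print(f"f(gray scale) = {intercept:.2%} {slope:+.2%} × (gray scale)")
print(f"mapped range: {actual.min():.0%} to {actual.max():.0%}")   # 1% to 37%
```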

These implications are very important for model building and for drawing conclusions from a theoretical model that has been properly benchmarked. If a mapped model shows only a very gentle rise to a low level of default probabilities at the highest risk grade, it means that the underlying theoretical model is not very accurate. It does not mean that there is anything wrong with the mapping procedures. It is the statistical equivalent of trying to make a silk purse out of a sow’s ear.

As Xiaoming Wang, Li Li, and I reported in “Advanced Credit Model Testing to Meet Basel II Requirements: How Things Have Changed,” (The Basel Handbook, second edition, RISK Publications, 2007, Michael Ong editor), the relative inaccuracy of the theoretical Merton model produces this kind of gently sloping “mapped model.” Even at the 99% theoretical Merton default probability level, the actual default rate is well under 10%. Doing the mapping of theoretical to actual default probabilities accurately is essential to best practice risk management. One must take great care not to overstate the accuracy of a model by mapping the theoretical default probabilities to a higher level than the empirical evidence indicates.
