10.12.2019 | Dr. Dibyajyoti Dutta

Credit Risk Analysis Using Machine Learning

Approving loans without proper scientific evaluation increases the risk of default. This can lead to the bankruptcy of lending agencies and, consequently, to the destabilization of the banking system. This is what happened in the 2008 financial crisis, which affected the world economy adversely. Three components determine the loss a firm faces as a result of a loan default:

  1. Probability of Default (PD)
  2. Exposure at Default (EAD)
  3. Loss given Default (LGD)

The expected loss (ELoss) is the simple product of these three quantities:

ELoss = PD · EAD · LGD
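As a quick illustration with purely hypothetical numbers: a PD of 2%, an EAD of 10,000 € and an LGD of 60% give an expected loss of ELoss = 0.02 · 10,000 € · 0.6 = 120 €.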

Our focus here is on the Probability of Default (PD). We will look at the example of the German Credit dataset, which is taken from Kaggle.

Data Exploration

As a first step, we look at the data. The NumPy and pandas libraries in Python are excellent tools for data exploration. For data visualization we mainly use the matplotlib and seaborn libraries. We import these libraries into our workspace.
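A minimal sketch of the imports used throughout this analysis could look like this:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```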

Now we import the data file and inspect it.
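A sketch of loading and inspecting the data (the file name is an assumption):

```python
# Load the German Credit data (file name assumed) and show the first rows
df = pd.read_csv("german_credit_data.csv", index_col=0)
df.head()
```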

| Age | Sex    | Job | Housing | Saving accounts | Checking accounts | Duration | Purpose   | Risk |
|-----|--------|-----|---------|-----------------|-------------------|----------|-----------|------|
| 67  | male   | 2   | own     | NaN             | little            | 6        | radio/TV  | good |
| 22  | female | 2   | own     | little          | moderate          | 48       | radio/TV  | bad  |
| 49  | male   | 1   | own     | little          | NaN               | 12       | education | good |
| 45  | male   | 2   | free    | little          | little            | 42       | furniture | good |
| 53  | male   | 2   | free    | little          | little            | 24       | car       | bad  |


The columns before Risk are the feature variables, and the last column (Risk) is the target variable, which we want to classify as “good” or “bad”. The purpose of the Machine Learning model is to capture the relations between the features and the target variable and to predict the credit risk of future applicants.

Most Machine Learning models cannot handle missing values in the feature space, or missing values can diminish the predictive power of the model. Therefore, we need to check for them:
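A quick way to count them, assuming the DataFrame `df` from above:

```python
# Number of missing entries per column
df.isnull().sum()
```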

| Features          | missing_value |
|-------------------|---------------|
| Age               | 0             |
| Sex               | 0             |
| Job               | 0             |
| Housing           | 0             |
| Saving accounts   | 183           |
| Checking accounts | 394           |
| Credit amount     | 0             |
| Duration          | 0             |
| Purpose           | 0             |
| Risk              | 0             |


We will handle these missing values in a later section. But before we actively change the dataset, we continue our exploration by trying to figure out which of the features affect the risk.

Data visualization

Are females more likely to default, or is it less risky to lend money to rich people? These kinds of questions can be answered qualitatively by visualizing the data. We create a sub-table for each feature variable in question.
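For the feature “Sex”, such a sub-table could be built with a normalized cross-tabulation (a sketch, reusing `df` from above):

```python
# Share of good/bad risk per Sex category, in percent
pd.crosstab(df["Sex"], df["Risk"], normalize="index") * 100
```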

| Sex \ Risk | bad   | good  |
|------------|-------|-------|
| female     | 35.16 | 64.83 |
| male       | 27.68 | 72.31 |

The values presented here are percentages. It seems the feature “Sex” contains valuable information for the classification. In this data set, females are slightly more likely to default (this cannot, however, be used as a general conclusion). We can see this even better in the graph below.

A similar analysis can be performed based on the wealth in the checking account, which is categorized as ‘little’, ‘moderate’ or ‘rich’.

The graph tells us that wealthy people are less likely to default. Now that we have seen the importance of the features, it is time to employ our Machine Learning model which can quantitatively capture the feature patterns and subsequently predict the risk.

Building models

From the raw data we saw that the target variable Risk is categorical (‘good’ or ‘bad’). Therefore, this is a classification problem. There are many classification algorithms in the literature, with the Random Forest classifier being considered one of the standard choices.

As mentioned before, many columns have missing entries as well as non-numerical data. Machine Learning algorithms cannot handle these values directly. Therefore, we need to clean the data and transform it into numerical form.

It is possible to drop a row completely if it contains any missing values. However, if we did that, we would lose a large amount of data, as we saw in the Data Exploration section. Instead, we replace the missing values with a separate category ‘None’ (even though this is only an approximation). We can then encode the categorical values into numerical values, for example with the OneHotEncoder from the scikit-learn library in Python. The ‘good’ and ‘bad’ risk labels can be transformed into ‘1’ and ‘0’. The Housing column can be transformed into three columns named ‘own’, ‘rent’ and ‘free’, where an applicant who owns a house gets a ‘1’ in the ‘own’ column and a ‘0’ in the other two. We then drop one of these columns to avoid redundancy.
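A sketch of these preprocessing steps, assuming the column names shown earlier (pandas’ `get_dummies` is used here in place of scikit-learn’s OneHotEncoder; it produces the same kind of indicator columns):

```python
# Treat missing categories as their own level 'None'
for col in ["Saving accounts", "Checking accounts"]:
    df[col] = df[col].fillna("None")

# Encode the target: 'good' -> 1, 'bad' -> 0
df["Risk"] = (df["Risk"] == "good").astype(int)

# One-hot encode the categorical features; drop_first removes one
# indicator column per feature to avoid redundancy
categorical = ["Sex", "Housing", "Saving accounts", "Checking accounts", "Purpose"]
df = pd.get_dummies(df, columns=categorical, drop_first=True)
```

After these steps, the dataset looks like this: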

| Age | Sex | Job | own | rent | … | radio/TV | repairs | vacation | risk |
|-----|-----|-----|-----|------|---|----------|---------|----------|------|
| 67  | 1   | 2   | 1   | 0    | … | 1        | 0       | 0        | 1    |
| 22  | 0   | 2   | 1   | 0    | … | 1        | 0       | 0        | 0    |
| 49  | 1   | 1   | 1   | 0    | … | 0        | 0       | 0        | 1    |


Now that we have all the features and the target variable in numerical form, we can finally train and fit our model. For that we first split our data into a train set and a test set. The train set is used to capture the relations between the features and the target variable, while the test set is used to verify the performance of the model.
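A sketch of the split (the test size and random seed are illustrative choices):

```python
from sklearn.model_selection import train_test_split

# Separate features and target, then hold out a test set
X = df.drop(columns="Risk")
y = df["Risk"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
```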

Hyperparameters are the parameters which have to be set by the user before training; by contrast, normal parameters are tuned while the model is trained. This model has several hyperparameters, and therefore the number of possible combinations grows exponentially. We tune the hyperparameters by automating the process with a so-called parameter grid in scikit-learn while training the model.
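One way to set this up is scikit-learn’s GridSearchCV; the grid below is an illustrative choice rather than the exact one used here:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative hyperparameter grid for the Random Forest classifier
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [4, 6, 8, None],
    "min_samples_leaf": [1, 5, 10],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="f1",
    cv=5,
)
search.fit(X_train, y_train)
model = search.best_estimator_
```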

Once the hyperparameters are tuned, the metrics of the model are calculated. The results can be evaluated with a confusion matrix, which is a useful tool for measuring the quality of a classification.
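A sketch of how the matrix and the scores could be obtained, assuming the fitted `model` and test split from above:

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Evaluate the tuned model on the held-out test set
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred), f1_score(y_test, y_pred))
```

For our test set, the confusion matrix looks like this: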

|          | positive | negative |
|----------|----------|----------|
| positive | 34       | 40       |
| negative | 12       | 164      |


The accuracy and F1 score of the model are 0.79 and 0.86, respectively. Despite the limited amount of data and the many missing values, the model performance is quite good.

Feature Selection

From the hyperparameter-tuned model we can now see which features are most relevant. The plot below shows that the most relevant features for the classification are ‘Sex’, ‘Job’, ‘Age’ and ‘Checking account’. Sometimes the dataset is huge and consists of a multitude of features. In those cases, it can be wise to drop the least relevant features to reduce computation time.
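A sketch of how such an importance ranking could be plotted from the tuned model:

```python
# Rank features by the importance the fitted forest assigns to them
importances = pd.Series(model.feature_importances_, index=X.columns)
importances.sort_values().plot(kind="barh")
plt.xlabel("Feature importance")
plt.show()
```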

Acceptance rate and bad rate

The acceptance rate is the percentage of new loans that are accepted. The goal is to keep the number of defaults as low as possible while maximizing profit by handing out loans. Based on our model, we can calculate a threshold for a given acceptance rate. If the probability of default for a new credit is lower than the threshold, it is accepted, otherwise it is rejected:

| Loan | prob_default | threshold | accept or reject |
|------|--------------|-----------|------------------|
| 1    | 0.70         | 0.81      | Accept           |
| 2    | 0.83         | 0.81      | Reject           |


Assume the acceptance rate of a bank is 85%. What should the threshold be? For this data set and with our model, the threshold comes out to 0.55. The NumPy package can be used to calculate it:
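A sketch of the calculation, assuming the fitted `model` and test split from above (with our encoding, column 0 of `predict_proba` holds the probability of the ‘bad’ class):

```python
# Predicted default probabilities for the test set
prob_default = model.predict_proba(X_test)[:, 0]

# For an 85% acceptance rate, accept the 85% of loans with the lowest
# predicted default probability; the threshold is the 85th percentile
threshold = np.quantile(prob_default, 0.85)
```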

The figure below shows the threshold as the black vertical line:

For different acceptance rates, the threshold values vary. Even with the calculated threshold, some accepted loans will still default. The bad rate is the percentage of accepted loans that default. In our model, the bad rate comes out to 23%.
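Under the same assumptions as above, the bad rate could be computed like this:

```python
# Bad rate: share of accepted loans that actually default
accepted = prob_default < threshold
bad_rate = (y_test[accepted] == 0).mean()  # 0 = 'bad' under our encoding
```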

Conclusion

Machine Learning based models are very pragmatic when it comes to minimizing risk. An increasing number of banks are choosing such models for their risk analysis and to minimize losses by preventing defaults. There is a multitude of Machine Learning techniques for predicting real values, classification, clustering and so on; we have only scratched the surface with our case. However, certain things should be considered when applying Machine Learning techniques:

  1. The model itself should explain the result. If an algorithm denies an applicant’s loan, it is important for the bank to know why. Otherwise it could even face legal challenges from the applicant. Therefore, it is sometimes reasonable to use relatively simple models (like logistic regression, gradient boosted trees or, as in our case, Random Forest) instead of complicated models like Neural Networks.
  2. Recently, Apple’s credit card was accused of discriminating against women. Models can be socially prejudiced because they take their insights from historical data. In our case too, the model predicted that females are more likely to default. This is also because we have built a discrete-time hazard model, in which the probability of default is a point-in-time event. A good model, however, should incorporate how the impact of the features on the risk evolves over time. These kinds of models are called Through-the-Cycle (TTC) models; they consider the influence of the macroeconomic situation, social evolution and other factors.
Tags: credit risk, data science, machine learning, probability of default, through the cycle, TTC
Dr. Dibyajyoti Dutta

Dibyajyoti holds a degree from HBNI-University in India and a PhD in Physics from TU Kaiserslautern. His work has been published in numerous international journals and conference proceedings. He specialises in the financial market, conducting mathematical modeling and data analysis.

