A few questions business leaders frequently ask after seeing the results of highly accurate machine learning models are:

- Are machine learning models interpretable and transparent?
- How can the results of the model be used to develop a business strategy?
- Can the predictions from the model be used to explain to regulators why something was rejected or accepted based on the model’s prediction?
DataRobot provides many diagnostics, such as partial dependence, feature impact, and prediction explanations, to answer these questions, and those diagnostics can be used to convert predictions into prescriptions for the business. This vignette covers prediction explanations; partial dependence is covered in detail in the companion vignette “Interpreting Predictive Models Using Partial Dependence Plots”.
The DataRobot modeling engine is a commercial product that supports the rapid development and evaluation of a large number of different predictive models from a single data source. The open-source R package datarobot allows users of the DataRobot modeling engine to interact with it from R, creating new modeling projects, examining model characteristics, and generating predictions from any of these models for a specified dataset. This vignette illustrates how to use the datarobot package to interact with DataRobot, build models, make predictions with a model, and then use prediction explanations to explain why the model predicts high or low for a given case. Prediction explanations can be used to answer the questions mentioned earlier.
Let’s load datarobot and the other packages used in this vignette:

```r
library(datarobot)
library(httr)
library(knitr)
library(data.table)
```
To access the DataRobot modeling engine, it is necessary to establish an authenticated connection, which can be done in one of two ways. In both cases, the necessary information is an endpoint - the URL address of the specific DataRobot server being used - and a token, a previously validated access token.
- The token is unique to each DataRobot modeling engine account and can be found in the account profile section of the DataRobot web app.
- The endpoint depends on the DataRobot modeling engine installation (cloud-based, on-premise, …) you are using; contact your DataRobot administrator for the endpoint to use. The endpoint for DataRobot cloud accounts is https://app.datarobot.com/api/v2.
The first access method uses a YAML configuration file with these two elements - labeled token and endpoint - located at $HOME/.config/datarobot/drconfig.yaml. If this file exists when the datarobot package is loaded, a connection to the DataRobot modeling engine is automatically established. It is also possible to establish a connection using this YAML file via the ConnectToDataRobot function, by specifying the configPath parameter.
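For reference, a minimal drconfig.yaml contains just the two keys named above; the values below are placeholders, not real credentials:

```yaml
endpoint: https://app.datarobot.com/api/v2
token: <YOUR API TOKEN HERE>
```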
The second method of establishing a connection to the DataRobot modeling engine is to call the function ConnectToDataRobot with the endpoint and token parameters.
```r
library(datarobot)
endpoint <- "https://<YOUR ENDPOINT HERE>/api/v2"
apiToken <- "<YOUR API TOKEN HERE>"
ConnectToDataRobot(endpoint = endpoint, token = apiToken)
```
We will be using a sample dataset related to credit scoring, open sourced by LendingClub. Below is a summary of the variables.
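The modeling code later in this vignette assumes the sample has been read into a data frame named Lending. A minimal sketch of loading the data and producing the summary below; the file name is a placeholder for wherever you have saved your copy of the sample:

```r
# Hypothetical file name; substitute the path to your copy of the
# LendingClub credit-scoring sample.
Lending <- read.csv("lending_club_sample.csv", stringsAsFactors = FALSE)
kable(summary(Lending))
```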
| Variable |  |  |  |  |  |  |  |
|---|---|---|---|---|---|---|---|
| loan_amnt | Min. : 500 | 1st Qu.: 5000 | Median : 9600 | Mean :11036 | 3rd Qu.:15000 | Max. :35000 |  |
| funded_amnt | Min. : 500 | 1st Qu.: 5000 | Median : 9250 | Mean :10766 | 3rd Qu.:15000 | Max. :35000 |  |
| term | Length:10000 | Class :character | Mode :character |  |  |  |  |
| int_rate | Length:10000 | Class :character | Mode :character |  |  |  |  |
| installment | Min. : 15.69 | 1st Qu.: 163.47 | Median : 275.11 | Mean : 321.43 | 3rd Qu.: 427.04 | Max. :1276.60 |  |
| grade | Length:10000 | Class :character | Mode :character |  |  |  |  |
| sub_grade | Length:10000 | Class :character | Mode :character |  |  |  |  |
| emp_title | Length:10000 | Class :character | Mode :character |  |  |  |  |
| emp_length | Length:10000 | Class :character | Mode :character |  |  |  |  |
| home_ownership | Length:10000 | Class :character | Mode :character |  |  |  |  |
| annual_inc | Min. : 2000 | 1st Qu.: 40000 | Median : 58000 | Mean : 68203 | 3rd Qu.: 82000 | Max. :900000 | NA's :1 |
| verification_status | Length:10000 | Class :character | Mode :character |  |  |  |  |
| pymnt_plan | Length:10000 | Class :character | Mode :character |  |  |  |  |
| url | Length:10000 | Class :character | Mode :character |  |  |  |  |
| desc | Length:10000 | Class :character | Mode :character |  |  |  |  |
| purpose | Length:10000 | Class :character | Mode :character |  |  |  |  |
| title | Length:10000 | Class :character | Mode :character |  |  |  |  |
| zip_code | Length:10000 | Class :character | Mode :character |  |  |  |  |
| addr_state | Length:10000 | Class :character | Mode :character |  |  |  |  |
| dti | Min. : 0.00 | 1st Qu.: 8.16 | Median :13.41 | Mean :13.34 | 3rd Qu.:18.69 | Max. :29.99 |  |
| delinq_2yrs | Min. : 0.0000 | 1st Qu.: 0.0000 | Median : 0.0000 | Mean : 0.1482 | 3rd Qu.: 0.0000 | Max. :11.0000 | NA's :5 |
| earliest_cr_line | Length:10000 | Class :character | Mode :character |  |  |  |  |
| inq_last_6mths | Min. : 0.000 | 1st Qu.: 0.000 | Median : 1.000 | Mean : 1.067 | 3rd Qu.: 2.000 | Max. :25.000 | NA's :5 |
| mths_since_last_delinq | Min. : 0.00 | 1st Qu.: 18.00 | Median : 34.00 | Mean : 35.89 | 3rd Qu.: 53.00 | Max. :120.00 | NA's :6316 |
| mths_since_last_record | Min. : 0.00 | 1st Qu.: 0.00 | Median : 86.00 | Mean : 61.65 | 3rd Qu.:101.00 | Max. :119.00 | NA's :9160 |
| open_acc | Min. : 1.000 | 1st Qu.: 6.000 | Median : 9.000 | Mean : 9.335 | 3rd Qu.:12.000 | Max. :39.000 | NA's :5 |
| pub_rec | Min. :0.00000 | 1st Qu.:0.00000 | Median :0.00000 | Mean :0.06013 | 3rd Qu.:0.00000 | Max. :3.00000 | NA's :5 |
| revol_bal | Min. : 0 | 1st Qu.: 3524 | Median : 8646 | Mean : 14271 | 3rd Qu.: 16952 | Max. :1207359 |  |
| revol_util | Min. : 0.00 | 1st Qu.: 25.00 | Median : 48.70 | Mean : 48.47 | 3rd Qu.: 71.90 | Max. :108.80 | NA's :23 |
| total_acc | Min. : 1.00 | 1st Qu.:13.00 | Median :20.00 | Mean :22.09 | 3rd Qu.:29.00 | Max. :90.00 | NA's :5 |
| initial_list_status | Length:10000 | Class :character | Mode :character |  |  |  |  |
| mths_since_last_major_derog | Mode:logical | NA's:10000 |  |  |  |  |  |
| policy_code | Min. :1 | 1st Qu.:1 | Median :1 | Mean :1 | 3rd Qu.:1 | Max. :1 |  |
| is_bad | Min. :0.0000 | 1st Qu.:0.0000 | Median :0.0000 | Mean :0.1295 | 3rd Qu.:0.0000 | Max. :1.0000 |  |
Let’s divide our data into training and test sets. We can use the training data to create a DataRobot project with the StartProject function and the test data to make predictions and generate prediction explanations. Creating projects is described in detail in the companion vignette “Introduction to the DataRobot R Package.” The specific sequence used here was:
<- "is_bad"
target <- "Credit Scoring"
projectName
set.seed(1111)
<- sample(nrow(Lending), round(0.9 * nrow(Lending)), replace = FALSE)
split <- Lending[split,]
train <- Lending[-split,]
test
<- StartProject(dataSource = train,
project projectName = projectName,
target = target,
workerCount = "max",
wait = TRUE)
Once the modeling process has completed, the ListModels function returns an S3 object of class “listOfModels” that characterizes all of the models in a specified DataRobot project.
```r
results <- as.data.frame(ListModels(project))
kable(head(results), longtable = TRUE, booktabs = TRUE, row.names = TRUE)
```
| | modelType | expandedModel | modelId | blueprintId | featurelistName | featurelistId | samplePct | validationMetric |
|---|---|---|---|---|---|---|---|---|
1 | ENET Blender | ENET Blender | 5cdaff5621538736b976979e | d451f445e32c473bd7250b175e2a8759 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 64 | 0.32417 |
2 | Gradient Boosted Greedy Trees Classifier with Early Stopping | Gradient Boosted Greedy Trees Classifier with Early Stopping::One-Hot Encoding::Univariate credibility estimates with ElasticNet::Category Count::Converter for Text Mining::Auto-Tuned Word N-Gram Text Modeler using token occurrences::Missing Values Imputed::Search for differences::Search for ratios | 5cdafc69215387127c7697ac | 788bd071d49fc43acc55e646c3341445 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 64 | 0.32456 |
3 | ENET Blender | ENET Blender | 5cdaff5621538736b97697a0 | 64080ee61be8e47ab6e09c35d9868972 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 64 | 0.32475 |
4 | Advanced AVG Blender | Advanced AVG Blender | 5cdaff5521538736b976979c | 00790837050987b652403c758b2a90f1 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 64 | 0.32485 |
5 | Gradient Boosted Greedy Trees Classifier with Early Stopping | Gradient Boosted Greedy Trees Classifier with Early Stopping::One-Hot Encoding::Univariate credibility estimates with ElasticNet::Category Count::Converter for Text Mining::Auto-Tuned Word N-Gram Text Modeler using token occurrences::Missing Values Imputed::Search for differences::Search for ratios | 5cdafe722153872da07697be | 788bd071d49fc43acc55e646c3341445 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 80 | 0.32549 |
6 | AVG Blender | AVG Blender | 5cdaff5521538736b976979a | fe81e57d50d41f91322af82b7973d878 | Informative Features | 5cdafa9c319c7b34caeeeb75 | 64 | 0.32558 |
Let’s look at some model predictions. Model predictions are generated with the Predict function:
```r
bestModel <- GetRecommendedModel(project)
bestPredictions <- Predict(bestModel, test, type = "probability")
testPredictions <- data.frame(original = test$is_bad, prediction = bestPredictions)
kable(head(testPredictions), longtable = TRUE, booktabs = TRUE, row.names = TRUE)
```
| | original | prediction |
|---|---|---|
1 | 0 | 0.0714697 |
2 | 0 | 0.1396974 |
3 | 0 | 0.0925635 |
4 | 0 | 0.0593510 |
5 | 0 | 0.1172840 |
6 | 0 | 0.1049122 |
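As a quick sanity check on these held-out predictions, we can score them by hand. A minimal sketch, assuming the project’s validation metric is LogLoss (the usual default for binary classification projects):

```r
# Binary log loss of the predicted default probabilities against the
# observed is_bad outcomes in the test set.
logLoss <- -mean(testPredictions$original * log(testPredictions$prediction) +
                   (1 - testPredictions$original) * log(1 - testPredictions$prediction))
logLoss
```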
For each prediction, DataRobot provides an ordered list of explanations; the number of explanations is controlled by the maxExplanations setting described below. Each explanation is a feature from the dataset and its corresponding value, accompanied by a qualitative indicator of the explanation’s strength: strong (+++ / ---), medium (++ / --), or weak (+ / -) positive or negative influence.
There are three main inputs you can set for DataRobot to use when computing prediction explanations:

1. maxExplanations: the maximum number of explanations for each prediction. The default is 3.
2. thresholdLow: the probability threshold below which DataRobot should calculate prediction explanations.
3. thresholdHigh: the probability threshold above which DataRobot should calculate prediction explanations.
```r
explanations <- GetPredictionExplanations(bestModel, test, maxExplanations = 3,
                                          thresholdLow = 0.25, thresholdHigh = 0.75)
kable(head(explanations), longtable = TRUE, booktabs = TRUE, row.names = TRUE)
```
| | rowId | prediction | class1Label | class1Probability | class2Label | class2Probability | reason1FeatureName | reason1FeatureValue | reason1QualitativeStrength | reason1Strength | reason1Label | reason2FeatureName | reason2FeatureValue | reason2QualitativeStrength | reason2Strength | reason2Label | reason3FeatureName | reason3FeatureValue | reason3QualitativeStrength | reason3Strength | reason3Label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | 0 | 1 | 0.0714697 | 0 | 0.9285303 | revol_util | 37.1 | --- | -0.2260173 | 1 | Notes |  | ++ | 0.1487372 | 1 | total_acc | 23 | -- | -0.1403030 | 1 |
2 | 1 | 0 | 1 | 0.1396974 | 0 | 0.8603026 | inq_last_6mths | 2 | +++ | 0.2362790 | 1 | total_acc | 27 | --- | -0.2090679 | 1 | purpose | Consolidation | ++ | 0.1447419 | 1 |
3 | 2 | 0 | 1 | 0.0925635 | 0 | 0.9074365 | annual_inc | 40000 | +++ | 0.3073045 | 1 | purpose | FICO score 762 want's to buy a new car | -- | -0.2362631 | 1 | revol_util | 18.3 | -- | -0.2127375 | 1 |
4 | 3 | 0 | 1 | 0.0593510 | 0 | 0.9406490 | revol_util | 39.5 | --- | -0.2576392 | 1 | total_acc | 24 | --- | -0.2124569 | 1 | annual_inc | 112000 | -- | -0.2024626 | 1 |
5 | 4 | 0 | 1 | 0.1172840 | 0 | 0.8827160 | revol_util | 62.1 | +++ | 0.2626324 | 1 | earliest_cr_line (Day of Week) | 2 | -- | -0.1870957 | 1 | total_acc | 19 | -- | -0.1591037 | 1 |
6 | 5 | 0 | 1 | 0.1049122 | 0 | 0.8950878 | inq_last_6mths | 3 | ++ | 0.2975595 | 1 | total_acc | 24 | -- | -0.2587315 | 1 | purpose | Paying off a personal loan | ++ | 0.2087095 | 1 |
From the example above, you could answer the question “Why did the model give one of the customers a 97% probability of defaulting?” The top explanation shows that the purpose of that loan was “credit card small business”, and we can also see that whenever the model predicts a high probability of default, the loan purpose tends to be related to small business.
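To check such patterns systematically rather than by eye, you can tabulate which feature appears as the top explanation among the high-probability rows. A minimal sketch using data.table (loaded earlier); the 0.75 cutoff simply mirrors the thresholdHigh used above:

```r
# Count how often each feature is the top (reason 1) explanation
# among rows the model scores above the high threshold.
explanationsDT <- as.data.table(explanations)
explanationsDT[class1Probability > 0.75, .N, by = reason1FeatureName][order(-N)]
```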
Some notes on explanations:
- If the data points are very similar, the explanations can list the same rounded values.
- It is possible to have an explanation state of MISSING if a “missing value” was important in making the prediction.
- Typically, the top explanations for a prediction have the same direction as the outcome, but with interaction effects or correlations among variables, an explanation could, for instance, have a strong positive impact on a negative prediction.
In some projects, such as insurance projects, the prediction adjusted by exposure is more useful to look at than the raw prediction. For example, in a project with an exposure column, the raw prediction (e.g., claim counts) is divided by exposure (e.g., time), so the adjusted prediction provides insight into the predicted claim counts per unit of time. To include that information, set excludeAdjustedPredictions to FALSE in the corresponding method calls.
```r
explanations <- GetPredictionExplanations(bestModel, test, maxExplanations = 3,
                                          thresholdLow = 0.25, thresholdHigh = 0.75,
                                          excludeAdjustedPredictions = FALSE)
kable(head(explanations), longtable = TRUE, booktabs = TRUE, row.names = TRUE)
```
| | rowId | prediction | class1Label | class1Probability | class2Label | class2Probability | reason1FeatureName | reason1FeatureValue | reason1QualitativeStrength | reason1Strength | reason1Label | reason2FeatureName | reason2FeatureValue | reason2QualitativeStrength | reason2Strength | reason2Label | reason3FeatureName | reason3FeatureValue | reason3QualitativeStrength | reason3Strength | reason3Label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | 0 | 1 | 0.0714697 | 0 | 0.9285303 | revol_util | 37.1 | --- | -0.2260173 | 1 | Notes |  | ++ | 0.1487372 | 1 | total_acc | 23 | -- | -0.1403030 | 1 |
2 | 1 | 0 | 1 | 0.1396974 | 0 | 0.8603026 | inq_last_6mths | 2 | +++ | 0.2362790 | 1 | total_acc | 27 | --- | -0.2090679 | 1 | purpose | Consolidation | ++ | 0.1447419 | 1 |
3 | 2 | 0 | 1 | 0.0925635 | 0 | 0.9074365 | annual_inc | 40000 | +++ | 0.3073045 | 1 | purpose | FICO score 762 want's to buy a new car | -- | -0.2362631 | 1 | revol_util | 18.3 | -- | -0.2127375 | 1 |
4 | 3 | 0 | 1 | 0.0593510 | 0 | 0.9406490 | revol_util | 39.5 | --- | -0.2576392 | 1 | total_acc | 24 | --- | -0.2124569 | 1 | annual_inc | 112000 | -- | -0.2024626 | 1 |
5 | 4 | 0 | 1 | 0.1172840 | 0 | 0.8827160 | revol_util | 62.1 | +++ | 0.2626324 | 1 | earliest_cr_line (Day of Week) | 2 | -- | -0.1870957 | 1 | total_acc | 19 | -- | -0.1591037 | 1 |
6 | 5 | 0 | 1 | 0.1049122 | 0 | 0.8950878 | inq_last_6mths | 3 | ++ | 0.2975595 | 1 | total_acc | 24 | -- | -0.2587315 | 1 | purpose | Paying off a personal loan | ++ | 0.2087095 | 1 |
This note has described prediction explanations, which are useful for understanding why a model predicts high or low for a specific case; DataRobot also provides the qualitative strength of each explanation. Prediction explanations can be used to develop a sound business strategy by acting on the explanations responsible for high or low predictions, and they are also useful for explaining the actions taken based on model predictions to a regulatory or compliance department within an organization.