PIE
The PIE package implements Partially Interpretable Estimators (PIE), a framework that jointly trains an interpretable model and a black-box model to achieve high predictive performance as well as partial model transparency.
To install the development version from GitHub, run the following:
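A minimal sketch of the installation step, assuming the package is hosted on GitHub and installed with devtools; the repository path below is a placeholder, not a confirmed location:

# install.packages("devtools")            # if devtools is not already installed
devtools::install_github("owner/PIE")     # placeholder repository path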
This section demonstrates how to prepare a dataset and apply the PIE framework by fitting a model and generating predictions.
The function data_process() allows you to process a dataset into the format required by the PIE model, including cross-validation splits (training, validation, and testing) and group indicators for the group lasso.
library(PIE)
# Load the wine quality dataset
data("winequality")
# Which columns are numerical?
num_col <- 1:11
# Which columns are categorical?
cat_col <- 12
# Which column is the response?
y_col <- ncol(winequality)
# Data Processing
dat <- data_process(X = as.matrix(winequality[, -y_col]),
                    y = winequality[, y_col],
                    num_col = num_col, cat_col = cat_col, y_col = y_col)
Once the data is prepared, you can use the PIE_fit() function to train a PIE model. In this example, we fit with only 5 iterations, using group lasso and XGBoost models, as shown in the sketch below.
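A minimal sketch of the fitting call, assuming the object returned by data_process() can be passed directly and that the argument name iter is illustrative rather than the exact PIE_fit() signature (see ?PIE_fit):

# Fit a PIE model with 5 iterations
# (argument names here are illustrative; consult ?PIE_fit for the exact interface)
fit <- PIE_fit(dat, iter = 5)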
Once your PIE model is trained, you can use the
PIE_predict()
function to predict on test data.
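A minimal sketch of the prediction step, assuming the processed object exposes a test split under an illustrative field name:

# Predict on the held-out test data
# (the dat$test field name is assumed for illustration)
pred <- PIE_predict(fit, dat$test)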