Implements a novel predictive model, Partially Interpretable Estimators (PIE), which jointly trains an interpretable model and a black-box model to achieve high predictive performance as well as partial model interpretability. See the paper, Wang, Yang, Li, and Wang (2021) <doi:10.48550/arXiv.2105.02410>.
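A rough conceptual sketch of that split, built only from the package's declared dependencies (splines, gglasso, xgboost): an interpretable spline/group-lasso component plus a black-box refinement fitted to its residuals. This is an illustration of the idea, not the PIE package's API; the data and object names are made up.

    # Conceptual sketch only -- NOT the PIE package's API.
    library(splines)
    library(gglasso)
    library(xgboost)

    set.seed(1)
    n <- 500
    x <- matrix(rnorm(n * 3), n, 3)
    colnames(x) <- paste0("x", 1:3)
    y <- sin(x[, 1]) + x[, 2]^2 + x[, 1] * x[, 3] + rnorm(n, sd = 0.3)

    # Interpretable component: B-spline expansion of each feature, with a
    # group lasso so a feature's whole spline basis is kept or dropped together.
    B <- do.call(cbind, lapply(1:3, function(j) bs(x[, j], df = 5)))
    grp <- rep(1:3, each = 5)
    fit_int <- gglasso(B, y, group = grp, loss = "ls", lambda = 0.05)
    beta <- coef(fit_int)                        # intercept + spline coefficients
    pred_int <- as.numeric(cbind(1, B) %*% beta)

    # Black-box refinement: boost whatever structure the interpretable part
    # misses (e.g. the x1*x3 interaction) by fitting xgboost to the residuals.
    res <- y - pred_int
    dtrain <- xgb.DMatrix(data = x, label = res)
    fit_bb <- xgb.train(params = list(objective = "reg:squarederror"),
                        data = dtrain, nrounds = 50)
    pred <- pred_int + predict(fit_bb, xgb.DMatrix(x))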
Version: 1.0.0
Depends: R (≥ 3.5.0), gglasso, xgboost
Imports: splines, stats
Suggests: knitr, rmarkdown
Published: 2025-01-27
DOI: 10.32614/CRAN.package.PIE
Author: Tong Wang [aut], Jingyi Yang [aut, cre], Yunyi Li [aut], Boxiang Wang [aut]
Maintainer: Jingyi Yang <jy4057 at stern.nyu.edu>
License: GPL-2
NeedsCompilation: no
Citation: PIE citation info
CRAN checks: PIE results
Reference manual: PIE.pdf
Vignettes: Introduction to PIE – A Partially Interpretable Model with Black-box Refinement (source)
Package source: PIE_1.0.0.tar.gz
Windows binaries: r-devel: not available, r-release: PIE_1.0.0.zip, r-oldrel: PIE_1.0.0.zip
macOS binaries: r-release (arm64): PIE_1.0.0.tgz, r-oldrel (arm64): not available, r-release (x86_64): not available, r-oldrel (x86_64): not available
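The released version can be installed from CRAN in the usual way:

    # Install the released version from CRAN, then load it
    install.packages("PIE")
    library(PIE)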
Please use the canonical form https://CRAN.R-project.org/package=PIE to link to this page.