Provides SHAP explanations of machine learning models. In applied machine learning there is a widespread belief that a trade-off must be struck between interpretability and accuracy. However, the field of Interpretable Machine Learning keeps producing new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP), introduced by Lundberg and Lee (2017) <arXiv:1705.07874>. The SHAP method calculates the contribution of each variable to a particular prediction. It is based on Shapley values, a technique from cooperative game theory. The R package 'shapper' is a port of the Python library 'shap'.
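A minimal usage sketch, assuming the Python 'shap' library is installed and reachable via 'reticulate', and that a 'randomForest' model is used as the black box. The call to `individual_variable_effect()` follows the package vignettes; treat the exact arguments as illustrative rather than authoritative.

```r
# Sketch: local SHAP attributions for one observation with shapper.
# Assumes Python 'shap' is available to reticulate.
library(shapper)
library(DALEX)
library(randomForest)

# Fit any predictive model -- here a random forest on the iris data.
model <- randomForest(Species ~ ., data = iris)

# Wrap the model in a DALEX explainer so shapper knows how to predict.
explainer <- DALEX::explain(model, data = iris[, -5], y = iris$Species)

# Compute SHAP values (Shapley-based attributions) for a single
# observation and plot the per-variable contributions.
ive <- shapper::individual_variable_effect(explainer,
                                           new_observation = iris[1, -5])
plot(ive)
```

If 'shap' is missing on the Python side, the package documents a helper, `install_shap()`, to set it up through 'reticulate'.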
|Imports:||reticulate, DALEX, ggplot2|
|Suggests:||covr, knitr, randomForest, rpart, testthat, markdown, qpdf|
|Author:||Szymon Maksymiuk [aut, cre], Alicja Gosiewska [aut], Przemyslaw Biecek [aut], Mateusz Staniak [ctb], Michal Burdukiewicz [ctb]|
|Maintainer:||Szymon Maksymiuk <sz.maksymiuk at gmail.com>|
|License:||GPL-2 | GPL-3 [expanded from: GPL]|
|CRAN checks:||shapper results|
How to use shapper for classification
How to use shapper for regression
|Windows binaries:||r-devel: shapper_0.1.3.zip, r-release: shapper_0.1.3.zip, r-oldrel: shapper_0.1.3.zip|
|macOS binaries:||r-release (arm64): shapper_0.1.3.tgz, r-release (x86_64): shapper_0.1.3.tgz, r-oldrel: shapper_0.1.3.tgz|
|Old sources:||shapper archive|
Please use the canonical form https://CRAN.R-project.org/package=shapper to link to this page.