Standardized accuracy (staccuracy) is a framework for expressing accuracy scores such that 50% represents a reference level of performance and 100% is a perfect prediction. The {staccuracy} package provides tools for creating staccuracy functions as well as some recommended staccuracy measures. It also provides functions for some classic performance metrics such as mean absolute error (MAE), root mean squared error (RMSE), and area under the receiver operating characteristic curve (AUCROC), as well as their winsorized versions where applicable.
You can install the official CRAN version of {staccuracy}:

install.packages('staccuracy')
The development version of {staccuracy} is thoroughly tested, but it might not be thoroughly documented. You can install it like so:

# install.packages("pak")
pak::pak("tripartio/staccuracy")
The basic challenge that {staccuracy} addresses is not only to measure the accuracy of model predictions but to intuitively indicate how relevant the accuracy scores are:
library(staccuracy)
#>
#> Attaching package: 'staccuracy'
#> The following object is masked from 'package:stats':
#>
#> mad
# Here's some data
actual_1 <- c(2.3, 4.5, 1.8, 7.6, 3.2)
# Here are some predictions of that data
predicted_1 <- c(2.5, 4.2, 1.9, 7.4, 3.0)
# Mean Absolute Error (MAE) measures the average error in the predictions
mae(actual_1, predicted_1)
#> [1] 0.2
# But how good is that?
# Mean Absolute Deviation (MAD) gives the natural variation in the actual data around the mean; this is a point of comparison for the MAE.
mad(actual_1)
#> [1] 1.736
# So, our predictions are better (lower) than the MAD, but how good, really?
# Create a standardized accuracy function to give us an easily interpretable metric:
my_mae_vs_mad_sa <- staccuracy(mae, mad)
my_mae_vs_mad_sa(actual_1, predicted_1)
#> [1] 0.9423963
# That's 94.2% standardized accuracy compared to the MAD. Pretty good!
# This and other convenient standardized accuracy scores are already built in
sa_mae_mad(actual_1, predicted_1) # staccuracy of MAE on MAD
#> [1] 0.9423963
sa_rmse_sd(actual_1, predicted_1) # staccuracy of RMSE on SD
#> [1] 0.95477
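To make the scaling concrete, here is a minimal Python sketch of the idea behind the session above. It is not the package's implementation; it assumes the standardized-accuracy scaling SA = 1 - 0.5 * (prediction error / reference deviation), which makes 50% mean "no better than the reference" and 100% mean a perfect prediction, and which reproduces the numbers shown in the R session:

```python
def mae(actual, predicted):
    # Mean absolute error of the predictions
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mad(actual):
    # Mean absolute deviation of the data around its mean
    m = sum(actual) / len(actual)
    return sum(abs(a - m) for a in actual) / len(actual)

def staccuracy(error_fn, reference_fn):
    # Build a standardized accuracy function from an error metric and a
    # reference deviation metric (mirrors staccuracy(mae, mad) above),
    # assuming the scaling SA = 1 - 0.5 * error / reference
    def sa(actual, predicted):
        return 1 - 0.5 * error_fn(actual, predicted) / reference_fn(actual)
    return sa

actual_1 = [2.3, 4.5, 1.8, 7.6, 3.2]
predicted_1 = [2.5, 4.2, 1.9, 7.4, 3.0]

sa_mae_mad = staccuracy(mae, mad)
print(round(sa_mae_mad(actual_1, predicted_1), 7))  # 0.9423963, as above
```

Under this scaling, MAE = 0.2 against MAD = 1.736 gives 1 - 0.5 * 0.2 / 1.736 = 0.9423963, matching the output of sa_mae_mad() in the R session.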