This package includes a collection of methods to create models for semi-supervised learning (e.g. fitting the model, making predictions, etc.), with a fairly intuitive interface that is easy to use. In the Model list section you can see the different classification, regression, and clustering models available.
Existing packages for semi-supervised learning do not offer an intuitive interface; this package aims to make semi-supervised learning easy and intuitive to use. SSLR addresses this by providing a single interface to different models, mainly built on the parsnip model interface. SSLR connects with parsnip to create different models without requiring many arguments in the fit functions. In addition, it wraps other packages, such as RSSL, behind the same interface: RSSL has its own, different interface, but thanks to SSLR you can fit its models through the same fit functions.
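The examples in the next section assume a train data frame containing the Wine data, with part of the labels replaced by NA to act as unlabeled instances (NA in the response is how SSLR marks unlabeled data), and cls, the index of the label column. Below is a minimal sketch of that setup; the dataset location, column name, and split proportion are assumptions for illustration, not part of the original examples:

library(SSLR)
library(parsnip)

data(wine, package = "SSLR")  # assumed source of the Wine data
train <- wine
cls <- which(names(train) == "Wine")  # index of the label column

# Hide 70% of the labels: NA marks an instance as unlabeled for SSLR
set.seed(1)
train$Wine[sample(nrow(train), floor(0.7 * nrow(train)))] <- NA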
To fit the model (for example, Self-Training), you must use one of: fit with a formula, fit_xy with x and y, or fit_x_u with x and unlabeled data. See the Model fitting section. For example, with the fit function:
rf <- rand_forest(trees = 100, mode = "classification") %>%
  set_engine("randomForest")

m <- selfTraining(learner = rf) %>%
  fit(Wine ~ ., data = train)
Or with the fit_xy function:
rf <- rand_forest(trees = 100, mode = "classification") %>%
  set_engine("randomForest")

m <- selfTraining(learner = rf) %>%
  fit_xy(x = train[, -cls], y = train$Wine)
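The third option, fit_x_u, takes the labeled and unlabeled portions separately. A minimal sketch, assuming fit_x_u accepts labeled features x, labels y, and unlabeled features x_U, and that labeled and unlabeled are hypothetical splits of the train data from above:

rf <- rand_forest(trees = 100, mode = "classification") %>%
  set_engine("randomForest")

# Hypothetical split: 'labeled' keeps rows with a known Wine value,
# 'unlabeled' keeps the rows whose label was set to NA
labeled   <- train[!is.na(train$Wine), ]
unlabeled <- train[is.na(train$Wine), ]

m <- selfTraining(learner = rf) %>%
  fit_x_u(x = labeled[, -cls], y = labeled$Wine,
          x_U = unlabeled[, -cls])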
These examples use the parsnip package, which has an intuitive interface for creating a Random Forest model, and that model can then be used in the SSLR package in a simple way.
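Once fitted, the model can be used to make predictions. A brief sketch, assuming a test data frame with the same predictor columns as train and that the fitted SSLR model supports the usual predict method returning class predictions:

preds <- predict(m, test)  # predicted class for each row of 'test'
head(preds)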