
We are actively working on mlr3 as the successor of mlr. This means we have less time to reply to mlr issues.

### Installation

Release version (CRAN):

```r
install.packages("mlr")
```

Development version (GitHub):

```r
remotes::install_github("mlr-org/mlr")
```

# Citing mlr in publications

Please cite our JMLR paper [bibtex].

Some parts of the package were created as part of other publications. If you use these parts, please cite the relevant work appropriately. An overview of all mlr related publications can be found here.

# Introduction

R does not define a standardized interface for its machine-learning algorithms. Therefore, for any non-trivial experiments, you need to write lengthy, tedious and error-prone wrappers to call the different algorithms and unify their respective output.

Additionally, you need to implement infrastructure to

• optimize hyperparameters
• select features
• cope with pre- and post-processing of data
• compare models in a statistically meaningful way

As this becomes computationally expensive, you might want to parallelize your experiments as well. This often forces users to make unsatisfying trade-offs in their experiments due to time constraints or a lack of programming expertise.

mlr provides this infrastructure so that you can focus on your experiments! The framework provides supervised methods like classification, regression and survival analysis along with their corresponding evaluation and optimization methods, as well as unsupervised methods like clustering. It is written in a way that you can extend it yourself or deviate from the implemented convenience methods and construct your own complex experiments or algorithms.
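As a small illustration of this unified interface, here is a minimal classification workflow on the built-in iris data. This is a sketch, not a complete analysis; `classif.rpart` assumes the rpart package is installed.

```r
library(mlr)

# Define a classification task on the built-in iris data
task <- makeClassifTask(data = iris, target = "Species")

# Pick a learner behind the unified interface (requires the rpart package)
lrn <- makeLearner("classif.rpart")

# Train on every second row, predict on the rest
train.set <- seq(1, nrow(iris), by = 2)
test.set <- seq(2, nrow(iris), by = 2)
mod <- train(lrn, task, subset = train.set)
pred <- predict(mod, task = task, subset = test.set)

# Evaluate mean misclassification error and accuracy
performance(pred, measures = list(mmce, acc))
```

Swapping in another algorithm only requires changing the learner name passed to `makeLearner()`; the task, training and evaluation code stay the same.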

Furthermore, the package is nicely connected to the OpenML R package and its online platform, which aims to support collaborative machine learning online and makes it easy to share datasets as well as machine learning tasks, algorithms and experiments in order to support reproducible research.

# Features

• Clear S3 interface to R classification, regression, clustering and survival analysis methods
• Abstract description of learners and tasks by properties
• Convenience methods and generic building blocks for your machine learning experiments
• Resampling methods like bootstrapping, cross-validation and subsampling
• Extensive visualizations (e.g. ROC curves, predictions and partial predictions)
• Simplified benchmarking across data sets and learners
• Easy hyperparameter tuning using different optimization strategies, including potent configurators like
  • iterated F-racing (irace)
  • sequential model-based optimization
• Variable selection with filters and wrappers
• Nested resampling of models with tuning and feature selection
• Cost-sensitive learning, threshold tuning and imbalance correction
• Wrapper mechanism to extend learner functionality in complex ways
• Possibility to combine different processing steps to a complex data mining chain that can be jointly optimized
• OpenML connector for the Open Machine Learning server
• Built-in parallelization
• Detailed tutorial
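To sketch how the resampling and tuning features above fit together, the following illustrative snippet grid-searches the `cp` parameter of a decision tree under 5-fold cross-validation. The grid values are arbitrary choices for demonstration; `iris.task` ships with mlr, and `classif.rpart` again assumes the rpart package is installed.

```r
library(mlr)

# Tune the complexity parameter of rpart over a small, arbitrary grid
ps <- makeParamSet(
  makeDiscreteParam("cp", values = c(0.01, 0.05, 0.1))
)
ctrl <- makeTuneControlGrid()

# 5-fold cross-validation as the tuning resampling strategy
rdesc <- makeResampleDesc("CV", iters = 5)

res <- tuneParams("classif.rpart", task = iris.task, resampling = rdesc,
                  par.set = ps, control = ctrl, measures = acc)
res$x  # the best value of cp found on the grid
```

The same `tuneParams()` call works with other search strategies (e.g. random search or irace) by swapping the control object, and the whole tuning step can be wrapped into a learner for nested resampling.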

# Miscellaneous

Simple usage questions are better suited to Stack Overflow using the mlr tag.

Please note that all of us work in academia and put a lot of work into this project, simply because we like it, not because we are paid for it.

New development efforts should go into mlr3. We have a developer guide and our own coding style, which can easily be applied by using the mlr_style from the styler package.

# mlr-tutorial

Please read here if you want to contribute to the Online Manual.