Understanding model interpretability in R with ggplot2 and mikropml (CC134)

August 5, 2021 • PD Schloss • 1 min read

The interpretability of a machine learning model tends to vary with the performance of the model. How much you need to interpret your model depends on what you hope to do with it. In this Code Club, Pat shows how you can extract interpretability data from models built with mikropml and visualize the importance of the features used in the model.

Pat will use functions from the mikropml R package and the ggplot2 and dplyr packages in RStudio.
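As a rough sketch of the kind of workflow the episode covers (the dataset and options Pat uses in the video may differ; otu_mini_bin is the small example dataset that ships with mikropml, and the feature-name column is "feat" in recent package versions, "names" in older ones):

library(mikropml)
library(dplyr)
library(ggplot2)

# Train a model on otu_mini_bin, the small example dataset bundled with
# mikropml, and ask run_ml() to compute permutation-based feature importance
results <- run_ml(otu_mini_bin,
                  method = "glmnet",
                  outcome_colname = "dx",
                  find_feature_importance = TRUE,
                  seed = 2019)

# feature_importance is a data frame with one row per feature, including
# how much the performance metric drops when that feature is permuted
importance <- results$feature_importance

# Order the features by the drop in performance and plot them
importance %>%
  mutate(feat = reorder(feat, perf_metric_diff)) %>%
  ggplot(aes(x = perf_metric_diff, y = feat)) +
  geom_col() +
  labs(x = "Drop in performance when feature is permuted",
       y = NULL) +
  theme_classic()

Features with the largest drop in the performance metric are the ones the model leans on most heavily, which is what the importance plot is meant to highlight.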

Code

You can browse the state of the repository at the

Installations

If you haven’t been following along, you can get caught up by doing the following:
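A minimal setup sketch, assuming you are installing the released packages from CRAN (the episode may also point you to a project repository to clone):

# Install the packages used in this episode from CRAN
install.packages("mikropml")
install.packages("tidyverse")   # provides ggplot2 and dplyr

# Load them to confirm the installation worked
library(mikropml)
library(tidyverse)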