Deep Dive on Accumulated Local Effect Plots (ALEs) with Python
Intuition, algorithm and code for using ALEs to explain machine learning models

Highly correlated features can wreak havoc on your model interpretations. They violate the assumptions of many XAI methods, and they make it difficult to understand the nature of a feature’s relationship with the target. At the same time, it is not always possible to remove them without hurting performance. We need a method that can provide clear interpretations even in the presence of multicollinearity. Thankfully, we can rely on accumulated local effect plots (ALEs) [1].
ALEs are a global interpretation method. Like PDPs, they show the trends captured by the model. That is, they show whether a feature has a linear, non-linear, or no relationship with the target variable. However, we will see that the method used to identify these trends is quite different. We will:
- Give you the intuition for how ALEs are created.
- Formally define the algorithm used to create ALEs.
- Apply ALEs using the Alibi Explain package (see the preview sketch after this list).
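
As a quick preview of that last step, the sketch below shows the typical Alibi Explain workflow. It is a minimal example on synthetic data, not the abalone model we build later: the random forest, the feature matrix `X`, and the feature names are all stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from alibi.explainers import ALE, plot_ale

# Toy data standing in for a real dataset (placeholder, not abalone)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# Any model with a prediction function will do
model = RandomForestRegressor().fit(X, y)

# ALE wraps the prediction function, not the model object itself
ale = ALE(model.predict, feature_names=["f0", "f1", "f2"])
exp = ale.explain(X)  # compute the ALE curve for every feature
plot_ale(exp)         # one panel per feature
```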
We will see that, unlike other XAI methods such as SHAP, LIME, ICE plots, and Friedman’s H-statistic, ALEs give interpretations that are robust to multicollinearity.
You may also enjoy this video on the topic. And, if you want to learn more, check out my course — XAI with Python. You can get free access if you sign up to my newsletter.
Understanding ALEs
We will use the abalone dataset to understand how ALEs work. Abalone is a shellfish delicacy. We want to predict the number of rings in an abalone’s shell using features like shell weight and shucked weight (the weight of the meat). Figure 1 shows the correlation heatmap for all the numerical features in this dataset. We are dealing with some highly correlated features!
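
If you want to reproduce this step, here is a minimal sketch. It assumes the standard UCI distribution of the abalone data (no header row, with the column names given below); the exact styling of Figure 1 may differ.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the abalone dataset from the UCI repository
# (the file has no header row, so we supply the column names)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data"
cols = ["sex", "length", "diameter", "height", "whole weight",
        "shucked weight", "viscera weight", "shell weight", "rings"]
data = pd.read_csv(url, names=cols)

# Correlation heatmap for the numerical features
corr = data.drop(columns="sex").corr()
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", center=0)
plt.show()
```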