Both LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) have become standard tools in the AI explainability toolkit. Both produce local explanations — explanations for individual predictions rather than the model as a whole. Both are model-agnostic, meaning they work with any underlying model architecture. But they work very differently, have different theoretical properties, and are better suited to different use cases.
How LIME Works
LIME generates explanations by constructing a local linear approximation of the model's behavior around a specific prediction. It perturbs the input, observes how the model's output changes, and fits a simple linear model to these input-output pairs, weighting perturbed samples by their proximity to the original instance. The coefficients of that linear model are the explanation: they indicate which features pushed the prediction in which direction, and they are faithful only in the neighborhood of the explained instance. LIME is fast and intuitive, and it works with any modality: tabular data, text, and images.
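The core loop above (perturb, query, weight by proximity, fit a linear surrogate) can be sketched in a few lines. This is a minimal from-scratch illustration, not the `lime` library's implementation; the toy model, noise scale, and kernel width are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function of two features.
def model(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

def lime_explain(x, model, n_samples=5000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x."""
    # 1. Perturb the input with Gaussian noise around x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = model(Z)
    # 2. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: intercept plus per-feature slopes.
    A = np.hstack([np.ones((n_samples, 1)), Z - x])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # the local slopes are the explanation

x = np.array([0.1, 1.0])
weights = lime_explain(x, model)
```

The returned slopes approximate the model's local gradient at `x`; their signs and magnitudes are what a LIME explanation reports for each feature.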
How SHAP Works
SHAP is grounded in cooperative game theory. It computes the Shapley value for each feature: the average marginal contribution of that feature across all possible subsets of the remaining features. SHAP satisfies several desirable mathematical properties that LIME does not guarantee: efficiency (attributions sum to the difference between the prediction and the baseline, i.e. expected, prediction), consistency (if the model changes so that a feature's marginal contribution increases or stays the same, its attribution never decreases), and dummy (features the model never uses receive zero attribution). These properties make SHAP explanations theoretically sound and comparable across predictions.
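For a handful of features, Shapley values can be computed exactly by enumerating all subsets, which also makes the efficiency property easy to verify. A small sketch, assuming a toy model with an interaction term and a zero baseline for "absent" features (exact enumeration is exponential, so real SHAP implementations approximate or exploit model structure):

```python
import itertools
import math

# Toy model with an interaction term between features 0 and 2.
def f(x):
    return 2.0 * x[0] + 1.0 * x[1] + 3.0 * x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values; absent features are set to the baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Model output with coalition S, with and without feature i.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(f, x, baseline)
# Efficiency: attributions sum to f(x) - f(baseline) = 13.0.
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
```

Note how the 3.0 * x[0] * x[2] interaction (worth 9.0 here) is split equally between features 0 and 2, as symmetry requires, while feature 1's attribution is exactly its additive term.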
When to Use LIME
LIME is preferable when you need fast, cheap explanations for high-throughput systems; when you are working with image or text data where SHAP's computational cost is prohibitive; and when your audience is non-technical and the approximate nature of LIME explanations is acceptable. LIME's flexibility makes it a good choice for exploratory analysis and debugging.
When to Use SHAP
SHAP is preferable when you need explanations that will be reviewed by regulators or legal counsel; when you need to compare feature importance across many predictions or aggregate explanations across a population (Shapley values share the model's output units, so they aggregate meaningfully); when you are using tree-based models (TreeSHAP computes exact Shapley values in polynomial time, making SHAP competitive with LIME on cost for these models); and when you need the consistency guarantee, so attributions remain comparable even as the model changes. For compliance use cases, SHAP's mathematical guarantees make it the preferred choice.
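The population-level aggregation mentioned above is typically done by averaging absolute Shapley values per feature. A minimal sketch with a hypothetical matrix of per-prediction SHAP values (one row per prediction, one column per feature, e.g. as produced by a TreeSHAP explainer):

```python
import numpy as np

# Hypothetical SHAP values for three predictions over three features.
shap_values = np.array([
    [ 0.40, -0.10, 0.02],
    [ 0.35,  0.20, -0.01],
    [-0.50,  0.15, 0.03],
])

# Because Shapley values share units (the model's output scale) and sum
# to each prediction's deviation from the baseline, mean(|phi|) per
# column is a standard global importance measure.
global_importance = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(global_importance)[::-1]  # most important first
```

This kind of aggregation is exactly what is harder to justify with LIME, whose coefficients come from independently fitted local surrogates.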
AIClarum's Recommendation
AIClarum uses TreeSHAP for tree-based models (where it is both faster than LIME and more accurate), SHAP with background sampling for neural networks (balancing accuracy and cost), and LIME as a fast fallback for very high-throughput scenarios where approximate explanations are acceptable. Our platform stores all explanation methods' outputs and can produce either format on demand for different regulatory and audit contexts.
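The selection policy described above amounts to a small dispatch rule. A hypothetical routing sketch (the names and thresholds are illustrative, not AIClarum's actual API):

```python
from dataclasses import dataclass

@dataclass
class ExplainRequest:
    model_family: str        # e.g. "tree", "neural"
    latency_budget_ms: float # per-explanation budget

def choose_explainer(req: ExplainRequest) -> str:
    """Pick an explanation method by model type and latency budget."""
    if req.model_family == "tree":
        return "treeshap"         # exact and fast for tree ensembles
    if req.latency_budget_ms < 50:
        return "lime"             # cheap approximate fallback
    return "shap_background"      # sampled SHAP for other model types

method = choose_explainer(ExplainRequest("tree", 10.0))
```

Storing outputs from every method, as the platform does, means this routing only decides what is computed eagerly; any format can still be produced on demand for audit.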
