Exploring the Balance between Interpretability and Performance with Carefully Designed Constrainable Neural Additive Models

19 Pages
Posted: 7 Dec 2022


Ettore Mariotti

Universidade de Santiago de Compostela

Albert Gatt

Utrecht University

Jose Maria Alonso Moral

Universidade de Santiago de Compostela

Abstract

The interpretability of an intelligent model automatically derived from data is a property that can be acted upon through a set of structural constraints to which such a model should adhere. These constraints often conflict with the task objective, and it is not straightforward to explore the balance between model interpretability and performance. To allow an interested user to jointly optimise performance and interpretability, we propose a new formulation of Neural Additive Models (NAM) that can be subjected to a number of constraints. The resulting model, called Constrainable NAM (CNAM for short), allows the specification of different regularisation terms. CNAM is differentiable and is built so that it can be initialised from the solution of an efficient tree-based GAM solver (e.g., Explainable Boosting Machines). From this local optimum, the model can then explore solutions with different interpretability-performance tradeoffs according to different definitions of both interpretability and performance. We empirically benchmark the model on 56 datasets against 12 models and observe that, on average, the proposed CNAM model ranks on the Pareto front of optimal solutions, i.e., models generated by CNAM exhibit a good balance between interpretability and performance. Moreover, we provide two illustrative examples which show, step by step, how CNAM performs well on classification tasks and how it can yield insights on regression tasks.
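To make the abstract's idea concrete, the sketch below shows a NAM-style additive model in PyTorch in which an interpretability-oriented regulariser is simply added to the task loss, so that its weight traces out different interpretability-performance tradeoffs. This is a minimal illustration under our own assumptions: the class names (FeatureNet, AdditiveModelSketch), the sparsity penalty, and all hyperparameters are hypothetical and are not taken from the paper's CNAM implementation or its EBM initialisation.

```python
# Hypothetical sketch of a NAM-style additive model with an added
# regularisation term; names and the penalty are illustrative only.
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small MLP mapping a single scalar feature to its additive contribution."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)


class AdditiveModelSketch(nn.Module):
    """Sum of per-feature shape functions plus a bias (a NAM-style GAM)."""

    def __init__(self, n_features: int):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet() for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        # One column per shape function: (batch, n_features)
        contributions = torch.cat(
            [f(x[:, i : i + 1]) for i, f in enumerate(self.feature_nets)], dim=1
        )
        return contributions.sum(dim=1) + self.bias, contributions


def sparsity_penalty(contributions: torch.Tensor) -> torch.Tensor:
    """Illustrative interpretability regulariser: penalise each feature's mean
    absolute contribution, pushing unneeded shape functions towards zero."""
    return contributions.abs().mean(dim=0).sum()


if __name__ == "__main__":
    # The regulariser weight (here 0.01) controls where the fitted model
    # sits on the interpretability-performance tradeoff.
    model = AdditiveModelSketch(n_features=5)
    x, y = torch.randn(64, 5), torch.randn(64)
    pred, contrib = model(x)
    loss = nn.functional.mse_loss(pred, y) + 0.01 * sparsity_penalty(contrib)
    loss.backward()
```

Because the whole objective is differentiable, other constraints (e.g., smoothness or monotonicity-style penalties on the per-feature contributions) could be swapped in the same way; the paper's specific regularisation terms are described in the full text.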

Keywords: Generalised Additive Models, Explainable Artificial Intelligence, Interpretable Modeling, Neural Additive Models, Interpretability, Explainability

Suggested Citation

Mariotti, Ettore and Gatt, Albert and Alonso Moral, Jose Maria, Exploring the Balance between Interpretability and Performance with Carefully Designed Constrainable Neural Additive Models. Available at SSRN: https://ssrn.com/abstract=4288260 or http://dx.doi.org/10.2139/ssrn.4288260

Ettore Mariotti (Contact Author)

Universidade de Santiago de Compostela ( email )

Complejo Docente - Campus Universitario de Lugo
Lugo, 15704
Spain

Albert Gatt

Utrecht University ( email )

Vredenburg 138
Utrecht, 3511 BG
Netherlands

Jose Maria Alonso Moral

Universidade de Santiago de Compostela ( email )

Complejo Docente - Campus Universitario de Lugo
Lugo, 15704
Spain


Paper statistics

Downloads: 35
Abstract Views: 292