A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods

Vilone, Giulia and Longo, Luca (2021) A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212


Abstract

Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their inputs and outputs. These relationships can be represented as a set of inference rules. However, models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven, machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by five model-agnostic, post-hoc rule extractors by employing eight quantitative metrics. The Friedman test was then employed to check whether any method consistently performed better than the others, in terms of the selected metrics, and could be considered superior. The findings demonstrate that these metrics do not provide sufficient evidence to identify one method as superior to the others. However, when used together, these metrics form a tool, applicable to any rule-extraction method and machine-learned model, that is suitable for highlighting the strengths and weaknesses of rule extractors across applications in an objective and straightforward manner, without any human intervention. Thus, they are capable of modelling distinct aspects of explainability, providing researchers and practitioners with vital insights into what a model has learned during its training process and how it makes its predictions.
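To illustrate the comparison procedure described in the abstract, the following is a minimal sketch (not the authors' code) of applying the Friedman test to scores of several rule-extraction methods measured on several metrics. The method names and score values are hypothetical placeholders; only the overall shape of the analysis, eight metric scores per method and a Friedman test across methods, follows the abstract.

# Minimal sketch: Friedman test across rule-extraction methods.
# All method names and scores below are hypothetical placeholders.
from scipy.stats import friedmanchisquare

# One entry per rule extractor; each list holds that method's scores
# on eight evaluation metrics (assumed here to share a common 0-1 scale).
scores = {
    "method_A": [0.71, 0.64, 0.58, 0.80, 0.62, 0.55, 0.49, 0.73],
    "method_B": [0.68, 0.70, 0.61, 0.77, 0.59, 0.57, 0.52, 0.69],
    "method_C": [0.66, 0.62, 0.65, 0.74, 0.64, 0.51, 0.50, 0.71],
    "method_D": [0.70, 0.66, 0.60, 0.79, 0.61, 0.54, 0.48, 0.70],
    "method_E": [0.69, 0.63, 0.59, 0.76, 0.60, 0.56, 0.51, 0.72],
}

# Friedman test: non-parametric test for consistent ranking differences
# among related samples (the same metrics measured for every method).
statistic, p_value = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {statistic:.3f}, p = {p_value:.3f}")

# A large p-value indicates no method consistently outranks the others
# across the metrics, which mirrors the conclusion reported in the abstract.

In this setup, a significant result would justify a post-hoc pairwise comparison; a non-significant one, as reported in the paper, suggests the metrics should be read jointly as a diagnostic profile rather than used to rank methods.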

Item Type: Article
Subjects: Academic Digital Library > Multidisciplinary
Depositing User: Unnamed user with email info@academicdigitallibrary.org
Date Deposited: 19 Dec 2022 12:50
Last Modified: 13 Sep 2023 08:12
URI: http://publications.article4sub.com/id/eprint/19
