An Actionability Assessment Tool for Explainable AI

In this paper, we introduce and evaluate a tool for researchers and practitioners to assess the actionability of information provided to users to support algorithmic recourse. While there are clear benefits of recourse from the user's perspective, the notion of actionability in explainable AI research remains vague, and claims of 'actionable' explainability techniques rest on the researchers' intuition. Inspired by definitions and instruments for assessing actionability in other domains, we construct a seven-question tool and evaluate its effectiveness through two user studies. We show that the tool discriminates actionability across explanation types and that these distinctions align with human judgements. We also show the impact of context on actionability assessments, suggesting that domain-specific tool adaptations may foster more human-centred algorithmic systems. This is a significant step forward for research and practice in actionable explainability and algorithmic recourse, providing the first clear human-centred definition of, and tool for assessing, actionability in explainable AI.
Reference Key
dourish2024an
Authors Ronal Singh; Tim Miller; Liz Sonenberg; Eduardo Velloso; Frank Vetere; Piers Howe; Paul Dourish
Journal arXiv
Year 2024
