Exploring LLM Capabilities to Explain Decision Trees
2024 · Publication
Research explores how Large Language Models can generate natural language explanations for decision tree predictions (Serafim et al., 2024), making tree-based reasoning accessible to non-expert users.
The study examines various textual representations and prompt engineering strategies, identifying strengths in LLMs as explainers while highlighting challenges in maintaining fidelity and coherence for complex tree structures.
This opens pathways for adaptive, user-friendly explanation generation that bridges formal decision logic with conversational interpretation.
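To make the idea concrete, here is a minimal sketch (not the paper's code) of one such textual representation: tracing a sample's decision path through a toy tree and assembling the tests into a prompt for an LLM explainer. The tree, feature names, and prompt wording are all illustrative assumptions.

```python
# Illustrative sketch: turn a decision tree's prediction path into a
# textual representation suitable for an LLM prompt. The tree structure,
# feature names, and prompt wording below are assumptions, not the paper's.

# A toy tree: internal nodes test "feature <= threshold"; leaves hold a class.
TREE = {
    "feature": "age", "threshold": 30,
    "left": {"feature": "income", "threshold": 50000,
             "left": {"leaf": "deny"},
             "right": {"leaf": "approve"}},
    "right": {"leaf": "approve"},
}

def decision_path(node, sample, steps=None):
    """Follow the sample through the tree, recording each test as text."""
    if steps is None:
        steps = []
    if "leaf" in node:
        return node["leaf"], steps
    feat, thr = node["feature"], node["threshold"]
    value = sample[feat]
    if value <= thr:
        steps.append(f"{feat} = {value} is at most {thr}")
        return decision_path(node["left"], sample, steps)
    steps.append(f"{feat} = {value} is greater than {thr}")
    return decision_path(node["right"], sample, steps)

def build_prompt(sample):
    """Assemble a natural-language explanation request from the path."""
    label, steps = decision_path(TREE, sample)
    path_text = "; ".join(steps)
    return (f"A decision tree predicted '{label}' because: {path_text}. "
            "Explain this decision in simple terms for a non-expert user.")

print(build_prompt({"age": 25, "income": 60000}))
```

The prompt string produced this way is one of the many possible textual representations the study compares; richer variants could serialize the full tree rather than a single path.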
References
Exploring Large Language Models Capabilities to Explain Decision Trees
Paulo Bruno Serafim, Pierluigi Crescenzi, Gizem Gezici, Eleonora Cappuccio, Salvatore Rinzivillo, and Fosca Giannotti
Decision trees are widely adopted in Machine Learning tasks due to their operational simplicity and interpretability. However, following the decision path taken by a tree can be difficult in complex scenarios or for users with no familiarity with them. Prior research showed that converting outcomes to natural language is an accessible way to facilitate understanding for non-expert users in several tasks. More recently, there has been a growing effort to use Large Language Models (LLMs) as a tool for generating natural language texts. In this paper, we examine the proficiency of LLMs in explaining decision tree predictions in simple terms through the generation of natural language explanations. By exploring different textual representations and prompt engineering strategies, we identify capabilities that strengthen LLMs as competent explainers, as well as potential challenges and limitations, opening further research possibilities on natural language explanations for decision trees.
@inbook{SGC2024,
  author    = {Serafim, Paulo Bruno and Crescenzi, Pierluigi and Gezici, Gizem and Cappuccio, Eleonora and Rinzivillo, Salvatore and Giannotti, Fosca},
  title     = {Exploring Large Language Models Capabilities to Explain Decision Trees},
  booktitle = {HHAI 2024: Hybrid Human AI Systems for the Social Good},
  publisher = {IOS Press},
  year      = {2024},
  month     = jun,
  doi       = {10.3233/faia240183},
  isbn      = {9781643685229},
  issn      = {1879-8314}
}