Exploring LLM Capabilities to Explain Decision Trees




This research explores how large language models (LLMs) can generate natural-language explanations for decision tree predictions (Serafim et al., 2024), making tree-based reasoning accessible to non-expert users.

The study examines various textual representations of trees and prompt engineering strategies, identifying strengths of LLMs as explainers while highlighting challenges in maintaining fidelity and coherence for complex tree structures.
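As one concrete illustration (a sketch, not the paper's exact pipeline), a trained decision tree can be serialized into a rule-style textual representation with scikit-learn's `export_text` and embedded in a prompt for an LLM explainer; the prompt wording below is a hypothetical example:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small decision tree on a standard dataset.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# One possible textual representation: scikit-learn's rule-style export,
# which renders the tree as nested if/else conditions on feature thresholds.
tree_text = export_text(clf, feature_names=list(iris.feature_names))

# A hypothetical prompt wrapping the tree text for an LLM explainer.
prompt = (
    "You are given a decision tree in textual form:\n\n"
    f"{tree_text}\n"
    "Explain in plain language how this tree would classify a flower "
    "with petal width 1.2 cm, citing the conditions along the path."
)
print(prompt)
```

Other representations studied in this line of work (e.g. nested-dictionary or natural-language paraphrases of the tree) would replace the `export_text` step while keeping the same prompting structure.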

This opens pathways for adaptive, user-friendly explanation generation that bridges formal decision logic with conversational interpretation.


References

2024

  1. Exploring Large Language Models Capabilities to Explain Decision Trees
    Paulo Bruno Serafim, Pierluigi Crescenzi, Gizem Gezici, Eleonora Cappuccio, Salvatore Rinzivillo, et al.
    Jun 2024