Leveraging Human-Centered Machine Learning to Create More Explainable Machine Learning Models for Temporal Data


Date: November 21, 2024


Presenter: Bahavathy Kathirgamanathan, visiting postdoc at Fraunhofer IAIS

Location: Officine Garibaldi


Abstract: Involving humans at every stage of developing a machine learning model is crucial for making AI systems more human-centric, yet it is still common to build ML models in a purely data-driven manner. Expert knowledge can help derive features that are more expressive than the original data and that align better with the human mental model, thus enhancing the interpretability and trustworthiness of the resulting ML models. We iteratively build machine learning models, using visualisations to understand both the overall phenomenon and the model's properties. A particular focus is placed on time series data, for which naturally explaining the features remains a challenge due to the nature of the data. Furthermore, explanations on time series are often provided in a manner that is only interpretable by those with expertise in reading time series data. Temporal abstraction techniques can be used to derive more domain-meaningful features. Developing these more meaningful features from the temporal data improves both the modelling and the explanations, since domain-meaningful concepts can be presented to the end user.
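As a rough illustration of the kind of temporal abstraction the talk refers to (not the presenter's actual method), the sketch below segments a raw time series and maps each segment's mean onto symbolic states. The window size, thresholds, and labels are hypothetical placeholders that a domain expert would choose in practice.

```python
# Minimal temporal-abstraction sketch: piecewise aggregation followed by
# symbolic labelling. All thresholds and label names are illustrative
# assumptions, not values from the talk.

def abstract_series(values, window=4,
                    labels=(("low", 0.5), ("medium", 1.5), ("high", float("inf")))):
    """Split the series into fixed windows and label each window's mean."""
    states = []
    for start in range(0, len(values), window):
        segment = values[start:start + window]
        mean = sum(segment) / len(segment)
        # Assign the first label whose upper bound exceeds the segment mean.
        for label, upper in labels:
            if mean < upper:
                states.append(label)
                break
    return states

# Hypothetical sensor readings: three regimes of four samples each.
readings = [0.2, 0.3, 0.1, 0.4, 1.0, 1.2, 0.9, 1.1, 2.0, 2.2, 2.1, 1.9]
print(abstract_series(readings))  # -> ['low', 'medium', 'high']
```

A sequence of states such as "low, medium, high" can then serve as a model feature and appear verbatim in an explanation, which is easier for a domain expert to read than raw sample values.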