Project: Interactive Visualization for Interpretable Machine Learning

Description

Master project with Dennis Collaris

With the availability of large amounts of data, Machine Learning is becoming increasingly relevant for businesses: it allows them to make sense of their data and to make predictions about new, unseen data. Many recent advances in the field have led to exceptional performance on standard classification tasks. However, the most successful AI techniques in terms of predictive accuracy are usually applied in a black-box manner: only the input (data) and output (predictions) are considered, while the inner workings of these models are deemed too complex to understand.

This lack of transparency can be a major drawback in applications such as fraud detection, illness diagnosis, or bankruptcy prediction. In such cases, an explanation of a model's behavior can be vital to establishing trust in uncertain predictions. Explanations allow us to qualitatively ascertain whether desiderata such as fairness, privacy, reliability, robustness, causality, usability, and trust are met.

My research focuses on enabling experts to understand the predictions of complex machine learning models. This is achieved by developing visualization techniques that combine recent advances in model explanation with high-level visual analytics solutions.
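To give a flavor of what a model explanation technique computes, the following is a minimal sketch of permutation importance, one common model-agnostic explanation method (the project itself is not limited to this technique). All names, the toy `black_box_predict` model, and the synthetic dataset are illustrative assumptions, not part of the project description.

```python
import random

# Hypothetical black-box model: in practice this would be a trained
# classifier whose internals are opaque; here it is a stand-in rule.
def black_box_predict(row):
    return 1 if row[0] > 0.5 else 0

# Tiny synthetic dataset: feature 0 is informative, feature 1 is noise.
random.seed(0)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows, labels):
    return sum(black_box_predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature):
    """Importance = drop in accuracy after shuffling one feature column."""
    baseline = accuracy(rows, labels)
    column = [r[feature] for r in rows]
    random.shuffle(column)
    permuted = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(rows, column)]
    return baseline - accuracy(permuted, labels)

print(permutation_importance(data, labels, 0))  # large drop: informative feature
print(permutation_importance(data, labels, 1))  # no drop: unused noise feature
```

Explanations like these assign a score to each input feature; visual analytics tools can then present such scores interactively so that experts can inspect and compare them across predictions.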

For more information about my work, check out https://explaining.ml.

Details
Supervisor
Dennis Collaris