Subject
Graph-based deep learning models such as Graph Neural Networks (GNNs) and graph transformers have shown remarkable success in various domains, including social network analysis, drug discovery, and recommendation systems. However, their predictions often lack interpretability and are hard to visualize. The complexity of GNNs, combined with the non-Euclidean nature of graph data, poses unique challenges in visualizing predictions and explaining model decisions. This master thesis proposal aims to develop novel visualization tools that enhance the human interpretability of graph deep learning predictions. Specifically, it will explore interactive methods for visualizing node and edge predictions. The goal is to design a framework that provides clear, interactive visualizations of model predictions.
Kind of work
The student will investigate visualization approaches for graph-based predictions. This includes:
• Understanding GNN architectures and their training processes, including exploring explainability methods such as Grad-CAM, attention mechanisms, and graph saliency maps.
• Developing interactive visualization tools to display model predictions and explanations.
• Evaluating the effectiveness of these tools through user studies with human participants.
The project will employ real-world graph datasets from domains such as citation networks, molecular graphs, or geoscience. The visualization framework should be implemented in Python, using libraries such as PyTorch Geometric and NetworkX, together with visualization tools like D3.js or Plotly; a minimal illustrative sketch of such a pipeline is given below.
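To give a concrete impression of the kind of tool envisioned, the sketch below trains a small GCN on Zachary's karate club graph (a stand-in for the real datasets above; the architecture and hyperparameters are arbitrary example choices, not part of the proposal) and renders its node-level predictions as an interactive Plotly figure. It assumes torch, torch_geometric, networkx, and plotly are installed.

import networkx as nx
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
import plotly.graph_objects as go

# Build the graph and a binary label per node (the two factions).
G = nx.karate_club_graph()
y = torch.tensor([0 if G.nodes[i]["club"] == "Mr. Hi" else 1 for i in G.nodes])
# Undirected edges are duplicated in both directions for message passing.
edge_index = torch.tensor(list(G.to_directed().edges()),
                          dtype=torch.long).t().contiguous()
x = torch.eye(G.number_of_nodes())  # one-hot node features as a placeholder

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(x.size(1), 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):  # short training run, sufficient for illustration
    opt.zero_grad()
    loss = F.cross_entropy(model(x, edge_index), y)
    loss.backward()
    opt.step()

# Predicted class and confidence per node.
prob = F.softmax(model(x, edge_index), dim=1).detach()
pred = prob.argmax(dim=1)

# Lay out the graph and draw edges and nodes; hovering a node shows the
# predicted class and its probability.
pos = nx.spring_layout(G, seed=42)
edge_x, edge_y = [], []
for u, v in G.edges():
    edge_x += [pos[u][0], pos[v][0], None]
    edge_y += [pos[u][1], pos[v][1], None]
fig = go.Figure()
fig.add_trace(go.Scatter(x=edge_x, y=edge_y, mode="lines",
                         line=dict(color="#cccccc"), hoverinfo="none"))
fig.add_trace(go.Scatter(
    x=[pos[i][0] for i in G.nodes], y=[pos[i][1] for i in G.nodes],
    mode="markers", marker=dict(size=12, color=pred.tolist()),
    hovertext=[f"node {i}: class {pred[i].item()} "
               f"(p={prob[i, pred[i]].item():.2f})" for i in G.nodes],
    hoverinfo="text"))
fig.write_html("gnn_predictions.html")  # open in a browser to explore

The thesis would extend this basic pattern with explanation overlays (e.g. saliency-weighted edges) and richer interaction.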
Framework of the Thesis
• Graph Deep Learning: https://distill.pub/2021/gnn-intro/
• Visualization for AI, applied to graph structures: https://neptune.ai/blog/deep-learning-visualization
Number of Students
1
Expected Student Profile
• Strong knowledge of machine learning, AI, and deep learning.
• Good understanding of graph structures and graph neural networks.
• Experience in Python programming, including familiarity with PyTorch and visualization libraries.
• Interest in explainable AI and human-centered AI design.