Last Fridays Talks: Causality and Explainability
Last Fridays Talks
On the last Friday of each month, we host the Last Fridays Talks, where one of our seven Collaboratories presents insights from its current work. Join us for a discussion of results and recent papers, followed by socializing for everyone who wishes to attend.
Talk 1
Aligning AI with Humans: Exploring Representation Spaces
Abstract
The integration of machine learning into everyday life is accelerating, yet ensuring these systems are safe, fair, and aligned with human values remains a challenge. In this talk, I will discuss how concepts can be used to evaluate alignment between human and machine representations from different perspectives. Humans here are represented by user studies (such as the odd-one-out task), knowledge graphs (like Wikidata), and findings from cognitive science (focusing on the convexity of concepts). These approaches help us understand whether machines “comprehend” the world in ways similar to humans.
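As a rough illustration of how an odd-one-out task can probe a model's representation space, here is a minimal sketch (my own toy example with assumed cosine-similarity embeddings, not the speaker's actual experimental setup): the model's "odd one out" is the item least similar to the other two, and comparing these picks with human choices gives a simple alignment score.

```python
import numpy as np

def odd_one_out(embeddings: np.ndarray) -> int:
    """Given a (3, d) array of item embeddings, return the index of the
    item least similar (by cosine) to the other two."""
    # Normalize rows so dot products are cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    # Each item's total similarity to the others (exclude self-similarity).
    totals = sim.sum(axis=1) - np.diag(sim)
    return int(np.argmin(totals))

# Toy triplet: two similar vectors and one outlier.
triplet = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.5]])
print(odd_one_out(triplet))  # -> 2: the third item stands out
```

Agreement between such model picks and human picks over many triplets is one simple way to quantify human-machine alignment.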
Speaker
Lenka Tětková is a postdoc at the Section for Cognitive Systems, DTU Compute. Her research interests include concept-based explainability, exploring representations in hidden layers, human-machine alignment, and translating theory from cognitive science into the context of machine learning. She recently defended her PhD at DTU Compute under the supervision of Professor Lars Kai Hansen; her thesis focused on enhancing and explaining AI for biological data.
Talk 2
A gadjid for CausalDisco: Metrics to Advance Causal Structure Learning
Abstract
In this talk, I will present gadjid, a novel framework for developing and implementing causal distances between graphs. These distances yield new success metrics for the causal structure learning task. Causal structure learning is a difficult task that has arguably not yet seen a real breakthrough. Our recent work highlights shortcomings in current benchmarking practices, which may hinder progress. To address these shortcomings, we provide baseline methods in our toolbox CausalDisco and the new adjustment identification distances developed in gadjid. After outlining the motivation for our research, I will illustrate how naive graph distances fail to reflect causal implications and sketch the algorithmic contributions that enabled us to reduce the computational complexity of computing our causal graph distances, in some cases from hours to milliseconds.
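To make concrete why a naive graph distance can fail to reflect causal implications, here is a small sketch (my own illustration, not the gadjid implementation) of the structural Hamming distance (SHD), a common naive metric: reversing a single edge costs only one unit of SHD, yet it can turn a chain into a collider and change the causal story entirely.

```python
import numpy as np

def shd(a: np.ndarray, b: np.ndarray) -> int:
    """Structural Hamming distance between two DAG adjacency matrices
    (a[i, j] == 1 means an edge i -> j): the number of edge positions
    where the graphs disagree, counting a reversed edge once."""
    # Skeleton differences: an edge present in one graph but not the other.
    skel_a, skel_b = a + a.T > 0, b + b.T > 0
    added_or_removed = int(np.triu(skel_a ^ skel_b).sum())
    # Reversals: both skeletons share the edge, but orientations differ.
    reversed_edges = int(np.triu((skel_a & skel_b) & (a != b)).sum())
    return added_or_removed + reversed_edges

# True graph: X -> Y -> Z (chain). Guess: X -> Y <- Z (collider at Y).
true_dag = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
guess = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]])
print(shd(true_dag, guess))  # -> 1, yet the causal implications differ sharply
```

In the chain, Z is a descendant of X; in the collider, it is not, so the two graphs imply very different valid adjustment sets even though the SHD is only 1. Distances like the adjustment identification distances in gadjid are designed to capture exactly this kind of causal disagreement.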
Speaker
Sebastian Weichwald is an Associate Professor at the Copenhagen Causality Lab and the Department of Mathematical Sciences, University of Copenhagen. At the Pioneer Centre for AI, he co-leads the Causality and Explainability collaboratory. Sebastian specializes in pragmatic causal modeling, aiming to bridge the gap between statistical causal inference and its practical applications; one example is the winning solutions his team developed for the NeurIPS Causality 4 Climate competition. His research also focuses on the conceptual foundations of causal representation learning and causal discovery.
Sign up to join in person or online.