Event
Talk on Causal Inference for Robust, Reliable, and Responsible NLP
Location
Date
Type
Title
Causal Inference for Robust, Reliable, and Responsible NLP
Abstract
Despite the remarkable progress in large language models (LLMs), it is well known that natural language processing (NLP) models tend to fit spurious correlations, which can lead to unstable behavior under domain shifts or adversarial attacks. In my research, I develop a causal framework for robust and fair NLP, which investigates how well the causal mechanisms behind model decision-making align with those behind human decision-making. Under this framework, I develop a suite of stress tests for NLP models across various tasks, such as text classification, natural language inference, and math reasoning, and I propose to enhance robustness by aligning the model's learning direction with the underlying data-generating direction. Using this causal inference framework, I also test the validity of causal and logical reasoning in models, with implications for fighting misinformation, and I extend the impact of NLP by applying it to analyze the causality behind socially important phenomena, such as the causal analysis of policies and the measurement of gender bias. Together, this work forms a roadmap towards socially responsible NLP: ensuring the reliability of models and extending their impact to a range of social applications.
Bio
Zhijing Jin (she/her) is an incoming Assistant Professor at the University of Toronto and currently a postdoc at the Max Planck Institute in Germany. She works on causal formulations of NLP problems, AI safety in multi-agent LLMs, and AI for causal science. She has received three Rising Star awards, two Best Paper awards at NeurIPS 2024 workshops, two PhD fellowships, and a postdoc fellowship. Her work has been published at many NLP and AI venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICLR, AAAI) and featured in MIT News and ACM TechNews. She has co-organized many workshops (e.g., the NLP for Positive Impact Workshop at EMNLP 2024 and the Causal Representation Learning Workshop at NeurIPS 2024), and led the Tutorial on Causality for LLMs at NeurIPS 2024 and the Tutorial on CausalNLP at EMNLP 2022. To support diversity, she organizes the ACL Year-Round Mentorship program. More information can be found on her personal website: zhijing-jin.com
About the event
Lunch will be served at this event. After the talk, there will be an opportunity to meet with Zhijing Jin.
Please note that sign-up is required to attend.