Collaboratory
Signals and decoding
One of the deepest problems in cognitive science is how we make sense of the vast amount of raw data constantly bombarding us from the environment. The key is selective processing of the input: attention is a basic perceptual mechanism for the selective decoding of complex signals. AI can support attention and the decoding of perceptual signals, and can in turn be used to make sense of the signals produced by a processing brain.
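To make attention as a mechanism for selective decoding concrete, here is a minimal sketch of scaled dot-product attention in Python with NumPy. It illustrates the general mechanism only, not the collaboratory's models; the toy signal segments, dimensions, and names are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention: weight signal segments by relevance.

    query:  (d,)   what the decoder is currently 'looking for'
    keys:   (n, d) one descriptor per signal segment
    values: (n, m) the segment contents to be selectively combined
    """
    scores = keys @ query / np.sqrt(len(query))  # relevance of each segment
    weights = softmax(scores)                    # attention distribution
    return weights @ values, weights             # weighted summary + weights

# Toy example: 5 signal segments, one of which matches the query direction.
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))
values = rng.normal(size=(5, 3))
query = keys[2] + 0.1 * rng.normal(size=8)  # query resembles segment 2

summary, weights = attention(query, keys, values)
print(np.round(weights, 3))  # weight mass typically concentrates on segment 2
```

The attention weights concentrate on the segment whose key matches the query, which is precisely the selective sorting of the input described above.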
Building on statistical modeling of signal-processing pipelines and large-scale experimental approaches, this collaboratory will make foundational contributions to three of the centre's basic research themes:
- Explainability: Explainability methods for interactive systems that predict responses to real-time interventions in biomedical systems.
- Self-supervised learning: New tools for deep learning in highly non-stationary domains, based on self-supervised ensembles, and quantification of the epistemic uncertainty that remains after self-supervised learning (see the ensemble sketch after this list).
- Novelty detection: Analysis of multi-level novelty detection in large-scale deployments of biomedical deep learning systems; design, modeling, and evaluation of robust dynamical systems in domains with strong anomalies; and explainability methods for deep outlier detection.
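The self-supervised learning and novelty detection themes both lean on quantifying epistemic uncertainty in deployed deep learning systems. The sketch below is a loose illustration under simplifying assumptions, not the collaboratory's pipeline: it trains a plain supervised deep ensemble (where the collaboratory targets self-supervised ensembles) and uses disagreement between independently trained members as an epistemic-uncertainty score that also flags novel, out-of-distribution inputs. The dataset, model sizes, and threshold are invented for the toy.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy 1-D regression task: training inputs cover only x in [-2, 2].
rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=(200, 1))
y_train = np.sin(2 * x_train[:, 0]) + 0.1 * rng.normal(size=200)

# Deep ensemble: independently initialized networks trained on the same data.
# Disagreement between members is a standard proxy for epistemic uncertainty.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed)
    .fit(x_train, y_train)
    for seed in range(5)
]

def predict_with_uncertainty(x):
    """Return ensemble mean and standard deviation (epistemic proxy)."""
    preds = np.stack([member.predict(x) for member in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

# Baseline disagreement on the training distribution.
baseline = predict_with_uncertainty(x_train)[1].mean()

# Inputs inside and far outside the training range.
x_test = np.array([[0.5], [1.5], [6.0], [9.0]])
mean, std = predict_with_uncertainty(x_test)
for xi, m, s in zip(x_test[:, 0], mean, std):
    label = "novel" if s > 3 * baseline else "in-distribution"
    print(f"x={xi:5.1f}  prediction={m:6.2f}  ensemble std={s:.3f}  -> {label}")
```

Out-of-range inputs typically produce large member disagreement, so the same score serves both uncertainty quantification and a simple form of novelty detection; a self-supervised variant would derive the ensemble from pretext tasks rather than labels.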
Our People
- Chun Kit Wong, PhD student, Technical University of Denmark
- Gustav Wagner Zakarias, PhD Fellow, Pioneer Centre for AI (P1), Aalborg University
- Kazu Fukuda (Ghalamkari), Postdoc, Technical University of Denmark
- Lars Kai Hansen, P1 Collaboratory Co-Lead and Professor, Technical University of Denmark
- Robert Jenssen, Professor, UiT The Arctic University of Norway, Visual Intelligence
- Sarthak Yadav, PhD Fellow, Aalborg University
- Sebastian Weichwald, Tenure-track Assistant Professor, University of Copenhagen, European Laboratory for Learning and Intelligent Systems (ELLIS), Danish Data Science Academy (DDSA)