Trustworthy Artificial Intelligence Lab

Welcome to the Trustworthy Artificial Intelligence Lab (TAIL). Broadly speaking, our research concerns models and algorithms that are not only accurate and efficient, but also robust, uncertainty-aware, privacy-preserving, fair, and interpretable. In short, our goal is to make machine learning models trustworthy. To increase trust, we study how to provide guarantees, e.g. robustness certificates and conformal prediction sets.

One focus area of our research is trustworthy graph-based models such as graph neural networks. We like graphs because graph data is everywhere: neural connections in the brain, social networks, interactions between proteins, molecules, code, the structure of the web, and much more.

News

  • Together with Christian Sohler (UoC), Michael Schaub (RWTH Aachen), and Christopher Morris (RWTH Aachen), we organised the first seminar in a series on “Next Generation Graph Neural Networks” in Cologne.
  • Our paper "Robust Yet Efficient Conformal Prediction Sets" was accepted at ICML 2024.
  • Two papers were accepted at ICLR 2024: one on poisoning (Rethinking Label Poisoning for GNNs: Pitfalls and Attacks) and one on uncertainty (Conformal Inductive Graph Neural Networks).
  • Two papers were accepted at NeurIPS 2023: one on certificates (Hierarchical Randomized Smoothing) and one on GATs (Are GATs Out of Balance?).