Welcome

Welcome to the Trustworthy Artificial Intelligence Lab (TAIL). Broadly speaking, our research is about models and algorithms that are not only accurate or efficient, but also robust, uncertainty-aware, privacy-preserving, fair, and interpretable. In short, our goal is to make models trustworthy. To increase trust, we study how to provide guarantees, e.g. robustness certificates and conformal prediction sets. One focus area of our research is trustworthy graph-based models such as graph neural networks. We like graphs because graph data is everywhere: neural connections in the brain, social networks, interactions between proteins, molecules, code, the structure of the web, and much more.
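To give a flavor of the kind of guarantee conformal prediction provides, here is a minimal sketch of split conformal prediction for classification. This is a generic illustration with toy data, not code from the lab; the Dirichlet-sampled "model outputs" and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy softmax outputs for a 3-class problem (hypothetical stand-in for a model).
n_cal = 100
cal_probs = rng.dirichlet(np.ones(3), size=n_cal)
cal_labels = rng.integers(0, 3, size=n_cal)

# Nonconformity score: 1 minus the probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile for target coverage 1 - alpha.
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set for a new test point: all labels whose score is below the quantile.
test_probs = rng.dirichlet(np.ones(3))
pred_set = [k for k in range(3) if 1.0 - test_probs[k] <= qhat]
print(pred_set)
```

Under the standard exchangeability assumption, such a set contains the true label with probability at least 1 - alpha, which is exactly the kind of distribution-free guarantee the paragraph above refers to.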

News

  • Two papers were accepted at ICLR 2024: one on poisoning (Rethinking Label Poisoning for GNNs: Pitfalls and Attacks) and one on uncertainty (Conformal Inductive Graph Neural Networks).
  • Two papers were accepted at NeurIPS 2023: one on certificates (Hierarchical Randomized Smoothing) and one on GATs (Are GATs Out of Balance?).