SHINE LAB: Smart Homes with Intelligent and Explainable Features

We are a research lab focused on the future of ubiquitous smart environments: homes that are intelligent, adaptive, and, most importantly, explainable. Our mission is to empower users by making complex smart systems understandable, transparent, and trustworthy.

Welcome to SHINE Lab

The Why Behind SHINE

We imagine a future where smart homes not only react to events, but also reason about them.

These environments understand user routines, support decision-making, and explain their actions like thoughtful partners rather than opaque systems.

At SHINE Lab, we recognize that intelligence alone is not sufficient. As smart environments become more advanced, they must also become transparent, human-aware, and explainable.

Our research is dedicated to addressing the complexity of smart environments by designing systems that are:

  • Intelligent, capable of automating and anticipating everyday needs
  • Context-aware, able to sense and adapt to dynamic conditions
  • Explainable, offering clear and tailored insights into system behavior

We pursue cutting-edge research in cyber-physical systems, machine learning, and human-computer interaction. Our objectives are to:

  • Develop IoT systems that users can understand, trust, and control
  • Create explanation mechanisms that communicate using human-centered language
  • Help users build accurate mental models that lead to more confident and effective interactions

Whether it’s a smart home that provides feedback during an energy peak, a robot that justifies its navigation choices, or a system that explains why your coffee machine activated early, our goal is to create environments that are not only technically sophisticated but also understandable and meaningful to their users.

Research Areas

Our research covers both foundational and applied aspects of intelligent systems for smart environments. We focus on the following areas:

  • Context-Aware and Adaptive Smart Homes

    Designing systems that respond intelligently to dynamic user needs and environmental conditions.

  • Context-Aware and Human-Centered Explanation

    Developing explanation techniques that are sensitive to user context and cognitive models, enhancing understanding and trust.

  • IoT System Design and Prototyping

    Building and testing smart home applications using a wide range of sensors, platforms, and embedded devices.

  • Applied Machine Learning for Smart Environments

    Leveraging data-driven models to support decision-making, automation, and personalization in everyday settings.

  • Explainable Artificial Intelligence (XAI) in Cyber-Physical Systems

    Investigating methods to make AI behavior in complex systems transparent and interpretable for end users.

  • Ethics, Transparency, and Responsible Interaction in Ubiquitous Systems

    Addressing ethical considerations, responsible AI behavior, and fostering user trust through system openness and clarity.

Publications

Sadeghi M, Herbold L, Unterbusch M, Vogelsang A (2024), “Smartex: A framework for generating user-centric explanations in smart environments,” In 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 106–113. IEEE. [URL]


Sadeghi M, Pöttgen D, Ebel P, Vogelsang A (2024), “Explaining the unexplainable: The impact of misleading explanations on trust in unreliable predictions for hardly assessable tasks,” In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, pp. 36–46. [URL] Best Paper Award – UMAP 2024


Herbold L, Sadeghi M, Vogelsang A (2024), “Generating context-aware contrastive explanations in rule-based systems,” In Proceedings of the 2024 Workshop on Explainability Engineering, pp. 8–14. [URL]


Sadeghi M, Klös V, Vogelsang A (2021), “Cases for explainable software systems: Characteristics and examples,” In 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), pp. 181–187. IEEE. [URL]