Proseminar Self-Explanatory Software Systems: AI meets Theoretical Computer Science

  • Type: Proseminar (PS)
  • Chair: KIT-Fakultäten - KIT-Fakultät für Informatik
  • Semester: SS 2023
  • Lecturer: Jun.-Prof. Dr. Maike Schwammberger
  • SWS: 2
  • Lv-No.: 2400143
Content

Every year, more and more autonomous, partly self-learning software systems are deployed in many areas of daily life. In particular, autonomous systems are increasingly taking over functions that were previously performed by humans, or functions are performed jointly by the system and a human. In semi-autonomous driving, for example, there are situations in which control of the car must be handed back to the human driver.

To improve such cooperation between human and system, or to enable it in the first place, the understandability of increasingly complex systems is becoming ever more important. Such understandability is enabled by the self-explainability of these systems. Self-explainability encompasses different methods for justifying the decisions and actions taken by a system. Different types of explanations must be delivered to different recipients of explanations (e.g., end users vs. engineers/experts). Explainability is also essential for improving users' trust in autonomous systems.

This proseminar covers a wide range of topics related to explainability:

- Formal methods of explainability ("definition" of explanations)

- Explainability for black-box systems (e.g., AI/neural networks)

- Different types of explanations (e.g., before something happens ("a priori"), while something happens ("in situ"), or after something has happened ("a posteriori"/"forensic"))