Proseminar Self-Explainability of Software Systems: AI meets theoretical computer science
- Type: Proseminar (PS)
- Chair: KIT-Fakultät für Informatik
- Semester: SS 2023
- Lecturer: Jun.-Prof. Dr. Maike Schwammberger
- SWS (weekly contact hours): 2
- Course No.: 2400143
Contents: Every year, autonomous and sometimes self-learning software systems are used in more and more areas of daily life. In particular, autonomous systems increasingly take over functions that were previously performed by humans, or perform them jointly with humans. With semi-autonomous driving functions, for example, there are situations in which control of the car must be handed back to a human. To improve such cooperation between humans and systems, or to make it possible at all, the comprehensibility of increasingly complex systems becomes ever more important. This comprehensibility is achieved by making systems self-explaining: self-explainability comprises various methods by which a system justifies its decisions and actions. Different recipients of explanations (e.g. end users vs. engineers/experts) require different types of explanations. Explainability is also essential for improving users' trust in autonomous systems.

This proseminar covers a wide range of topics related to explainability:
- Formal methods of explainability ("defining" what an explanation is)
- Explainability for black-box systems (e.g. AI/neural networks)
- Different types of explanations, e.g. given before something happens ("a priori"), while something is happening ("in situ"), or after something has happened ("a posteriori"/"forensic")