Explainable autonomous agents: Transparent and trustworthy?


Autonomous systems, often referred to as "autonomous agents", are being used in more and more areas of daily life; examples include driver assistance systems and smart home systems. These agents have to make human-understandable decisions, act together with humans, or operate in areas where humans are present. Explanations are necessary to enable this cooperation, and to ensure that an explanation is accepted and effective, it must be adapted to different target groups, points in time, and explanation goals.

Developing explainable autonomous agents involves various aspects, which will be discussed in this proseminar:

  • Terminology: What constitutes an explanation? When is an explanation "good enough" for the user? When is transparency sufficient, and when is an explanation necessary? Are explanations only a form of justification?

  • Dimensions of explanations: How do users perceive explanations and how do they evaluate them? Who controls explanations and who benefits from them? Are there also negative effects of explanations?

Course of the proseminar:

In the first week of lectures, there will be a meeting at which the selectable topics will be presented. A sufficient number of topics will be provided to ensure a fair choice of topics. The presentations will be held in a block seminar at the end of the lecture period and a 5-page paper will be handed in before the block seminar.

Lecture language: German/English