Proseminar: Moral Autonomous Agents

  • Type: Proseminar (PS)
  • Chair: KIT-Fakultäten - KIT-Fakultät für Informatik - Fakultätseinrichtungen - Geschäftsführung
  • Semester: WS 23/24
  • Time: Thursdays, 11:30 - 13:00, weekly
    26.10.2023 - 21.12.2023 and 11.01.2024 - 15.02.2024
    50.28 Seminarraum 1
    50.28 InformatiKOM II (0)


  • Lecturer: Jun.-Prof. Dr. Maike Schwammberger
  • SWS: 2 SWS
  • Lv-No.: 2400193
  • Information:

    In person

Contents

Every year, more and more autonomous, intelligent software systems are being used in many areas of daily life. These autonomous systems are often referred to as "autonomous agents". In systems currently in use, a human is often still responsible if the autonomous agent gets into a critical situation. With autonomous driving functions, for example, the driver must generally be able to take control of the vehicle at all times.
For a future with fully autonomous agents, these agents must be enabled to make such decisions themselves. This means transferring to autonomous agents the intuition with which we humans often decide in spontaneous situations: to a certain extent, an agent should be able to weigh moral rules against one another. For example, swerving onto a grass verge to let an ambulance pass is permitted, even though a traffic rule normally prohibits it.
In order to develop moral autonomous agents, various aspects must be considered, which will be discussed in this proseminar:

  • Definition of morality: Which moral theory should the agent follow? That of the country of deployment or that of the country of production?
  • Definitions of autonomous agents (e.g. Belief-Desire-Intention agents)
  • Game theory as a possible approach to cooperative behaviour among multiple autonomous agents (e.g. public goods games)
  • Limits of moral agents: In which critical situations must an agent decide morally, and when must a human still be consulted?
  • How should autonomous agents be treated under the law?
  • "Unconscious/cultural bias": An autonomous agent must not be biased against particular groups of people
  • Benefits and opportunities, but also problems and risks, of using self-learning systems
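
To illustrate the public goods game mentioned above: each of n players contributes part of an endowment to a common pot, the pot is multiplied and shared equally. The sketch below is a standard linear variant; the concrete parameter values are illustrative assumptions, not course material:

```python
# Hypothetical sketch: payoffs in a linear public goods game.
# Each player contributes c_i from an endowment; the pot is multiplied
# by a factor m with 1 < m < n and shared equally among all n players.
def payoffs(contributions, endowment=10.0, multiplier=1.6):
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    return [endowment - c + pot_share for c in contributions]

# Full cooperation beats full defection for everyone...
print(payoffs([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0]
# ...but a lone free-rider does even better individually.
print(payoffs([0, 10, 10, 10]))   # [22.0, 12.0, 12.0, 12.0]
```

The tension visible in the numbers (cooperating is collectively best, free-riding is individually best) is what makes such games a testbed for cooperative behaviour among autonomous agents.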

Course of the proseminar:
In the first week of lectures, there will be a meeting at which the available topics are presented. A sufficient number of topics will be offered to ensure a fair topic selection. The presentations are held in a block seminar at the end of the lecture period, and a 5-page paper must be handed in before the block seminar.

Lecture language: German/English