Proseminar: Moral Autonomous Agents

  • Type: Proseminar (PS)
  • Chair: KIT-Fakultäten - KIT-Fakultät für Informatik - Fakultätseinrichtungen - Geschäftsführung
  • Semester: WS 23/24
  • Time: Thursdays, 11:30 - 13:00 (weekly)
    Room: 50.28 Seminarraum 1, 50.28 InformatiKOM II (0)
    Dates: 26.10.2023, 02.11.2023, 09.11.2023, 16.11.2023, 23.11.2023, 30.11.2023, 07.12.2023, 14.12.2023, 21.12.2023, 11.01.2024, 18.01.2024, 25.01.2024, 01.02.2024, 08.02.2024, 15.02.2024

  • Lecturer: Jun.-Prof. Dr. Maike Schwammberger
  • SWS: 2 SWS
  • Lv-No.: 2400193
  • Information: In-person (Präsenz)

Content

Every year, more and more autonomous, intelligent software systems are used in many areas of daily life. These autonomous systems are often referred to as "autonomous agents". In currently deployed systems, a human is often still responsible when the autonomous agent encounters a critical situation. With autonomous driving functions, for example, the driver must usually be able to take control of the vehicle at any time.
For a future with fully autonomous agents, these agents must be enabled to make such decisions themselves. This means transferring to autonomous agents the gut feeling that we humans often use to make decisions in spontaneous situations. In essence, an autonomous agent should be enabled, to a certain extent, to weigh moral rules against each other. For example, in order to let an ambulance pass, a vehicle is allowed to swerve onto a grass verge, even though a rule normally forbids this.
In order to develop moral autonomous agents, several aspects must be considered; these will be discussed in this proseminar:

  • Definition of morality: What kind of moral theory should the agent follow? That of the country of deployment or that of the country of manufacture?
  • Definitions of autonomous agents (e.g., Belief-Desire-Intention (BDI) agents)
  • Game theory as a possible approach to cooperative actions by multiple autonomous agents (e.g., Public Goods Games)
  • Limits of moral agents: In what critical situations must an agent make moral decisions and when must a human continue to be consulted?
  • How should autonomous agents be treated in jurisprudence?
  • "Unconscious/cultural bias": an autonomous agent must not be biased against certain groups of people
  • Benefits and opportunities, but also problems and risks in the use of self-learning systems
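To make the game-theoretic topic above concrete, the following is a minimal sketch of one round of a Public Goods Game, the standard model mentioned in the list for studying cooperation among multiple agents. All parameter names and values (endowment, multiplier) are illustrative assumptions, not taken from the course material.

```python
def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Payoffs for one round of a Public Goods Game.

    Each agent keeps (endowment - contribution); the pooled contributions
    are multiplied by `multiplier` and shared equally among all agents.
    Illustrative parameters: with 1 < multiplier < number of agents,
    full cooperation maximizes group welfare, but free-riding pays more
    individually -- the social dilemma the proseminar topic refers to.
    """
    n = len(contributions)
    pot = multiplier * sum(contributions)
    return [endowment - c + pot / n for c in contributions]

# Four agents: three full cooperators and one free-rider.
# The free-rider keeps their endowment and still collects a pot share.
payoffs = public_goods_payoffs([10, 10, 10, 0])
print(payoffs)  # the free-rider (last agent) earns the most
```

This dilemma structure is why purely self-interested autonomous agents need additional mechanisms (norms, sanctions, or moral rules) to sustain cooperation.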

Procedure of the Proseminar:
In the first week of the lecture period there will be a meeting where the selectable topics are presented. Enough topics will be offered to ensure a fair topic selection. The presentations will be given in a block seminar at the end of the lecture period, and a paper of about 5 pages is to be handed in before the block seminar.

Lecture language: German/English