Explainable AI and the challenge of complexity

Do you want not only to use artificial intelligence (AI) in your company, but also to understand its results? Because today's AI models are highly complex and therefore act as black boxes, it is essential to be able to scrutinize and understand AI decisions. In the future, the ability to explain AI decisions comprehensibly will be not only a competitive advantage but also a legal obligation. In this workshop, you will learn about the possibilities and limitations of explainable artificial intelligence (XAI).

Objective

After the workshop, you will be able to assess your company's future legal obligations under the EU's AI Regulation. You will understand what additional requirements transparent AI systems must meet against this background. In addition, you will know the added value that such AI systems and XAI methods offer both software developers and users. Finally, you will understand the challenges involved in designing a self-explanatory AI system and learn how to overcome them through a human-centered approach.

Content of the workshop

This workshop provides an introduction to XAI. After presenting the basic problem that XAI is intended to solve, we demonstrate the benefits of XAI methods. You will learn to what extent self-explanatory and transparent AI systems are mandatory for companies under legal requirements such as the General Data Protection Regulation (GDPR) or the European Union's planned AI regulation (EU AI Act). The possibilities and limitations of today's XAI methods are then demonstrated using illustrative examples, and you will learn about the challenges of designing an AI system that is comprehensible to humans. You will not only acquire this knowledge, but also apply it directly to specific AI case studies. The result is an XAI canvas that you can use to take the first steps in the human-centered design of an XAI system.

Target group

The workshop is aimed at people in companies who need to assess the benefits of an AI system and its need for transparency in order to decide whether to use XAI methods.
No prior technical knowledge of AI is required.

Format

In person (online possible on request)

Duration

120 minutes

Premises

Either at your premises, at Karlsruhe University of Applied Sciences, or at the Karlsruhe Institute of Technology.

Participants

4 to 16

Preparation

A preliminary questionnaire to identify relevant AI sample applications

Speakers

Maximilian Becker | Fraunhofer IOSB
Jutta Hild | Fraunhofer IOSB
Robin Weitemeyer | HKA (ILIN)

Price

Free of charge

Registration

Interested? Then contact us via kontakt@kompetenzzentrum-karl.de
