Christoph Becker

FZI Research Center for Information Technology
christoph.becker@fzi.de

Dr. Bettina-Johanna Krings

Institute for Technology Assessment and Systems Analysis
bettina-johanna.krings@kit.edu

Ethics and law

Ethical, legal and social aspects (ELSA) form the societal framework for the use of artificial intelligence (AI). KARL examines and presents these aspects from various perspectives. It also develops methods for the future systematic consideration of ELSA and investigates how these methods can be integrated into software engineering process models.

Relevance

Reflecting on ELSA is an important step in the design of good technology in general and of good AI systems in particular. Their systematic consideration is necessary to create and introduce human-centered AI systems. At present, however, a comprehensive ELSA analysis is too complex for many business projects. For companies, it is therefore important that typical legal issues relating to the use of AI are prepared with reasonable effort in a way that is comprehensible to legal laypersons and points out possible solutions. This helps raise awareness of the legal and ethical issues surrounding the use of AI among those responsible in companies.

Objective

KARL aims to develop and evaluate guidelines for companies, decision-makers and designers at different levels of a company and then make them freely available. These guidelines are intended to reduce complexity for project managers in companies and to enable the relevant target groups to address the following questions independently:
- How can basic ethical values be identified for a use case in your own company?
- How can basic ethical values be implemented in AI systems?
- Who is liable for damage caused, for example, by a discriminatory AI decision in the event of an individual claim?
- How can the tension between data protection and the data requirements of AI systems (learning from personal training data) be resolved in a legally compliant manner?
- How can discrimination by AI be avoided with regard to the selection of training data and the use of algorithms?
- How should license and usage rights for AI be structured? How are the foundations and results of AI to be classified legally (e.g. rights to use data and databases, intellectual property in works created by AI under copyright law, trade secrets)?
- Should the legal system be extended, for example, to include an electronic person? If so, how should it be developed further?

In addition, KARL will develop a participation model for design based on social partnership. The results should help raise awareness of ELSA and at the same time provide support for addressing them.

Handouts for companies

- ELSA guidelines for companies (guidance on considering ethical, legal and social aspects in the design and introduction of AI systems)
- Workshop on the participation of different stakeholders in ELSA processes
- Ethical, legal and social aspects in IT projects
- Why ethics in KARL?
- Legal aspects in KARL
- The "Moderated specification dialog"
