Team: The initiators of these pages include (in alphabetical order):

Prof. Dr. Christoph Benzmüller (http://christoph-benzmueller.de, https://www.uni-bamberg.de/en/aise/team/benzmueller/)

Prof. Dr. Karl Hans Bläsius (www.hochschule-trier.de/informatik/blaesius/)

Prof. Dr. Otthein Herzog (https://user.informatik.uni-bremen.de/oherzog/)

Prof. Dr. Katharina Morik (https://www-ai.cs.tu-dortmund.de/PERSONAL/morik.html)

Prof. Dr. Stuart Russell (https://people.eecs.berkeley.edu/~russell/)

Prof. Dr. Jörg Siekmann (https://de.wikipedia.org/wiki/J%C3%B6rg_Siekmann, http://siekmann.dfki.de/de/home/)

Prof. Dr. Ipke Wachsmuth (https://www.techfak.uni-bielefeld.de/~ipke/)

We are professors and scientists in the field of artificial intelligence (AI). We run this website on a voluntary basis, without sponsors or institutions with vested interests. We keep costs low and do not need any funding (no collections, no appeals for donations).

We are not opposed to AI research and development. On the contrary, we see the enormous positive potential this technology holds for life on our planet. But like any new technology, it is not without risks.

On this website, we limit ourselves to a few risks with potentially serious consequences, without any claim to completeness. Among other things, we consider possible interactions with nuclear weapons and the risk of accidental nuclear war. Such risks, including in connection with AI, are the topic of the pages at unintended-nuclear-war.eu.

Our aim is to draw attention to these risks and to possible measures for reducing them. This concern is described at https://ki-folgen.de/en/unser-anliegen/.

The pages of ki-folgen.de also contain or link to various articles (e.g. at https://ki-folgen.de/en/literatur/); their content reflects the opinions of the authors named in each case and does not necessarily correspond to the views of all persons listed on this page ("about us"). For pages without author information, the authorship stated in the imprint applies.