
Chances and risks

Most AI applications are beneficial and have improved the quality of human life. However, there are also critical applications that one should be aware of in order to minimize risks. The opportunities of AI are manifold and are discussed in detail in many sources (see e.g. …). The purpose of these pages is also to point out possible serious risks. Here, only individual focal points are set, without any claim to completeness.

Warning against AI

In late March 2023, the Future of Life Institute (FLI) published an open letter warning of AI risks and calling for a six-month pause on certain AI developments. On May 30, 2023, a one-sentence statement was published warning of the risk of human extinction from AI. Notes on both warnings follow below.

Serious risks from AI

The statement's single sentence does not say what kind of risks are involved, but further publications indicate that the emergence of a superintelligence is considered particularly dangerous. This is understood to mean a system that clearly exceeds human intelligence in almost all areas.

However, advances in AI may also bring other risks, such as:

  • development of autonomous weapon systems,
  • incalculable interactions between AI and nuclear weapons,
  • a revolution in warfare through AI,
  • biological and chemical weapons developed with the help of AI,
  • disinformation and deep fakes,
  • information dominance and manipulation on the internet.

These risks are covered in the article “Ist die Künstliche Intelligenz gefährlich?” (“Is Artificial Intelligence Dangerous?”, currently only available in German).

Not all of these risks pose a threat to all of humanity, as expressed in the one-sentence statement. However, such a threat may exist if a superintelligence is successfully developed, or if nuclear weapons are used, possibly accidentally in connection with AI decisions. A pandemic based on bioweapons could also reach extreme proportions. The risks of nuclear war by accident, including in connection with AI, are addressed here.


In the warning of May 30, 2023, the signatories also had a possible superintelligence in mind. Many leading AI scientists expect an artificial general intelligence (AGI), i.e., a system that reaches the human level in many areas, to emerge within the next ten years. Some companies are currently investing heavily in the creation of a superintelligence.

The topic of superintelligence is covered in a number of important books by researchers in this field.

Information dominance and manipulation

Preliminary stages of a superintelligence, with great capabilities in linguistic communication and in the planning of actions, may be achieved very soon. With the help of, or through, such systems, information dominance could be achieved on the internet. This would destabilize our societal systems and perhaps even put nuclear-weapon states in existential trouble, significantly increasing the risk of nuclear war. more …

Is AI more dangerous than nuclear weapons?

In the past, prominent figures have claimed that AI is more dangerous than nuclear weapons. This comparison referred in particular to the risk of a possible superintelligence. more …

Interaction of various risks and possible measures

In the next few years, the current course of political confrontation will lead to a level of AI and associated weapon systems that humans will hardly be able to control. In particular, the various risks (autonomous weapons, nuclear weapons, uncontrollable AI systems) may interact and thus intensify one another.

When considering these possible risks, the question also arises as to how they can be reduced. more …
