Motivation, goals and initiators of this page: about us  

Chances and risks

Most AI applications are positive and have led to an improvement in the quality of human life. However, there are also critical applications that one should be aware of in order to minimize risks. The opportunities of AI are manifold and are discussed in detail in many sources. The purpose of these pages is to point out possible serious risks. Only selected focal points are addressed here, with no claim to completeness.

Warning against AI

In late March 2023, the Future of Life Institute (FLI) published an open letter warning of AI risks and calling for a six-month pause on certain AI developments. On May 30, 2023, a "one-sentence statement" was published warning of the risk of human extinction from AI. Notes on these warnings.

Serious risks from AI

The statement's single sentence does not specify what kind of risks are involved; further publications indicate that the emergence of a superintelligence is considered particularly dangerous. This is understood to mean a system that clearly exceeds human intelligence in almost all areas.

However, AI advances may also entail other risks, such as:

  • the development of autonomous weapon systems,
  • incalculable interactions between AI and nuclear weapons,
  • a revolution in warfare through AI,
  • biological and chemical weapons developed with the help of AI,
  • information dominance and manipulation on the Internet.

These risks are covered in the article “Ist die Künstliche Intelligenz gefährlich?” (“Is Artificial Intelligence Dangerous?”; currently only available in German).

Not all of these risks pose a threat to all of humanity, as expressed in the one-sentence statement. However, such a threat could exist if a superintelligence were successfully developed or if nuclear weapons were used, possibly accidentally, in connection with AI decisions. A pandemic based on bioweapons could also reach extreme proportions. The risks of accidental nuclear war, including in connection with AI, are addressed here.

In the warning of May 30, 2023, the signatories also had a possible superintelligence in mind. Such warnings are highly speculative, but the risks cannot be ruled out either; reliable predictions are hardly possible in this context.

Almost all AI applications have no potential to become independent and dangerous, and are usually useful. The risk of a possible superintelligence could arise in systems with strong capabilities in linguistic communication and in programming, which might enable them to improve themselves continuously.

Key books on superintelligence by AI scientists.

Information dominance and manipulation

Even if systems like ChatGPT were to reach a level comparable to human linguistic capabilities in the coming years, it remains open what power potential such systems would have. It is not clear whether and how artificial systems could gain control over production systems and military equipment, but it is possible.
Preliminary stages of a superintelligence, with strong capabilities in linguistic communication as well as in programming, may be achieved very soon. With the help of, or through, such systems, information dominance on the Internet could be achieved. This would destabilize our societal systems and perhaps even put nuclear weapon states in existential trouble, significantly increasing the risk of nuclear war. more …

Is AI more dangerous than nuclear weapons?

In the past, prominent figures have claimed that AI is more dangerous than nuclear weapons. This comparison referred in particular to the risk of a possible superintelligence. more …

Predictions and temporal aspects

In the past, predictions about development goals in AI often have not come true, or have come true significantly later than expected; in some cases, goals were also achieved faster than expected. It is likewise impossible to estimate whether and when a superintelligence could emerge, and the consequences would be completely incalculable.

If the risks described here materialize, they are likely to do so rather suddenly. Serious consequences could then follow within a few weeks or months, with no possibility of stopping them. There will probably be no prior indications or evidence of the dangerous nature of certain AI applications. The option of waiting for such events to occur and only then acting to reduce the risks may therefore no longer exist.

Possible measures to reduce the risks

When considering the possible risks, the question arises of how they can be reduced, which measures are useful or necessary for this purpose, and which actions, conversely, would increase the risks. more …

Interaction of various risks

If the current course of political confrontation continues, AI and the associated weapon systems may, within the next few years, reach a level that humans can hardly control. In particular, the various risks (autonomous weapons, nuclear weapons, uncontrollable AI systems) may interact and thereby intensify.
