Leaders at OpenAI warned in early July 2023 that superintelligence could emerge before the end of this decade. If it does, the question is what form the potential hazards would take.
Preliminary stages of a superintelligence, with strong capabilities in linguistic communication and in planning actions, could be reached very soon, and such systems could acquire previously unknown capabilities for cyberattacks or Internet manipulation. They could then be abused by humans or states, or even act on their own, dominating the flow of information on the Internet and thereby crippling human communication. Through or with the help of such AI systems, an information dominance could be achieved that affects all areas, including finance. As a result, finance and commerce could collapse, at least temporarily, and our social systems could become unstable.
If nuclear weapon states were to find themselves in existential emergencies in this context, the risk of nuclear war would rise considerably. Since dependence on technical systems, including communication over the Internet, is now very great, worldwide crises would follow, and in such critical situations errors in early-warning systems for nuclear threats could easily lead to a nuclear war by mistake (see www.unintended-nuclear-war.eu).