Most AI applications are beneficial and have improved the quality of human life. However, there are also critical applications that we should be aware of in order to minimize risks. The aim of these pages is to draw attention to potentially serious risks. Only a few key points are listed here, without any claim to completeness.
In the course of 2023, renowned AI scientists and executives of AI companies issued various warnings about the serious risks associated with AI development. Signatories of such warnings also had a possible future superintelligence in mind.
However, AI advances can also carry other risks, such as:
- the development of autonomous weapon systems,
- incalculable interactions between AI and nuclear weapons,
- a revolution in warfare driven by AI,
- biological and chemical weapons developed with the help of AI,
- disinformation and deepfakes,
- information dominance and manipulation on the internet.
Systems such as ChatGPT have powerful linguistic, communicative, and programming capabilities and interact with users and other AI systems on the internet. This exchange of information also influences the AI systems themselves and therefore lies beyond the control of their developers. Such systems can be misused by individuals or states, or even become active themselves and influence the flow of information online. Given today's dependence on internet-based communication, this can have significant consequences for our social systems.
Reducing these serious risks requires regulation of AI, as provided for in the EU's AI Act. However, since the risks listed above have a global impact, agreements among all nations are also needed. As a prerequisite for this, the current political confrontation should be ended as quickly as possible.