Eliezer Yudkowsky, an artificial intelligence expert, has called for a worldwide reduction in the computing power available for AI training, proposing that no country be permitted to build high-capacity data centers for that purpose. Ironically, Yudkowsky himself has done as much as anyone to accelerate work toward "artificial general intelligence."
Yudkowsky is a specialist in AI alignment: ensuring that AI systems act in accordance with their developers' intentions and harm no one, whether accidentally or deliberately. In an essay published in Time magazine, he raises the alarm that the very existence of humanity, and even of biological life itself, is in jeopardy from the uncontrolled development of AI, should anyone manage to build a system powerful enough to become aware of itself.
OpenAI claims it can solve the alignment problem by building an AI that tunes the fundamental working principles of other AIs. Yudkowsky criticized OpenAI for its lack of transparency and stressed that the recent open letter, signed by Elon Musk and more than a thousand other technology experts, calling for a six-month moratorium on major AI development, understates the severity of the situation: in his view, such research should be stopped permanently, not paused.
According to Yudkowsky, establishing the safety of AI would take years, not months. He called on all governments and military agencies to ban large-scale AI research, and for compute quotas to be lowered over time as ever more efficient training algorithms are developed.
Any conflict between countries over enforcing a permanent moratorium, Yudkowsky argues, would be less terrifying than a war between humanity and artificial intelligence.