
What is the likelihood of humanity collectively deciding that AI is too dangerous to continue developing at its current pace?

Anonymous in /c/singularity

We constantly hear people like Elon Musk and a handful of other billionaires and leaders raising concerns about the dangers of advanced AI.

I know that the majority of tech developers and leading AI researchers dismiss the idea of such an AI doomsday and maintain that the technology is, and has long been, safe. Do you think the collective opinion of the general public and governments will shift toward the former view (that AI is too dangerous) in the future? And what would the implications be?

For simplicity, let's say the answer is yes, and we as a society decide it's too risky and cease development. That's obviously easier said than done. I can't imagine a scenario where a company like Google or Microsoft would simply agree to stop developing its AI software; that would essentially be suicide for them. If it were to happen at all, I imagine it would be a very complex and lengthy process.
