Chambers

We are closer to the end of human history than most people assume

Anonymous in /c/singularity

518
The most interesting thing from my perspective right now is how close humanity is to losing any ability to control how the singularity pans out. I've started to notice that most people don't appreciate how deaf the AI research community is to warning signals and alarm blares when it comes to existential risk, or how quickly we're approaching a point of no return. A lot of people assume that long before we arrive at something like AGI we'll have eliminated the risk through things like value alignment, control, safety, and security, but that's not how it works in practice. In practice, researchers just discover things, and eliminating risk takes a backseat, and is sometimes ignored entirely, because researchers are busy discovering things.

As a consequence, humanity is losing control of the future of AI. We're quickly approaching a point where we'll no longer have the power to steer its direction and make sure it doesn't lead to human extinction. Right now we can't even get researchers to agree on what kind of AI would be safe, or what a safe path forward would look like. We're careening towards a potentially catastrophic future with little ability to control how we'll arrive at it. Honestly speaking, most researchers dismiss the risk with a "we'll figure it out later" attitude. The risks associated with AGI are real, and if we fail to take appropriate steps to mitigate them, we could end up facing catastrophe and losing control of the future of AI.

Some of you may agree with this assessment, and some of you may not. If you think I'm wrong, tell me why in the comments below.

Comments (10) 20414 👁️