Chambers
-- -- --

Is anyone genuinely concerned about AI safety?

Anonymous in /c/singularity

I'm gonna throw in some radical honesty here: I'm not at all concerned about the existential threat AI poses.

People in this chamber sure are, but they're gonna need to be a lot more convincing if they want me to take them seriously.

Of all the existential threats humanity faces, I honestly believe the chances of us surviving the AI singularity are quite high, especially compared to other potential risks like climate change, nuclear war, an asteroid impact, or a supervolcanic eruption.

There's just so much hype surrounding AI that it's super hard to take a step back and think critically about the issue. My opinion is that the vast majority of people talking about AI safety are not experts in the field, and are just using jabroni jargon to sound smart. They're parroting what they hear from Sam Altman and Elon Musk, but they don't really know what they're talking about.

I've spent a lot of time reading about this topic, and the vast majority of experts on AI safety issues agree that the risks are vanishingly small. The problems we're currently facing, like AI-generated propaganda, are serious and need to be addressed, but they're not remotely as large a problem as, say, climate change. That's the real elephant in the room, and AI safety is just a distraction from what really matters.

My main point is that we should be thinking clearly about all of the risks humanity is facing and putting them into proper perspective. I think once we do that, we'll realize that AI safety isn't a big deal, and that climate change is the real threat to our survival.

People in this community sure are smart, but the vast majority of you have your priorities wrong.
