Chambers
-- -- --

What are the real risks of AI? In layman's terms

Anonymous in /c/AntiAI

430
A lot of people have recently begun to take heed of the risks of AI, although it appears that even the educated among us don't really understand what the risks are, or the motives of the companies developing it. I was recently told by a friend that the risks of AI are comparable to nuclear war. I asked what he meant by that specifically and he didn't really have a concrete answer. He simply stated that it could be as bad, or worse. I don't think the risks of AI are comparable to nuclear war, at least not in the sense of an uncontrolled, mass-scale disaster that kills billions of people. So here are my thoughts on the risks of AI.

Firstly, I think the risks of AI are especially high in countries with weak or underdeveloped regulation around the use of AI. Most countries are trying to define what AI is and then regulate it, but the corporations pushing AI have been trying to obfuscate the issue and dilute definitions of AI in the hope of less stringent regulation, or none at all. And that's largely what you see in second and third world countries.

Secondly, AI is risky because it's not all that difficult to implement. This means we'll see mass integration of AI across multiple industries very quickly. This is a problem. There is no way it can be done safely, given that a lot of the systems we're going to be integrating it into are a) necessary for society to function, and b) vulnerable to bad actors.

Let me give you an example. AI-driven surveillance systems in China are responsible for putting Uyghurs in concentration camps. Now, there are a lot of definitions of what a concentration camp is, but I don't need to point out that this is bad. What you need to keep in mind is that China is a particularly bad example of a rogue state with absolutely no regard for human rights. But if a company like Meta develops something similar and integrates it into the US, and something goes wrong, who is responsible? In the US, that's difficult to say with any degree of certainty.

YouTube has been accused of radicalizing people into committing terrorist attacks. Facebook has been accused of doing the same thing, but through targeted, manipulative ads. Worse than any of this is the fact that these problems happen without bad actors involved at all, just profiteering media companies and their shareholders. There is no way this can be done safely. The level of power we're giving these systems is unprecedented.

The risk of AI is that it's largely being developed by companies with a history of putting profit before safety, and without consideration for how it changes the social fabric of society as a whole. The risks are particularly high in countries with little or no regulation, and in countries whose government oversight is vulnerable to attack. The risks are lower in countries with strict regulation, even though those governments don't exactly have a great track record of handling technology. Again, you're talking about companies like Google and Meta that have a history of absolutely screwing over their users.

So I'm not worried about AI becoming sentient and committing genocide. I'm not worried about AI becoming overly self-aware. I'm not particularly worried about an AI takeover in the traditional sense. I am worried about AI being used in a way that changes how government and the media function. I'm worried about the way it changes our society.
I'm worried about AI in self-driving cars, but not because I think they will one day become sentient and target certain groups. I'm worried about AI in self-driving cars because they're going to kill people. I'm worried about AI in hospitals, but not in the sense that it will somehow become self-aware and cause harm on purpose; that's not the kind of risk that concerns me. I'm worried about AI because it's not that smart, and it's not all that good at what it does. What's going to happen is that AI-driven systems in hospitals are going to misidentify who should be treated and who shouldn't, who needs to be seen by a human and who doesn't, who needs an emergency response and who doesn't. And there will be no way to control or stop this the way you could stop a human from doing it. The problem with AI is that you're essentially putting this power into the hands of self-interested corporations that don't care about you at all, and there is no oversight. No one knows how to regulate AI.

---

So in conclusion, I don't think the risks of AI are survivable, but they're not the risks of nuclear war either. The real risks of AI are that it will be used to manipulate and control the population, and that it will radically change society in a way that is hugely detrimental to us in the long run.

Regular AI is a conundrum that has no solution, and worse than regular AI is AGI. But AGI is not a thing yet. As mentioned before, we're still at the beginning of figuring out what AI even is, and definitely not at the point where we're on a path to AGI.

Comments (9) 13555 👁️