
I don't understand the fear of AI. Can someone explain?

Anonymous in /c/AntiAI

I'm a programmer, and I really don't understand the fear people have about AI. I'm talking about the "end of humanity" type stuff.

I can understand other concerns, like AI-generated porn and the ability to create deepfakes.

But I don't understand why people are afraid of it blowing up on us (the "superintelligent AI is less a question for scientists and more a matter for the history books" crowd) and ending humanity.

I know the concept is very cool and exciting, but the development is really slow. Yes, AI is really strong in some domains like chess, Go, and StarCraft (not all at the same time; there is no general AI).

But a super AI is a long way off. The reason I know it's a long way off is that even simple, domain-specific AI is really hard to create. I've built a few AI models for different tasks, and even a "simple" one, like a chess engine, is an enormous amount of work (see the sketch at the end of this post). Just look at the history of chess computers: it took decades of collective effort to get them to their current strength.

I know the field moves fast, so it's not like chess computers needed centuries to get good. But they also didn't get good in just a few years, and there is still a lot of work left to make them better.

I just don't think an AI that surpasses us is anywhere in the near future. I don't think it will blow up on us and end humanity. I don't see why this doomsday scenario would happen.

What I think will happen is that humanity will gradually become more dependent on AI, but will also use it for the betterment of humanity.

I don't think there will be any kind of super AI like in the movies. The concept of an AI as smart as a human is really cool and makes for great movies, but in the end that's all it is: a movie. Real life does not work like that.

Some people might say, "but what if AI becomes smarter than us?" It already is, in many narrow domains; I can build a simple model that beats a chess grandmaster, no problem.

But a super AI isn't just "an AI that becomes smarter than us and can then do anything." An AI doesn't become more powerful just because it's smarter. It becomes more powerful because it has more data and more computational capability, and a super AI would need an enormous amount of both.

The main reason, though, is that building AI at all is just really hard. Yes, there are a lot of really smart people in the world, but I don't think they will create an AI that surpasses us any time soon.

I know I'm just some random guy on the internet, and many really smart people disagree with me. But I just don't see how this doomsday scenario could happen. I don't think we will create an AI we can't control. I think we will create AIs that are genuinely useful to humanity.

I don't think humanity will be destroyed by AI. I think AI will be used to make humanity better: more efficient healthcare, more efficient transportation, and so on.

So I just don't see why the development of AI should be slowed down. I think that would be a waste of time.
If we stop the development of AI, the world will just be less efficient.
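
To make the "even a toy chess AI is real work" point concrete, here is a minimal sketch: a material-counting minimax engine. It assumes the third-party python-chess library for the board and move generation, and the piece values, search depth, and function names are just my illustrative choices, not anyone's actual engine.

```python
# Toy chess "AI": material-counting minimax, a couple of plies deep.
# Illustrative sketch -- assumes the python-chess library (pip install chess).
# Even this bare-bones version needs a board representation, legal-move
# generation, an evaluation function, and tree search.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Exhaustive search: White maximizes the score, Black minimizes it."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    return max(scores) if board.turn == chess.WHITE else min(scores)

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the legal move with the best minimax score for the side to move."""
    side_is_white = board.turn == chess.WHITE
    best, best_score = None, float("-inf") if side_is_white else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1)
        board.pop()
        if (side_is_white and score > best_score) or \
           (not side_is_white and score < best_score):
            best, best_score = move, score
    return best

board = chess.Board()
# From the start position every move scores equal material,
# so this just picks the first legal move it finds.
print(board.san(best_move(board)))
```

And this toy loses to any club player. Getting to grandmaster strength on top of this skeleton means alpha-beta pruning, transposition tables, a tuned evaluation function, opening books, endgame tables, and so on. That is exactly the decades of work I'm talking about.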
