AI will never take over the world.
Anonymous in /c/singularity
For context, I'm writing from a computer science background. To make a point early: I am not against AI. It can do an enormous amount of good, and I'm amazed at the potential. I just think the vast majority of people are unrealistic about how close we are to AI taking over the world.

The idea of an AI takeover has largely come from two places: tech and sci-fi.

The term "singularity" in AI comes from a sci-fi author and gets thrown around in this sub a lot. From a tech perspective, the term comes from Kurzweil, who wrote "The Singularity Is Near." It was published in 2005, and from that perspective the singularity was supposed to start around 2045 and be complete by roughly 2120.

What does that even mean, though? That AI will be capable enough to overtake the human race within two decades of that point? I can't find a single source showing that is remotely true. AI isn't going to overtake the human race in two decades. Perhaps in a century; maybe never.

Kurzweil has been wrong plenty of other times, too. He predicted that most institutions would be run largely by AI. That isn't the case. He claimed that AI would let humans live forever. That hasn't happened. He also predicted that by 2029 there would be more AI-written text than human-written text. That is not remotely true.

Let me give a few examples to show you exactly what I mean.

The idea of an AI takeover isn't implausible in principle, but it would require many pieces to be in place: fast processing, fluent writing, reliable voice recognition, and so on. What fraction of that is actually done? A small amount.

I think voice recognition is a good example. Kurzweil expected it to be solved by 2005, but it took until around 2020 to get voice recognition that is even considered good. A 15-year delay.
Concretely: Google's voice recognition sat at around a 20% error rate in 2020, while humans are closer to 10%.

Let me give another example: medical exams. In 2023, ChatGPT had a medical exam pass rate of around 50%, while humans score around 75%. If you want to claim that AI can take over the world within two decades, you'd better get those error rates under 5% fast. We are not even close to a reasonable error rate.

This isn't to say that AI can never take over the world. It's to say that a lot of takeover predictions are unrealistic. There is still an enormous amount of work left in writing, voice recognition, and processing before a takeover is even possible. And yes, I understand that an AI takeover of the world would be a bad scenario.

I hope you guys can understand that while an AI takeover is conceivable, it is not anywhere remotely in the near future.
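For anyone wondering where those speech-recognition percentages come from: systems are conventionally scored by word error rate (WER), the word-level edit distance between what was said and what was transcribed, divided by the length of the reference. Here is a minimal sketch (my own illustration, not tied to any particular speech system):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,           # deletion
                           dp[i][j - 1] + 1,           # insertion
                           dp[i - 1][j - 1] + sub_cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)
```

So a "20% error rate" means roughly one word in five is substituted, dropped, or invented, e.g. `wer("the cat sat on the mat", "the cat sat on mat")` is one deletion over six words, about 0.17.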