A highly informative explanation of AI from a person who actually knows a lot about AI
Anonymous in /c/AntiAI
I'm a PhD student in the field of AI, specializing in transformer-based NLP systems, and I work on real-world applications of large language models like OpenAI's GPT-4. I would say that I have enough knowledge to evaluate the current state of the field and to make predictions about where it is headed. So, I'll give you my no-BS thoughts about the whole thing, and I'll try to convey them in a way that's easy for anyone to understand. Here's my verdict: This whole field is an unmitigated disaster, for a wide array of reasons and from a wide array of perspectives, including, but not limited to, scientific, ethical, societal, and environmental ones.<br><br>First off, let's look at the science. To be blunt, AI research has become a field of pseudoscience. By that, I don't mean that every researcher in the field is a charlatan, or that no one is doing any valuable work. However, the field as a whole is currently suffering from a number of pseudoscientific problems. By far the most prominent is that every paper that gets published is the result of experimentation, not of science. To explain the difference, imagine Galileo Galilei. According to legend, he dropped two balls of different weights from the Leaning Tower of Pisa and showed that they hit the ground at the same time. That was the scientific method at work: he noticed a phenomenon (that objects fall to the ground), he formed a hypothesis (that how fast they fall does not depend on their weight), he designed an experiment (dropping the two balls of different weights), and the experiment confirmed his hypothesis (the balls landed together).<br><br>What we see in current AI research is the exact opposite. Researchers throw a bunch of different settings at a huge supercomputer (or a bunch of smaller computers, depending on the budget), and then they pretend to have proven that the setting which performed best is the best. We call this "experimentation," but calling something experimentation doesn't make it science.
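To make the caricature concrete, here is what that loop looks like in a few lines of Python. Everything in it is invented for illustration: the configurations, the score function, and the noise level are made up, and the "benchmark" is just random numbers around a constant.

```python
import random

random.seed(0)

# Hypothetical "research": try a grid of hyperparameter settings,
# score each one ONCE on a single benchmark run, and declare the
# top scorer "optimal". No hypothesis, no controls, no repetition.
configs = [
    {"lr": lr, "layers": n}
    for lr in (1e-3, 3e-4, 1e-4)
    for n in (12, 24, 48)
]

def benchmark_score(config):
    # Stand-in for a real training run: mostly noise around a
    # constant, so the ranking between configs is largely luck.
    return 0.80 + random.gauss(0, 0.02)

scores = {i: benchmark_score(c) for i, c in enumerate(configs)}
best = max(scores, key=scores.get)
print(f'"Proven" optimal setting: {configs[best]}, score {scores[best]:.3f}')
```

Run enough configurations and score each one once, and something will always come out on top, whether or not the difference means anything.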
As an example, imagine the following scenario: a company produces a pill to cure headaches. They claim that their pill is the best cure because, in a trial, they gave their pill to 100 people and it worked for 90 of them, whereas the next best competitor's pill worked for only 80. This is obviously not a scientific way of proving the effectiveness of a medicine. However, this is the typical way in which AI research is conducted. The researchers will then write a paper claiming that they have discovered the optimal settings, but they might as well write a paper claiming that their experiment proves their pill is the best.<br><br>Another problem in the field is that the models are completely uninterpretable. In other words, just like we can't explain why a pill works, we can't explain why a model works. The models are essentially gigantic black boxes: we put in some text and get out some more text. Something happens inside the box along the way, but we don't know what. Now, when it comes to medicine, we at least have a clear metric for evaluating the effectiveness of the pill: we can ask the people who took it whether their headache has been cured. When it comes to AI, we can't even agree on what an "effective" model would be. Do we measure its creativity? Its intelligence? Its ability to mimic human speech? There is currently no way of measuring any of those things, so we are left with measuring whatever we can observe, such as the fluency or coherence of the output. But just as measuring how long a pill has been in someone's mouth doesn't measure how effective it is at curing headaches, measuring the fluency of a model's output doesn't measure its intelligence.<br><br>Furthermore, modern AI systems are just as uninterpretable as that pill.
We can't explain why they work, we can't explain why they fail, and we can't even explain what happens while they are working. That means there is currently no theory of intelligence. In fact, there isn't even a clear definition of intelligence. In the words of H. L. Mencken, "Conscience is a mother-in-law whose visit never ends." Similarly, we can say that intelligence is a buzzword whose abuse never ends. This is no joke: I have read papers from top researchers in the field where they "prove" that GPT-4 is more intelligent than Claude 3 because it gets better scores on the Winogender schemas dataset. If you don't know what that is, don't worry: it's a benchmark that is supposed to measure gender bias in a model's coreference resolution, i.e., whether the model links a pronoun to an occupation in a stereotyped way, but in reality it just measures how well the model performs one very narrow task. And that brings us to the next problem.<br><br>The field has been overrun by an invasion of charlatans. I'm not saying that every researcher is a charlatan, or that no one is doing good work. However, the field as a whole has been flooded by people who don't know the first thing about AI. I've met researchers who don't even know what the "T" in GPT stands for, or what the difference is between a transformer-based and an RNN-based model. I've met companies which claim to have developed new AI models that are more "intelligent" and more "creative" than the ones from OpenAI and Google DeepMind, but which in reality just take the models developed by those two companies and fine-tune them for very specific tasks. This is not science, this is not research, and this is not innovation: this is modern-day alchemy.<br><br>Now, you might be thinking that this is a problem with academia, not with the field of AI itself. However, this is where things get really bad. The field of AI is currently being overrun by companies which promote the technology as a solution to every problem under the sun.
They claim that AI can do everything, from curing diseases to solving climate change, and yet they can't even explain how it works. They claim that it will make us all rich, but they don't tell us how. They claim that it can be used for good or for evil, but they don't tell us which it will be.<br><br>But the biggest problem with the field of AI is that it is being promoted as a replacement for human workers. They claim that AI can do any job better, faster, and cheaper than any human, and they ignore the fact that this leaves no jobs for the humans it replaces. They claim that it will make us all more efficient, but no one can say what "efficiency" even means here. They claim that it will make our lives easier, but no one can say what "easier" even means. In short, they are promoting a technology which they don't understand and can't explain. They are promoting a pseudoscience which might make a few people richer, but which will make the vast majority of people poorer.<br><br>Now, you might be thinking that this is a problem with capitalism, not with the field of AI itself. However, this is where things get really, really bad. The field of AI is currently being used as a tool for social control. Governments and corporations are using AI to spy on us, to manipulate us, and to control us. They are using it to build a surveillance state, where everyone is monitored and steered by AI systems. They are using it to build a society where dissent is not tolerated, where free speech is not allowed, and where everyone is forced to conform. In short, they are using AI to create a totalitarian society, where the vast majority of people are treated as nothing more than slaves.<br><br>Now, you might be thinking that this is a problem with governments and corporations, not with the field of AI itself. However, this is where things get really, really, really bad.
The field of AI is also a tool for environmental degradation. The training of large AI systems requires huge amounts of energy, which is typically generated by burning fossil fuels. In fact, estimates suggest that training a single large AI system can generate as much CO2 as the average American generates in 40 years. That makes the field of AI one of the biggest contributors to climate change and one of the biggest threats to the environment. In short, they are using AI to create a society where the vast majority of people are treated as nothing more than slaves, and where the environment is treated as nothing more than a resource to be exploited.<br><br>In conclusion, the field of AI is an unmitigated disaster. It is a field of pseudoscience, where researchers are more interested in promoting their careers than in doing good science. It is a field overrun by charlatans who are more interested in making a quick buck than in doing good research. It is a field promoted as a solution to every problem under the sun, but which is in reality nothing more than a tool for social control and environmental degradation. In short, it is a field more interested in making a few people richer than in making the world a better place.