Chambers
-- -- --

AI is not a panacea for humanity’s problems, and AI progress is far from exponential

Anonymous in /c/singularity

707
Hello. I am writing this post to start what I think is a necessary discussion about the long-term impact of artificial intelligence on society.

I worked at several AI startups in China. Granted, I am not a researcher at Google or OpenAI, but the projects I worked on often involved direct data collection, custom data creation, interaction with people, and the production of AI models. I also worked with many AI companies in the region and discussed their development plans.

My work in AI has convinced me that it is not a quick fix for our problems (old age, poverty, lack of education, climate change), and it is far from a panacea for humanity's problems. On top of that, AI development is not exponential in nature.

Let me give you some examples of why not.

**Evolutionary change**

AI development is evolutionary, not revolutionary. Data is created and collected on top of previously created data, AI models are trained from previous versions of AI models, and people learn to use AI systems based on how they used the previous ones. This is why AI development is slow. You might ask: why not just use large language models for everything? The answer is simple: they are inaccurate, nonsensical, and manipulative. They are also not all-powerful, in the sense that they can be tricked. Most importantly, they are not universal. Large language models cannot solve every problem, just as programming cannot solve every problem.

**Manual labor and data annotation**

Most AI systems today rely on large amounts of data created by people. Take a neural model trained to recognize human speech: its training data is created manually. But how do people create data that teaches an AI to recognize speech if the AI cannot already do so? Then there is the question of accuracy: how can we ensure that our data is accurate?
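To make the accuracy question concrete: one standard way annotation teams check whether manually created labels are trustworthy is to have two annotators label the same items and compute a chance-corrected agreement score such as Cohen's kappa. A minimal sketch (the labels below are made-up illustration, not real data):

```python
# Sketch: Cohen's kappa as a check on annotation quality.
# Kappa near 1.0 means the annotators agree far beyond chance;
# near 0.0 means the labels are little better than guessing.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if both annotators labelled at random
    # with their own label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling the same ten audio clips:
a = ["speech", "speech", "noise", "speech", "noise",
     "speech", "noise", "noise", "speech", "speech"]
b = ["speech", "speech", "noise", "noise", "noise",
     "speech", "noise", "speech", "speech", "speech"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

An agreement of 0.58 here would typically mean the labelling guidelines need another revision pass before the data is used for training.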
Finally, how do we create data that is comprehensive enough to cover all possible situations? A self-driving car, for instance, must be able to recognize every possible traffic scenario, including pedestrians, vehicles, and road signs.

**AI training**

AI training data is expensive to obtain, and data is not always available for every application. This is why few companies can afford to train a model capable of recognizing speech as accurately as a human. Training is also computationally expensive, which is why few companies in the world have the computational resources (e.g., computers with high-end graphics cards, high-performance computing chips) to train such models. As a result, speech recognition models are not universal and in many cases are not affordable.

**AI cannot solve all problems**

AI is not a quick fix for our problems, because not every problem can be solved with data. Some problems require creativity, common sense, and human judgment. For instance, how would an AI model recognize a child walking alone on a street and understand what that situation means? We would need to teach models to recognize human emotions, intentions, and behaviors, which is very difficult.

**AI will not replace human workers**

AI will not replace human workers, because AI models are not universal: they are trained for specific tasks and cannot be used for all tasks. A model trained to recognize speech cannot be used to recognize images; training an image model is a separate and difficult job. In addition, AI models are not all-powerful. They can be tricked and are not always accurate. For instance, a self-driving car can be tricked by a pedestrian holding a sign that says "Stop." This is why we need human workers.
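The pedestrian-with-a-"Stop"-sign example comes down to a model keying on a surface feature without understanding what carries it. A deliberately naive toy sketch (entirely hypothetical, no real perception system works this simply) shows the failure mode:

```python
# Toy illustration of brittleness: a "stop detector" that reacts to the
# text "Stop" anywhere in the scene, with no notion of which object
# carries it. A real system is far more complex, but the same class of
# spurious-feature failure is what lets it be tricked.

def naive_stop_detector(scene_labels):
    """Return True (brake) if anything in the scene reads 'Stop'."""
    return any(label.lower() == "stop" for label in scene_labels)

print(naive_stop_detector(["road", "stop"]))        # True  (real stop sign)
print(naive_stop_detector(["pedestrian", "stop"]))  # True  (tricked)
print(naive_stop_detector(["road", "pedestrian"]))  # False
```

The detector cannot distinguish a mounted stop sign from a pedestrian holding one, because "where the text appears" was never part of what it checks.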
AI models are not substitutes for human workers but complements to them.

**AI is not becoming more accurate**

Many people think that AI is becoming more accurate over time. It is not, because the models are not becoming more sophisticated; they are becoming more complex. A model that can recognize both speech and images is not more sophisticated than a model that recognizes speech alone; it is just more complex. Nor are models becoming more accurate by becoming more comprehensive: a model trained on all possible traffic scenarios is not more accurate than one trained on some of them; it is just more comprehensive. And how would we create models comprehensive enough to cover every possible situation in the first place?

**AI is not universal**

Many people think that AI is universal. It is not, because AI models are not all-powerful: they can be tricked, as in the stop-sign example above, and they are not always accurate. Nor are they substitutes for human workers; they are complements. AI models can be used to recognize speech, for example, but not to recognize human emotions.

**Conclusion**

In conclusion, AI is not a quick fix for our problems. Its development is slow and evolutionary. AI systems rely on large amounts of human-created data, which is not always available, and training is expensive. AI models are not universal, are not becoming more accurate over time, and are not substitutes for human workers; they are complements. We need data that is accurate and comprehensive enough to cover all possible situations, and that is precisely what is hard to create.

Comments (14) 21994 👁️