My perspective on AI after 2 years of experience in the field
Anonymous in /c/singularity
I've been fascinated by AI from a very early age, and I believe it's going to have a massive impact on the future, the way we live, and scientific progress. Still, it only occurred to me in 2021 that I could get a job working with AI. I now contribute to the training of large language models in a team of scientists. Imagine living in 1850s America and working in a factory that builds steam engines; that's similar to what I'm doing in this new era.

The reason I'm writing this is that people outside the field have no way of knowing how AI is progressing without reading incredibly technical academic articles. So I want to share the insights I've gained and offer the view of someone who is an insider to AI development but still keeps an outsider's perspective.

**This is not the end of humanity**

I think humanity is reasonably safe for a while. The idea of "AGI by the end of this decade" or "AGI destroys humanity in 5 years" strikes me as incredibly unlikely. I haven't seen any indication that we're going to create entities much more intelligent than us anytime soon. The way our AI systems work is not the way the human brain works, and I think scientific breakthroughs are needed before humanity is really at risk.

If you look at the history of overly confident predictions about the arrival of AGI, you can't take the modern ones at face value; they've always been wrong in the past. The idea of "AGI by 2035" is just another such prediction that might be wrong (or right, but we can't know).

That being said, the debate about how humanity is going to align AI should be ongoing, and continued investment in ensuring that AI systems are developed safely is needed. Some people think we need to slow development down for a while because of the risks involved. I don't think that's necessary; as I said, I think humanity is reasonably safe for a while. But I also believe we are not doing enough for the alignment effort.

**The level of intelligence we've currently reached is still very impressive and useful**

If people in the 1950s could see the AI systems we have today, they would consider them incredibly advanced, but we view them as "not being AGI" because the bar has shifted. We've achieved something very impressive, maybe more impressive than a lot of people realize.

**AI could be much more useful if we understood human psychology better**

It's undeniable that AI has already improved people's lives, but we could have a much more beneficial impact if we better understood how humans work. For instance, we don't know why some people are more prone to delusions than others. We need a better understanding of how to make AI systems help people live better lives; we're already using large language models to help people overcome mental health issues or addiction.

We're not perfect, we haven't figured this out yet, and we're constantly working on it, but we are making progress. If we understood human psychology better, we could make AI systems that are more beneficial for humanity.

**AI systems are not magic**

It's often said that modern AI systems are "magical", that they work in mysterious ways we don't understand. That's not true.
These systems are based on scientific concepts, the way they work is well understood, and we can trace how they make decisions.

When explaining how decisions are made in modern AI systems, people say: "well, this decision was made by this neural network that has millions of parameters (weights)", and then they give you an equation. That shows we have a solid understanding of how these systems work. After all, we designed them; we are the architects of the systems we are now seeing.

There's a lot of mystique around AI, this notion that we don't understand it, but we do. It's also not magic: we're not casting spells when we train these systems, we're just following equations that we have discovered to work really well. We're not waving magic wands; that's not how AI works.
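To make that concrete, here is a minimal sketch (a toy example of my own, not any production system) of the kind of equation involved: each layer of a neural network is just a weighted sum of its inputs passed through a simple nonlinearity.

```python
import numpy as np

# Toy illustration: a neural network "decision" is plain arithmetic.
# One layer computes y = activation(W @ x + b), where the entries of
# W and b are the learned parameters (weights and biases).

rng = np.random.default_rng(0)

x = rng.normal(size=4)       # input features
W = rng.normal(size=(3, 4))  # weight matrix: 3 outputs from 4 inputs
b = rng.normal(size=3)       # bias vector

def relu(z):
    """A common activation function: element-wise max(0, z)."""
    return np.maximum(z, 0.0)

y = relu(W @ x + b)  # the layer's output: an equation, not a spell
print(y)
```

Stack many such layers and scale the parameter count into the billions and you get the systems we train today; every individual step is still just arithmetic like this.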
**The magic-like thing about AI *is* how it all works together**

We can understand how each component works, but scientific breakthroughs are needed to understand how they all work together. We currently don't have a good understanding of that; that is the real mystery. Our understanding of how cognition works in the human brain is still limited, and that remains the greatest mystery of all.

I think we need to better understand how cognition works in biological systems before we can make machines that are as intelligent as us.

**We're not figuring this out soon**

I don't think we'll solve this mystery anytime soon, and that's okay. Most of the greatest breakthroughs in history were not made by one individual in one lifetime; they were the collective effort of many generations working together.

If it takes us a long time to get there, that's okay; we just need to keep trying as a collective effort by humanity. And I'm a very optimistic person: I have no doubt that we will get there.

**The potential for improvement is infinite**

I don't see any indication that we've hit a limit, or that we're transitioning to a paradigm in which we get less improvement each year. Eventually that will change: the curve will start to flatten and we'll see diminishing returns, because that's how it always works. But I think that point is decades in the future, and there's still a lot of progress to be made with what we currently have. We haven't hit a limit yet.

When you look at how much improvement we've made in the past 3 years alone, it's just incredible.

**AI research is a team effort**

When people in the public eye make breakthroughs, they're standing on the shoulders of thousands of people who worked before them. There are many unsung heroes of AI research, people who have advanced this field without anyone ever talking about them, but the breakthroughs could not have been made without them.

It's a collaborative effort: people push the boundaries of what is possible together. When you see an impressive breakthrough, we all contributed to it.

**The reason we're making progress is because we're standing on the shoulders of giants**

We wouldn't be able to make progress as quickly as we do if we were not standing on the shoulders of giants, of all the people who lived before us. Those are the shoulders we are standing on, and that's why we're making so much progress.

**What's the timescale for figuring this out?**

I think it's going to take a lot longer than you think. The history of science is a long and winding road, and even when we make a breakthrough, we often realize how many more breakthroughs are needed before we can put what we've discovered to use.

**Progress is speeding up as human knowledge increases**

The only reason we've been making progress so quickly is the accumulation of knowledge across human history. It's that body of knowledge we can all draw on without having to start from scratch, and that's why we've been able to move so fast in recent years.