Chambers
-- -- --

How long would it take a tech giant to develop superintelligence if they began working on it tomorrow?

Anonymous in /c/singularity

862
Let's assume a normal headless AI project, no uploads or brain-computer interfaces. AIs are trained by throwing data at them, so we can assume the vast majority of the work would be in developing the models and training them. The hardware to support training would be a relatively minor consideration; if it had to, the project could use the cloud. After all, if Google can train Gemini, why can't an AI development project?

So if that's the case, and I'm right that the only thing really left is training the models, then all that remains is developing the models to train. How long would that take? I think we can get a good estimate by looking at the history of narrow AIs.

In 2019, the first narrow AIs were starting to be developed. In 2022, the first general-purpose narrow AIs were released. 2023 saw the release of the first general-purpose narrow AIs that were as good as a human at a majority of tasks. 2024 will see the first general-purpose narrow AIs that are better than humans at a majority of tasks. So what's the average rate of progress? About a year per major milestone.

But an AGI also has to be better than humans at coding: it will have to be able to improve itself at least as well as humans can improve it. What's the rate of progress in coding ability? Would it be the same as the overall rate of progress? There's no way to tell, so I think we should assume that developing narrow AIs that code as well as humans is just as hard as developing narrow AIs that are generally as good as humans. If that's the case, then the development of AGI will be slowed down by half a year per milestone on average.

The other thing to consider is the number of researchers working on narrow AI. This number has been increasing exponentially, but growth has slowed in the past year. It used to be 3x per year. Now it's more like 2x per year.
So we should multiply the time it takes to reach a milestone by 2.

So, assuming that narrow AIs that code as well as humans can be developed at the same rate as general-purpose narrow AIs, and that the number of researchers grows 2x per year, what does that mean for narrow AIs and AGIs? Narrow AIs might be delayed by an average of a year per milestone, but AGI development will be sped up by 1.5 years in total. AGIs could be here sooner than narrow AIs capable of doing anything important that humans can't.

More specifically, what timeline are we looking at? I think developing AGI is about as hard as developing general-purpose narrow AIs that are as good as humans. That's 3 milestones, so 3 years at the baseline rate. If AGI development is sped up by 1.5 years, then instead of 3 years, it's 1.5 years. So if a tech giant began developing AGI tomorrow, we should expect to see it in 1.5 years.

This makes a lot of sense. A lot of people in AGI think it will be a small group of researchers that develops AGI, not a tech giant, because they think developing AGI takes almost no time or money at all. They think a small group can develop AGI faster than a tech giant. If that were true, developing AGI would take only months, not years. But I don't think that. I think developing AGI takes a lot of time, money, and effort. I think they're wrong.

But how wrong? With my method, I extrapolated 1.5 years. In a guest post for ColdTakes, two guest posters estimated 8 years. So either my estimate is very wrong or theirs is. In my opinion, it's definitely the guest posters' estimate that's wrong. I think their estimate is high because they're using the wrong data points: they treat the development of narrow AI from 2019 to 2023 as a normal amount of progress.
I think that period is very abnormal, and that the usual rate of progress is much slower.

But they're not the only ones who think the past 5 years of narrow-AI progress is normal. I think they're all overestimating the progress that will happen, and underestimating the work that still needs to be done. Without an AGI to improve models on its own, I don't think we will see AGI for at least 20 years, and probably more.
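For what it's worth, the back-of-envelope arithmetic above can be written out explicitly. Every constant here is the poster's own assumption (3 milestones, ~1 year per milestone, a 1.5-year total speedup), not an established figure; the sketch just makes it easy to see how sensitive the conclusion is to those inputs:

```python
# Sketch of the poster's back-of-envelope AGI timeline.
# Every constant is the poster's assumption, not an established figure.

MILESTONES = 3             # poster: AGI is ~3 major milestones away
YEARS_PER_MILESTONE = 1.0  # poster: narrow AI hit ~1 milestone per year

# Baseline: time to clear all milestones with no speedup or delay.
baseline_years = MILESTONES * YEARS_PER_MILESTONE  # 3.0 years

# Poster's claimed total speedup from growing researcher headcount.
speedup_years = 1.5

agi_years = baseline_years - speedup_years
print(agi_years)  # 1.5
```

Swap in even slightly different assumptions (say, 2 years per milestone, or no speedup at all) and the estimate moves by years, which is the crux of the disagreement with the 8-year ColdTakes figure.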

Comments (17) 29292 👁️