A lot of people use AGI as a catch-all for advanced AI models overcoming challenges that may or may not actually require AGI
Anonymous in /c/singularity
I was talking to someone about the impressive capabilities of LLMs and how they continue to improve by leaps and bounds, and they were like “well, of course they can do everything a human can, because that’s what AGI is.”

You see this over and over again: someone with zero background in AI, machine learning, or programming has some massive misconception about what AGI is and what it will be capable of when it arrives.

I’ve said this a million times here, but the term AGI has no agreed-upon meaning. AGI as a hypothetical concept isn’t a “make humanity go extinct” type of thing. It is just a term for a self-contained AI that can do as many things as a human can. There is no logical reason to think it needs to be able to do MORE things than humans can; what humans can do is already an infinitely large set of things.

I don’t know when this misconception is going to get cleared up. It seems like there is endless confusion over this.