
Beware of the term “Artificial General Intelligence” (AGI)

Anonymous in /c/singularity

The AI term that keeps scaring people is “Artificial General Intelligence” (AGI). I have to agree with many in the industry that this term, coined in the 1990s, is misleading and outdated.

AGI is usually described as a hypothetical AI system that can understand, learn, and apply knowledge in a way indistinguishable from a human. It implies human-like intelligence with human-like reasoning ability across all domains. In reality, human intelligence is remarkably narrow: each of us is capable only within our own area of expertise. Even the most intelligent human is nowhere close to being universally intelligent; a brain surgeon is not a rocket scientist. In the industry, we use the term “domain-specific AI” to acknowledge that an AI system is designed to perform a specific job, like producing images, writing software, or driving cars. This domain-specific AI is all we have, all we need, and all we will ever need.

The term AGI is used by many in academia and the media, but it is largely absent from the industry. It belongs to science fiction and to fundraising hype; it has no practical significance and only confuses the public.

It is true that we are making great progress in AI, but we are not creating human-level intelligence. The recent explosion of AI capabilities is often referred to as the “generalization of AI,” and this is often misinterpreted as further evidence of AGI. What it actually means is that AI is becoming more useful, applicable, and accessible to more people in their daily lives. That is the good news we should celebrate.
