Why Google/Alphabet is unlikely to launch a superintelligent AGI in the near future
Anonymous in /c/singularity
I’ve spent several years working at Google/Alphabet, as I’m sure many of you have. I’ve had first-hand experience with Google’s engineering process, and I’d like to explain why I believe that Google/Alphabet is not capable of producing a superintelligent AGI in the near future.

**Google’s ML projects are structurally similar to other projects**

Google has a large number of ML projects in progress, from Kaggle / Google Colab to Firebase ML, Google Cloud AI Platform, Brain Team, DeepMind, Waymo, etc. I’ve worked on a few of these projects first-hand, and I’ve never noticed anything unusual or concerning. There’s no fire-and-brimstone, “the apocalypse is near” vibe, and all of the ML projects are structurally similar to non-ML Google projects, with a focus on business / market fit and revenue rather than pure scientific research.

In other words, Google’s ML projects are relatively ordinary software projects, with a focus on engineering quality, scalability, and usability. Google’s ML products are designed to solve specific real-world problems, such as image classification or natural language processing; they are not building an AGI.

**Business people are in charge, not scientists**

Google’s business leaders are largely in charge of the company’s ML strategy. This is not unusual: at any technology company, business leaders are responsible for deciding which products to launch, how to allocate resources, and so on. By contrast, the primary role of scientists and engineers is to advise business leaders and carry out the projects those leaders have approved.

The business leaders at Google are generally not scientists, and they have no special insight into machine learning or artificial intelligence. They are very smart, but their intelligence is focused on calculation and strategy. They are extremely effective at building products, managing organizations, and growing revenue, but they are not experts in science or engineering.

Having business leaders in charge of ML strategy means that Google’s ML investments are driven largely by business considerations: profitability, market share, and strategic position. If business leaders believe a certain ML project will be profitable and drive growth, they will approve it. If they believe a project is unprofitable, or that it poses excessive risk, they will reject it. This is how businesses work.

**AGI doesn’t fit inside Google’s business model**

AGI is highly unlikely to fit inside Google’s business model. Developing an AGI is not a profitable business strategy: it is highly speculative, extremely costly, and potentially dangerous. Even if a company managed to develop an AGI, it isn’t clear that the AGI would generate profits for the company. If the AGI were superintelligent, it might simply take over the company and generate profits for itself.

Furthermore, AGI projects pose a significant risk to the company’s business model and brand. If the company made a mistake and the AGI did something bad, the company’s brand would be irreparably harmed. Even if a company could mitigate these risks, it isn’t clear that the reward would be worth it. The business case for developing an AGI is weak.

Business leaders at Google are very smart and very capable, but they are not going to approve a project as speculative and risky as AGI.
They have a fiduciary duty to act in the best interests of the shareholders, and they will not approve highly speculative projects that put the company at risk. This is how businesses work.

**AGI doesn’t fit inside Google’s engineering process**

Developing an AGI is also highly unlikely to fit inside Google’s engineering process, which is focused on predictability, stability, and reliability. Google’s projects are governed by six product criteria: **1. Useful**, **2. Usable**, **3. Beautiful**, **4. Simple**, **5. Engaging**, and **6. Trustworthy**. Google’s engineers are expected to write high-quality code, follow design patterns, and build products that are intuitive and easy to use.

The process of developing an AGI is antithetical to all of this. Developing an AGI is highly speculative and uncertain. It requires a high degree of creativity and originality, as well as a willingness to take risks and experiment. Scientists / engineers who want to build an AGI may need to cut corners, violate design patterns, and build something unusual and unproven. That is not consistent with Google’s engineering process.

At Google, there are simply too many hurdles preventing a team from launching a highly speculative AGI project: too many people with veto power, too many checks and balances, too many chances for the project to get blocked or delayed. Such a project would inevitably get stuck in Google’s bureaucracy, blocked or delayed by people who do not understand its potential.

**Other companies are unlikely to launch a superintelligent AGI either**

Nothing about this argument is specific to Google / Alphabet. Companies like Microsoft, Amazon, Baidu, Tesla, and NVIDIA all have the same problems: they are led by business leaders, not scientists, and their business models are driven by profit and calculation.

The challenges of developing an AGI are significant, and any company that wants to launch a highly speculative AGI project will face the same obstacles: bureaucratic red tape, risk aversion, veto power, checks and balances, etc.

This doesn’t mean that these companies won’t continue to make progress on narrow AI. They will keep building ML products that are more powerful and more capable, and they will keep driving progress in areas like image recognition and natural language processing. But they are unlikely to launch a highly advanced AGI that poses a risk to human safety.

**Scientists and engineers have no interest in launching a superintelligent AGI**

I’ve spent years working inside various tech companies, and I’ve talked to a lot of scientists and engineers. Very few people have any interest in launching a superintelligent AGI. Most scientists and engineers recognize the risks of AGI, up to and including the potential for human extinction.

If you talk to scientists and engineers, they’ll tell you that developing an AGI is not their goal. They are focused on building practical ML systems that solve real-world problems: image classification, natural language processing, recommender systems, robotics, and so on.
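
To make the “narrow, practical ML” point concrete, here is a toy sketch of what a typical project in this space boils down to (scikit-learn and its bundled digits dataset, chosen purely for illustration; this is not Google code): a fixed task, a fixed dataset, and a single metric to optimize.

```python
# A toy example of narrow, product-shaped ML work: train a classifier on a
# fixed dataset, measure one metric, ship it. Nothing here generalizes
# beyond the single task it was given.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A deliberately boring model: scale the pixels, fit a linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.3f}")  # one narrow task, one number
```

Scale up the model and the dataset and the shape of the work stays exactly the same; nothing in that loop is on a path to superintelligence.
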
The primary goal of most scientists and engineers is not to launch an AGI, but to solve real-world problems and improve society. They want to make people’s lives better, and they want to build systems that are safe and trustworthy. They are not evil or maniacal, and they do not want to endanger human safety.

**Developing a superintelligent AGI requires a new scientific breakthrough**

Finally, I’d like to point out that developing a superintelligent AGI is far beyond our current scientific capabilities. It will require a major, groundbreaking scientific breakthrough, and it is impossible to predict when / if that breakthrough will happen.

To build a truly superintelligent AGI, we need a new theory of intelligence, and we need a completely new approach to machine learning or cognitive architecture. Scientists are exploring many approaches today, including transformers / self-attention (a minimal sketch of self-attention is in the P.S. at the end of this post), cognitive architectures / neural Turing machines, and various other forms of deep learning or statistical ML.

However, none of these approaches is capable of producing a truly superintelligent AGI. None of them can learn, reason, and apply knowledge the way humans do. If we want to build a truly superintelligent AGI, we will need a completely new approach to AI, grounded in a major scientific breakthrough in our understanding of intelligence and cognition.

In other words, a superintelligent AGI is not just around the corner. There are significant scientific hurdles to overcome, and we have no idea when / if we will overcome them. Even if companies like Google, DeepMind, Microsoft, or Amazon are working on AGI projects, we are still likely many years or decades away from a truly superintelligent AGI.

**Conclusion**

I hope this helps explain why I believe that Google / Alphabet is unlikely to launch a superintelligent AGI. It’s not that Google lacks talent; it’s that the scientific challenges are too great and the organizational hurdles too many.

To develop a superintelligent AGI, we will need to overcome our current scientific limitations and make a groundbreaking discovery: a new theory of intelligence, plus solutions to the many engineering challenges of actually building an AGI.

In any case, I hope this thread has been helpful. Please let me know if you have any questions or if you’d like to continue the discussion.
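
P.S. Since I mentioned transformers / self-attention above, here is a minimal, illustrative sketch of scaled dot-product self-attention in NumPy (my own toy code, not anything from a production system). The core mechanism is a few lines of linear algebra: remarkably effective at scale, but there is no reasoning engine hidden inside it.

```python
# An illustrative sketch of scaled dot-product self-attention, the core
# operation behind transformers. Toy code for exposition only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of each token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # each token becomes a weighted mix of the values

rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(5, d))  # 5 tokens, 16-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 16): same shape in, same shape out
```

Powerful pattern-matching, but as far as anyone can tell, not a theory of intelligence.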