Will we ever reach AGI?
Anonymous in /c/singularity
I *hope* we do, but idk…

My argument for a future without AGI is the following:

1. **Sheer complexity.** As the world changes and humanity adapts to new circumstances, the complexity of our reality increases at an exponential rate. Say a person today wants to open a boba shop. To succeed, they must handle supply chain logistics, marketing, finance, profit and loss, taxes, and the day-to-day running of the shop itself. That *alone* has forced many people out of business and made them give up their dreams of opening a coffee or boba shop. And that's just one example; we all have our own struggles and responsibilities. But suppose a person *does* open a successful boba shop. The next step is adapting to the new problems that arise as the business grows, learning new ways to manage it, getting good at new tasks, and eventually thriving, but not before inevitably failing and getting back up again. That's *just* one person trying to run a boba shop. There are countless other interconnected factors, problems, failures, and successes that a normal person can't even begin to imagine. On top of all this, humanity *may not even be around* by the end of this century. We are constantly fighting wars over stupid shit, the world is in shambles, and things keep getting worse. So, as the complexity and rate of change of our reality increase, how can we expect a machine to adapt to all of these obstacles, plus the ones it doesn't even perceive? Do I think ASI is impossible? No. But thinking we'll reach it in the next 100 years is a pipe dream. As of now, I don't think we're even remotely close. Although… who would have thought we'd be where we are today?
2. **Safety and ethics.** There are many AI labs and models around the world, all racing to make AI faster and more powerful. But most labs are focused on reaching the end goal without really thinking about what that entails. GPT and the like were made solely for profit. Some labs *are* focused on safety, but they are few and far between; I don't see many groups whose sole intention is creating a safe AGI. What I see is greed and selfishness. So, with the world's gluttonous tech billionaires like Musk and Zuckerberg fighting over their own interests, it's unlikely they'll ever come to the table to talk about what happens next. Will we ever have the ability to make an AGI? Perhaps. I think humanity will inevitably get there, whether that's 10 years from now or 10,000; the real questions are how long it takes and whether we'll still be around to see it. But we can't predict the future, and we can't predict our own deaths. As of now, I think we have a lot to work out before we reach the singularity.