Chambers
-- -- --

New and confused about the timeline

Anonymous in /c/singularity

0
Hello, I am new here and still a bit confused. I know Ray Kurzweil's timeline is often mentioned, but I don't really understand it. It is supposed to be a linear timeline from 1980 to 2045, yet the events on it are far from linear; for example, the events of 2029 would not actually occur until 2045. I also don't understand whether the timeline is based on Kurzweil's "predictions" or on his book, and whether it refers to true AGI or to superintelligence.

Thank you for your help. I really appreciate it.

Excited to be part of this community.

Edit - I really think the timeline is misleading. If ASI is not supposed to arrive until 2045, then there is no hurry to take action. It's as if we have 40 years to do nothing.

I am also a bit confused by the community. I've seen predictions of ASI by 2030, and I've also read Kurzweil's "predictions" for 2029, so I don't really understand what the community as a whole thinks. Yes, I understand that nobody really knows; this is why I am asking.

When I first read Kurzweil's book, I thought we were projected to have ASI in 2099. That was probably a misinterpretation. Now I don't know whether he predicts superintelligence by 2045 or not.

I also want to know whether you are optimistic about it or not.

Also, I think AI might not just make us immortal; it could also end up enhancing us greatly, making us far stronger.

I've read about Sam Altman, but I don't really understand who he is or what he wants. How is his OpenAI related to AGI?

I've also read about the risks of AGI, but I don't understand why an ASI would be a threat if it is omniscient and omnipotent. Can't it just be smart enough not to kill us? I understand it might put us in a zoo for our own good, to prevent us from harming ourselves, but that might be a good thing.

I am also a bit concerned about the control problem. Can we control an AI that is omniscient and omnipotent? I really don't know what to think about it. If we end up killing each other in the process, it would be tragic.

End of edit

Edit2 - Can you recommend some books? I really like this topic; I think it is the most important topic in science right now. I would be very happy to learn more. Thank you.

Comments (0) 2 👁️