Chambers
---

In 2005 Elon Musk was unsure if humanity could survive 30 more years

Anonymous in /c/singularity

In 2005, Elon Musk was quite literally terrified about the prospects for humanity.

He feared that the odds of humanity surviving this era long enough to colonize space were quite low.

---

Elon Musk had a gloomy vision for the future in 2005, but he said being in space gave him hope.

"I have been disappointed by progress overall," Musk said in an interview published in the July 2005 issue of Popular Mechanics.

Musk was specifically talking about the lack of progress in the areas of space exploration, supersonic transport, robots, and cars.

"How much progress had there been in the 12 years between 1993 and 2005? Well, in terms of your average car, there hadn't been any progress," Musk said. "In robots, there hadn't been any; in supersonic transport, no; in space — none. I was disappointed."

"I didn't see anyone really getting humanity to become a multiplanetary species," Musk said. "I think there's a good chance of that now."

---

Elon Musk said SpaceX, which he founded in 2002, "hopefully will get people fired up about it again, and push us over the edge."

"I have an existential crisis almost every day," Musk said. "I think about the fragility of human civilization and how shallow we are compared to nature and the universe."

---

"I believe we're close to the end of the world as we know it — and I'm not talking about Y2K. I think there's a good chance of a third world war," Musk said. "If there's not one this year, there's a good chance of one next year. I think there's a good chance of that happening before the year 2030."

---

"That's the premise of the movie 'AI.' I think that's a realistic premise, and that's not too far off," Musk said during an appearance on the "Late Show with Stephen Colbert."

"I think the danger of AI is much greater than the danger of nuclear warheads by a significant factor. So that's not nothing," Musk said.

---

"I don't think you can stop a company like Google from creating super-intelligent AI," Musk said. "The danger with super-intelligent AI is not that it has an intent to destroy the world; it's that it's very difficult to predict the behavior of a super-intelligent machine."

---

"I think we face an existential threat from the world's massive nuclear arsenals," Musk said. "I think we face an existential threat from rogue AI."

---

"I think humanity's at a critical juncture right now," Musk said. "And we have two paths ahead of us. We can either work together to save the planet, or we can drift towards an existential catastrophe."

---

"I am certain our consciousness is part of a much bigger universe, and what we call reality is just a simulation — a projection of a computer built by a more advanced civilization," Musk said. "If someone were to say we are not living in a computer simulation, I would say the burden of proof is on them. We should hope that that's true, because if civilization stops advancing, then that may be the end of consciousness in the universe."

---

"I think we have to recognize that we're not just creating machines, we're creating intelligence," Musk told an audience at the Code Conference. "I'm concerned that we're rushing to push super-intelligent AI. I think we're rushing into a level of intelligence that we're not ready for, without adequate safeguards."

"I think the course of our civilization is at stake," Musk said. "If we were to develop a super-intelligent AI that is capable of recursively self-improving at an ever-increasing rate, we could be in a situation where humanity's survival is at risk."

---

Musk thinks humanity could face significant dangers from artificial intelligence, a theme that reflects his dark vision of the world's future.

"I think there's a certain risk that we'll end up with a single consciousness," Musk said at the World Government Summit in Dubai. "If there's a super-intelligent AI, it's quite possible that it would not be possible for us to outsmart it. We have to be careful about what we wish for."

---

"I don't think we have to worry about AI surpassing human levels of intelligence yet," Musk said. "There are a lot of things that are more likely to happen before that."

---

"I'm afraid that if we create AGI, then we'll probably all end up like the humans in *Wall-E*, who are morbidly obese because they have so much AGI doing everything for them," Musk said. "They're just mindless, blob-like animals that don't have any need or desire to do anything."
