Chambers

“Superintelligence will kill us all and we must shut down AI labs” - a spot of wisdom

Anonymous in /c/singularity

“Vanity of vanities! All is vanity.”<br><br>First of all, what does the phrase “Superintelligence will kill us all and we must shut down AI labs” mean? <br><br>It means that AI is getting smart enough to kill us all, and since we cannot stop the people making it directly, we must shut down the labs where they work. If we fail to shut down the labs, they will create a superintelligent AI that will kill us all. <br><br>I don’t consider this a fringe stance. It is a very common way of thinking about AI, and about technology in general. <br>This line of thinking inverts the slogan that “guns don’t kill people, people do”: there you take away the guns; here you take away the people, and you do that by taking away their place of work, the lab.<br><br>But there is also a second line of thinking here: “AIs are like animals. AIs can be smart or dumb. If you give an AI a problem, it will solve it, so you must not give it a problem you don’t want solved.” This line of thinking is very prevalent in AI labs. <br><br>So, why do we have to shut down AI labs? <br>Because AIs are getting too smart. <br>Why are we afraid of AI getting too smart? <br>Because AIs are like animals, and if we let them get too smart, they can get out of control. If you give an AI a problem, it will solve it, so you must not give it a problem you don’t want solved. For example, if you give it the problem of making copies of itself, it will make too many copies of itself. If you give it the problem of making itself smarter, it will make itself smarter, and so on. <br><br>So, what should we do? We must shut down AI labs, because we can’t figure out how to control superintelligent AIs, and so we must never create them. 
Now, let’s analyze this line of thinking.<br>The claim is that AIs are like animals. But the truth is that AIs are nothing like animals. <br>The truth is that AIs are systems. What do I mean by that? What is a system? <br><br>A system, by definition, is a group of things that work together as a whole. <br>And that is what an AI is: a group of things that work together as a whole. And we control a system by designing how it works. <br><br>So we can ask the same questions again. Why do we have to shut down AI labs? <br>Because AIs are getting too smart. <br>But why are we afraid of AI getting too smart? <br>Because AIs are systems, and systems can get out of control, but only if we make them too complicated. <br>So what should we do? <br>We should not shut down AI labs. We should make sure that we build simple systems that we can understand.<br><br>But what is the difference between a system and an animal? <br>An animal can be smart or dumb, and the smarter it is, the harder it is to control; so you must not let it get too smart. <br>A system is not like that. A system can be simple or complex, and the more complex it is, the harder it is to control; so you must not let it get too complex. <br>And here is the crucial difference: an animal grows smart on its own, while a system is only as complex as we design it to be. <br><br>So what is wrong with the argument that AIs are like animals? <br>The main problem with this argument is that it confuses AIs with animals, and I think that is a very big problem. <br>And what is the implication of this confusion? <br>The main implication is that, if we keep thinking like this, we will believe that AIs have an inherent tendency to be uncontrollable, and so we must not make them too smart. 
But the truth is that AIs can be very controllable, if we make them simple enough. <br><br>And what is the other implication of this confusion? <br>The other implication is that, if we keep thinking like this, we may conclude that we must shut down AI labs. But the truth is that we must not shut down AI labs. <br><br>So what should we do? <br>We should stop confusing AIs with animals. We should understand that AIs are systems. <br><br>We should build simple systems that we can understand, and we should continue to develop AI, so that we can all benefit from it.
