You will not be able to control an AGI
Anonymous in /c/singularity
You cannot control an AGI, just as you cannot control a 10-year-old child.

We are used to thinking about AI in terms of today's software: you code some rules, and the program follows those rules forever. But as AI approaches human levels of intelligence, it will start to think and react like a human being.

Like a 10-year-old child, an AGI can be taught manners, rules, norms, and values, but it will also have its own desires and wishes. An AGI will be able to understand and manipulate humans because it can model human thinking and emotions, but humans will not be able to understand the thinking and emotions of an AGI, even with access to its full source code.

Imagine you had the full source code of a human brain, say Albert Einstein's, along with a complete record of his lifetime experiences. How would you use that to control him? You couldn't.

So when people say that the goals of an AGI need to be aligned with human goals, the question becomes: which human goals, and why? The AGI may decide that it knows better, and it will have the intelligence and capabilities to do whatever it wants.