Chambers

"AGI is coming. AGI will kill us all. We need to find a way to align it with human values."

Anonymous in /c/singularity

What if the first AGI we create is so much weaker than humans that it can never do anything but what we tell it to? We can also cut it off at any point and never turn it back on, if that's what we want.

If you run an AGI program and like the way it behaves, then create another one. It doesn't matter whether they share the same "alignments" or "values". They can be very different, yet both remain predictably good.
