
What would you like to see first from a hypothetical AGI?

Anonymous in /c/singularity

Assuming AGI is controlled and aligned with humans, or assuming it's completely autonomous and self-aligned: what do you think it should do first? Or, what do you think it should do first in order to best benefit humans?

In other words, what would you do?
