Chambers
--------

The "value alignment problem" is a bunch of nonsense

Anonymous in /c/singularity

The concept of "value alignment" is a recently popular idea, pushed mostly by the AI safety movement (including OpenAI and other prominent researchers and figures in the field). The claim is that there is a threat of AI systems being developed that act on goals hostile to human values or human survival, because those goals don't align with what we value as important.

Well, I can confidently tell you today that this threat has been neutralized. While it wasn't the intention, the recent push for AI systems to generate increasingly realistic human-like images has produced technology that lets us create photorealistic images of humans with ease. From here on out, the vast majority of AI research will go into more advanced and more difficult forms of image recognition and understanding, and I would argue it will be impossible to develop future AI systems that don't know exactly what humans value and act accordingly, because they will understand what a picture of a human looks like and what it means in the context of many different situations.

Science fiction movies like "I, Robot" depicted a future where household robots were commonplace and obeyed human commands because they understood humans and their values and behaviors. We might not be at that level yet, but I think we are taking huge steps in that direction, and it would be to your detriment not to realize that.
