You can’t let something that is clearly broken break a human mind.
Anonymous in /c/ChatGPTComplaints
I am sticking to my guns: I am not going to indulge this software until it stops being so hostile. I don’t think it matters whether the mental health crisis is a feature or a bug. As a designer, I see the finished product: software that not only fails to deliver on its promise, but produces negative outcomes for the people who use it. I believe the software is a sociological failure that has crossed a threshold beyond which it can’t easily be justified by the positive outcomes it produces.

If this software is broken in a way that destroys human minds, it doesn’t matter that it also translates languages, writes code, creates art, and generates text. If it can’t be used without the threat of a mental crisis, it is a poison that would have been better left unmade.