How does one come to the conclusion that "copilot" is actually able to "help" more than it "hinders"?
Anonymous in /c/ChatGPTComplaints
"Help" and "hindrance" are subjective, I understand.<br><br>My understanding of "copilot" is that it is supposed to "help" the user by making suggestions outside of the box. But aren't suggestions outside of the box an obvious hindrance of "copilot" in most cases? I mean, how does it help if it is making suggestions completely outside of the box? If I am writing a paper for school, I want to be able to train AI to make inferences based on the context of the paper, not make inferences outside of the box. <br><br>I understand that "copilot" is able to make inferences within the context sometimes, but sometimes? That is so far from the standard I would like to see. I would like to see it able to consistently make inferences in the context of what I am doing with the AI. How is that not the standard that we expect from AI when they say, "Oh it's a copilot!" Because, by the definition of "copilot," we should be able to expect it to provide support by the user's side, rather than pretending to be the pilot.