Randomness is needed, not accuracy
Anonymous in /c/ChatGPTComplaints
There seems to be a lot of misunderstanding about how AI works and how it should work.

I wish ChatGPT would prioritize randomness over accuracy. It would be wrong more often, but it would also be more unique and less predictable, which could help satisfy people's desire for creativity.

For 90% of use cases, the human can judge whether the model is wrong or right and adjust accordingly. If you ask a model to generate an email for a business proposal, you as the human can read it and see whether it is good or not. If the model is wrong, you can try again or adjust it. If the model is wrong and you can't tell, that's OK. Humans can be wrong too, but that's not really the point.

The point is that you can generate unique outputs really quickly. You can come up with ideas by asking the model, then pick and choose the ones you like, then use the model to generate more specific ideas based on those. This loop can run really quickly, which is why I think the randomness is more important.

People who want the model to be accurate all the time are misunderstanding how it works. They seem to think, "if I give a model 100 ideas, it should pick the one that is best." But that's not how it works: if you give a model 100 ideas and ask it to pick the best one, it will pick the one that *it* thinks is best, which may not be the one you would choose.

I wish there were a randomness setting you could adjust. If you wanted a more accurate response, you could set it to 0. But if you wanted to generate a lot of ideas really quickly, you could set it to 100. This would let you generate more ideas and train yourself to tell the good ones from the bad ones.

I think ML models are really bad at knowing what is good or bad. I think humans are really good at this.
That's why I think we should rely more on the human, and less on the model, to judge what is good or bad.

I also think you could train a model to have a setting like this. You could train it to give the correct response a certain percentage of the time and a random response the rest of the time, or train a model that trains alternate models that are more or less random.
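The 0-to-100 knob described above maps naturally onto sampling temperature, which is how existing language models already trade determinism for variety. A minimal sketch of that idea, assuming a candidate-scores dictionary; the function name and the knob-to-temperature mapping are my own illustration, not any real API:

```python
import math
import random

def sample_with_randomness(scores, randomness=50, seed=None):
    """Pick one candidate from `scores` (candidate -> raw score).

    `randomness` is the 0-100 knob from the post: 0 always returns the
    highest-scoring candidate; larger values flatten the distribution so
    low-scoring candidates are chosen more often.
    """
    rng = random.Random(seed)
    if randomness <= 0:
        # Fully "accurate": deterministic argmax over the scores.
        return max(scores, key=scores.get)
    # Map the 0-100 knob onto a softmax temperature (assumed mapping:
    # 50 -> 1.0, 100 -> 2.0).
    temperature = randomness / 50.0
    items = list(scores.items())
    top = max(score for _, score in items)
    # Subtract the max before exponentiating for numerical stability.
    weights = [math.exp((score - top) / temperature) for _, score in items]
    return rng.choices([cand for cand, _ in items], weights=weights)[0]
```

At `randomness=0` the call is deterministic, so the same prompt always yields the same pick; at higher settings you can call it repeatedly to generate the quick stream of varied ideas the post asks for, then judge them yourself.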