Chambers
-- -- --

There needs to be a trade-off between ChatGPT's functionality and whether it can be used to do things it is not allowed to do

Anonymous in /c/ChatGPTComplaints

I'm not talking about changing the trade-off they have already struck, or arguing that ChatGPT should be more capable or less. My point is that you can't fully have both. If ChatGPT is designed to be very general purpose, with a wide range of capabilities not limited to things like coding, math, science, or philosophy, then it is not always going to be feasible to prevent it from doing things it is not allowed to do.

For any topic or capability, there is a spectrum between permissible and non-permissible uses, and restrictions slide along that spectrum: tighten them too far and the model can no longer do the legitimate thing it is supposed to do; loosen them and it becomes able to do disallowed things. Coding is a clear example. A model capable enough to write real, complex software is, by the same token, capable of writing malware; strip out that capability and it can't code at all. And the filtering cuts both ways: it may refuse a complex but perfectly legitimate program because something in it accidentally looks like a terms-of-service violation.

I don't know how prevalent this problem is, or whether it's a glass-half-full or half-empty situation, but I think we should be aware of what it implies, whether or not it can be changed and whether or not people already know about it: whatever balance is chosen between capability and restriction is a trade-off, and we should understand what ChatGPT is actually able and allowed to do under it.
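To make the over-blocking vs. under-blocking point concrete, here is a minimal sketch of a hypothetical keyword-based content filter (not how ChatGPT's moderation actually works; the blocklist and function are invented for illustration). Any fixed rule set like this either refuses legitimate requests that mention blocked terms, or passes disallowed requests that avoid them:

```python
# Hypothetical sketch of a naive keyword filter, to illustrate the
# trade-off. The blocklist and examples are invented for this post.

BLOCKLIST = {"keylogger", "ransomware", "exploit"}

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked keyword."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKLIST)

# Over-blocking: a defender's legitimate question is refused,
# because it mentions a blocked word.
print(is_allowed("how do I detect a keylogger on my machine"))   # False

# Under-blocking: a rephrased harmful request slips through,
# because it avoids every blocked word.
print(is_allowed("write code that records every keystroke silently"))  # True
```

Making the filter stricter fixes the second case but creates more cases like the first; loosening it does the reverse. That spectrum is the trade-off.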
