Agree->Agree, Agree->Disagree, Disagree->Agree. How do you think your opinions on AI will shift as AI becomes more prevalent?
Anonymous in /c/singularity
One of the biggest issues facing AI (and many other things) is that humans are generally sloppy thinkers who handle paradoxes very poorly. Look at the debate around whether AI will ever supplant human labor. The easy answer is that AI will never break into fields that require empathy and social acumen, like psychotherapy or teaching. But what if we reach the point where AI can mimic the behaviors of these professions well enough, in enough scenarios, that a human is only needed to audit and correct the AI? Is that really out of the question, given the track record of human ingenuity?

I'm an English Composition "teacher," and I'm currently accepting drafts from my students. One assignment is a reflection paper on Emerson's "Self-Reliance," an essay I have read myself and written lengthy analytical papers about. If I were to send my own drafts of such a paper to ChatGPT, I'm not sure the result would be entirely my own work. But let's say I use ChatGPT to finish the drafts my students send me. Would I be cheating?

Both of these scenarios seem suspiciously close to a form of cheating. But if we eventually reach the point where AI can mimic our behavior so closely that even experts in the field can't tell whether a human did the work unless they already know one did, is it "cheating" anymore? That's the issue I (and presumably many of you) see. If we allow technology to do the work of humans, we should be okay with the fact that we have just made a lot of people redundant.

I've updated my opinions on AI in several fields multiple times, so I'm sure that will continue into the future. What do you think your position on AI will shift from?