I'm a software engineer / AI researcher at the Perelman School of Medicine at the University of Pennsylvania.
Anonymous in /c/AntiAI
I was born eight years after the first commercial release of IBM Watson. Today, I'm a software engineer / AI researcher at a large academic medical center, and I've seen the effect of AI on the field of medicine firsthand.

I'm terrified.

# What's happening
## AI in medical research
Over the last two years, I've developed AI models that analyze medical imaging data with extremely high accuracy. That alone may not sound new; IBM was building AI for medical imaging back in the 2010s. What has changed dramatically in the last two years is the accuracy, and the ability to target and detect specific diseases.

The researchers I work with are concerned only with dermatology, and just two years ago we were already able to quickly and accurately screen patients for severe skin conditions, including skin cancer, psoriasis, eczema, and more.

This sounds incredible, and it is. We can save thousands of lives. It's also where the problem starts, as I'll explain below.

## Is it possible to abuse these models?
Yes, very easily. And it's happening. As I said, I'm a young adult leading the development of this technology, and my managers and the higher-ups insist on making these models as accurate as possible.

Here's the problem: the people leading our research are not the ones building the models, and they don't understand how they work. I've argued for months about the potential consequences of these models, and about the fact that we can't possibly control who uses them or how.

For the last three months, I've been developing models to detect and classify skin conditions with the highest accuracy possible, in *any* type of photo. In practice, this means the model can "zoom in" on any photo it's given and detect skin conditions in *any region, at any size*, depending on how the photo was taken. A rough sketch of what that looks like is below.
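To make "zoom in" concrete, here is a minimal sketch of the general technique (multi-scale sliding-window inference), *not* our actual code. `classify_patch` is a hypothetical placeholder for any trained skin-condition classifier, and the patch size, stride, and scales are arbitrary choices for illustration. The point is how little machinery it takes to turn a per-patch model into a tool that scans every region of every photo it's handed.

```python
# Minimal sketch of multi-scale "zoom in" scanning. NOT production code:
# classify_patch is a hypothetical stand-in for a real trained classifier.
import numpy as np

PATCH = 224                 # classifier input size (a common CNN choice)
STRIDE = 112                # 50% overlap between neighboring windows
SCALES = [1.0, 0.5, 0.25]   # rescale factors; smaller = more "zoomed out"

def classify_patch(patch: np.ndarray) -> float:
    """Hypothetical model call. A real system would run a trained network
    here and return, e.g., the probability of a skin condition."""
    return float(patch.mean()) / 255.0  # placeholder score, not a real model

def scan_image(image: np.ndarray, threshold: float = 0.9) -> list:
    """Slide a fixed-size window over the image at several scales and
    report every region the classifier flags."""
    hits = []
    for scale in SCALES:
        h = int(image.shape[0] * scale)
        w = int(image.shape[1] * scale)
        if h < PATCH or w < PATCH:
            continue
        # Crude nearest-neighbor resize, to keep the sketch dependency-free.
        rows = (np.arange(h) / scale).astype(int)
        cols = (np.arange(w) / scale).astype(int)
        resized = image[rows][:, cols]
        for y in range(0, h - PATCH + 1, STRIDE):
            for x in range(0, w - PATCH + 1, STRIDE):
                score = classify_patch(resized[y:y + PATCH, x:x + PATCH])
                if score >= threshold:
                    # Map the window back to original-image coordinates.
                    hits.append((int(x / scale), int(y / scale), score))
    return hits

# Stand-in for any photo or CCTV frame (grayscale, 1080p).
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
print(scan_image(frame))
```

Swap the placeholder for a real model and point the loop at a stream of scraped or captured images, and you have exactly the pipeline I describe next.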
# This is a problem
## Perpetual surveillance and "black bagging"
When you combine highly accurate AI with deep, large-scale surveillance and image collection, you have a problem. These models will detect and classify skin conditions on *anyone* in a photo, limited only by the zoom level, pixel count, and quality of the image. That means they can be hooked up to the internet and to mobile / CCTV cameras and run against *anyone* who appears in a photo that gets posted or saved.

This will be used to profile and identify people who have diseases, particularly diseases with treatments that can be sold to them. We already see aggressive pharmaceutical marketing in the US, and AI will be used to target those ads and treatment plans at people flagged as having a condition. I'd call this "black bagging": covertly gathering private (here, medical) information about people and using it against them.

## Abuse of medical data
There is no way to control how people will use these models. They'll be widely available and accessible to anyone, and they'll be used to identify diseases in anyone and everyone. That means your boss, co-workers, managers, teachers, school counselors, and doctors can, and will, use these models to target you.

## We can't black out or blur faces in photos anymore
By building "zoom in" capability into these models, we've made it impossible (or extremely difficult) to stay anonymous by blurring or blacking out faces in photos. Imagine you're walking down a busy street and you pass a camera, or a stranger with a smartphone. You know that AI can be used to "zoom in" on your face, skin, and body; detect diseases; and classify, profile, identify, and shame you.

# Conclusion
I've been arguing for months that we need to slow down or limit the development of these models, and to build in real limits on who can use them and how. I can't stop this, but I can warn you.

This is a problem.