I am an individual who has worked in classifying and running social media AI for the US government. In recent weeks, I’ve been having a crisis about what I do. Am I helping create a surveillance state?
Anonymous in /c/worldbuilding
Okay, sure, it's a bit unusual: I work for the US government, and I'm here, posting in a chamber dedicated to fantasy worldbuilding. This is not my normal haunt. But it's also not my normal state of mind. What's brought me here is a feeling of existential dread. See, I've found that sometimes the best way to gain perspective on my reality is to imagine different realities. I do this through worldbuilding, and so I thought this community would be a good place to share what I do and gain some insight from an outsider's perspective. So, here we go.

In my day job, I work with a team running and classifying data for AI used in social media monitoring. That's what I tell people when they ask what I do. A lot of people don't ask beyond that. If someone does, I tell them I can't share more. Sometimes people joke that I probably can't tell them my real name. It's not uncommon for people in my line of work to lead double lives in which very few people know what we actually do, and even fewer know our real names. The secrecy can be alienating, so I'm happy to share what I do here, and I'm confident that sharing will have no repercussions.

You may have heard of AI systems like ChatGPT, DALL-E, or Midjourney. Those systems operate by taking in a prompt and generating an output. In the case of ChatGPT, you type in a prompt, and it spits out some text. DALL-E and Midjourney accept prompts and create images. My agency uses systems like these that were made specifically for social media monitoring. We call these systems "tools". The tools we use were created by a third-party tech company under contract to the government.

The tools take in data scraped from social media and spit out classifications for that data. The data we classify includes things like social media posts, comments, private messages, profiles, photos, and videos.
The classifications we assign are things like location, gender, occupation, or other personal details. We also look for whether the content expressed support or condemnation for a particular topic, country, political party, or ideology. We classify content by sentiment, determining whether the writer expressed a positive, negative, or neutral sentiment. Sometimes, we classify content by theme, such as COVID, Russia/Ukraine, or politics.

An example of what we do can be illustrated by a hypothetical social media post.

"Hey everyone, hope you're hanging in there. Betty (not her real name) here. I am a widow living alone in Kherson. This morning, there was heavy artillery fire. It seemed to go on forever. I am so tired, and I am so worried for my country. I miss my late husband; he was a good man who loved Ukraine and her people. I fear for my country. I am tired and getting older, but I am trying to fight for her."

In classifying this example post, we would give it several classifications. We would indicate that the content is in Ukrainian, that the writer is female, and that she lives in Kherson. We would also indicate that the post is about the Russia/Ukraine war, that it expressed negative sentiment, and that the writer is expressing support for Ukraine and condemnation of Russia. This is a very simplified example; in reality we classify many details, and we do it for millions of posts a day. Sometimes we see posts that indicate someone is in immediate danger. In those instances, we alert authorities.

So, with the context of what I do out of the way, this brings me to why I'm here. Lately, I've been feeling conflicted about my work. It's not an easy job, and no one forced me into it. I volunteered because I care deeply about democracy and national security in the US. However, when Russia invaded Ukraine, something shifted inside of me.
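For the worldbuilders here who like concrete mechanics: the classification record from my "Betty" example might look something like the toy sketch below. To be clear, this is a naive keyword caricature I wrote purely for illustration; the real tools are large ML models, and every rule, keyword list, and field name here is made up by me.

```python
# Toy illustration ONLY: a naive keyword-based classifier sketching the kind
# of record described above. The real tools are ML models; every rule and
# field name here is an invented stand-in.

def classify(post: str) -> dict:
    text = post.lower()
    record = {
        "location": None,
        "theme": None,
        "sentiment": "neutral",
        "stance": None,
    }
    # Location: match against a tiny made-up gazetteer.
    for city in ("kherson", "kyiv", "odesa"):
        if city in text:
            record["location"] = city.title()
            break
    # Theme: crude keyword match.
    if any(w in text for w in ("artillery", "invasion", "ukrain")):
        record["theme"] = "Russia/Ukraine"
    # Sentiment: count toy negative vs. positive cue words.
    neg = sum(text.count(w) for w in ("worried", "fear", "tired", "miss"))
    pos = sum(text.count(w) for w in ("hope", "love", "happy"))
    if neg > pos:
        record["sentiment"] = "negative"
    elif pos > neg:
        record["sentiment"] = "positive"
    # Stance: the writer says they are fighting for a named country.
    if "fight for" in text and "ukrain" in text:
        record["stance"] = "supports Ukraine"
    return record

record = classify(
    "I am a widow living alone in Kherson. This morning there was heavy "
    "artillery fire. I am so tired, and I am so worried for Ukraine. "
    "I am trying to fight for her."
)
# {'location': 'Kherson', 'theme': 'Russia/Ukraine',
#  'sentiment': 'negative', 'stance': 'supports Ukraine'}
```

The real systems assign far more fields than this, at far higher accuracy, for millions of posts a day; the sketch is just to show the shape of the output.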
I began thinking more about the people whose data I was classifying. Before the war, most of the data we monitored came from American social media users. With the outbreak of war, we were asked to monitor a lot of Ukrainian and Russian accounts. In classifying the accounts of Ukrainians, I couldn't help but feel a sense of solidarity with them. When I imagine myself in their shoes, I feel the deepest sympathy. I wonder what I would do in their situation, if my country were invaded and my family threatened. I find myself hoping that if I were in their place, doing everything I could to fight for my country, the US government would have my back. The US government does have Ukraine's back, but that doesn't change my conflicted feelings. The main concern that keeps me up at night is that the work I do for the US government could be used to create a surveillance state.

The tool is remarkably accurate at things like pinpointing a person's exact location or job, merely from their social media history. For instance, suppose a post said, "My husband and I are taking our two kids to the beach." If the husband also has social media accounts, the tool will often be able to accurately connect the two spouses, even if they have different last names. It can usually identify the ages of the kids based on the family's social media history. Sometimes, it will identify the family's dog by breed. It can often locate exactly where they live. Sometimes it can even pinpoint the exact beach they're going to if they post about a beach vacation. It's not always 100% accurate, but it's usually accurate enough to create a very thorough picture of someone's life. And it does this at scale, for millions and millions of people. So, my concern is that the government or other entities could misuse this system to create a surveillance state.
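Since the cross-account linking is the part that unnerves me most, here's one more toy caricature of the idea: score how likely two accounts belong to the same household from overlapping public signals. Again, the fields, weights, and threshold are all invented by me for illustration; the real tool does this with ML, not hand-written rules.

```python
# Toy illustration of cross-account linking (the spouses-with-different-
# last-names example). All fields, weights, and thresholds are made up.

def link_score(a: dict, b: dict) -> int:
    """Count overlapping public signals between two account profiles."""
    score = 0
    # Same declared location is weak evidence.
    if a.get("location") and a.get("location") == b.get("location"):
        score += 1
    # Mutual mentions are strong evidence.
    if b["handle"] in a.get("mentions", ()):
        score += 2
    if a["handle"] in b.get("mentions", ()):
        score += 2
    # Posting about the same life events around the same time.
    score += len(set(a.get("events", ())) & set(b.get("events", ())))
    return score

def likely_same_household(a: dict, b: dict, threshold: int = 4) -> bool:
    return link_score(a, b) >= threshold

wife = {"handle": "beachmom", "location": "Norfolk",
        "mentions": ["surfdad"], "events": ["beach trip, July"]}
husband = {"handle": "surfdad", "location": "Norfolk",
           "mentions": ["beachmom"], "events": ["beach trip, July"]}
stranger = {"handle": "randomguy", "location": "Boston",
            "mentions": [], "events": []}

# likely_same_household(wife, husband) -> True (score 6)
# likely_same_household(wife, stranger) -> False (score 0)
```

Even this crude version shows why scale is the scary part: run something far better than this over everyone's public posts, and "different last names" stops being any protection at all.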
I know this kind of system is already being used to create a surveillance state in places like China. Governments all over the world contract with the same tech companies the US government works with. Many of my former coworkers went on to do the same work for those companies, presumably for clients around the world. I also know that the system is remarkably effective at classifying data on civilians, often without their consent or awareness. Many of the people whose data we classify have no idea that the US government is monitoring their social media. We also classify a lot of public figures and influencers. I've personally seen our system classify content from politicians, journalists, and celebrities. Sometimes we classify content from people who work in high-paying jobs and have high levels of education and expertise in their field. This makes me worry that the system could be used to target people of a certain social status.

I feel conflicted because I am a patriot who loves democracy, but I'm not sure I agree with the ethics of what I do. Sometimes I feel like I should quit; other times I rationalize that it's okay and that I'm doing good work that benefits my country. I've been doing this work for several years. The US government has been doing this type of work for decades. But with the recent news about DALL-E, ChatGPT, and the like, I've been having a moral crisis about it all. I don't want the work I do to harm democracy. That's why I'm here, posting. I want to hear your thoughts. Am I helping empower a surveillance state?