The circulation of explicit and pornographic pictures of megastar Taylor Swift this week shined a light on artificial intelligence’s ability to create convincingly real, damaging – and fake – images. But the concept is far from new: people have weaponized this type of technology against women and girls for years. And with the rise of and increased access to AI tools, experts say it’s about to get a whole lot worse, for everyone from school-age children to adults.

Already, some high school students across the world, from New Jersey to Spain, have reported that their faces were manipulated by AI and shared online by classmates. Meanwhile, a well-known young female Twitch streamer discovered her likeness was being used in a fake, explicit pornographic video that spread quickly throughout the gaming community.

“It’s not just celebrities,” said Danielle Citron, a professor at the University of Virginia School of Law. “It’s nurses, art and law students, teachers and journalists. We’ve seen stories about how this impacts high school students and people in the military. It affects everybody.”

But while the practice isn’t new, Swift being targeted could bring more attention to the growing issues around AI-generated imagery. Her enormous contingent of loyal “Swifties” expressed their outrage on social media this week, bringing the issue to the forefront. In 2022, a Ticketmaster meltdown ahead of her Eras Tour concerts sparked rage online, leading to several legislative efforts to crack down on consumer-unfriendly ticketing policies.

“This is an interesting moment because Taylor Swift is so beloved,” Citron said. “People may be paying attention more because it’s someone generally admired who has a cultural force.”

‘Nefarious reasons without enough guardrails’

The fake images of Swift spread predominantly on the social media site X, previously known as Twitter. The photos – which show the singer in sexually suggestive and explicit positions – were viewed tens of millions of times before being removed from social platforms. But nothing on the internet is truly gone forever, and they will undoubtedly continue to be shared on other, less regulated channels.

Although stark warnings have circulated about how misleading AI-generated images and videos could be used to derail presidential elections and drive disinformation efforts, there has been less public discourse on how women’s faces have been manipulated, without their consent, into often aggressive pornographic videos and photographs. The growing trend is the AI equivalent of a practice known as “revenge porn.” And it is becoming increasingly hard to determine whether the photos and videos are authentic.

What’s different this time, however, is that Swift’s loyal fan base banded together to use the platforms’ reporting tools to effectively take the posts down. “So many people engaged in that effort, but most victims only have themselves,” Citron said. Although it reportedly took 17 hours for X to take down the photos, many manipulated images remain posted on social media sites.

According to Ben Decker, who runs Memetica, a digital investigations agency, social media companies “don’t really have effective plans in place to necessarily monitor the content.” Like most major social media platforms, X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” But at the same time, X has largely gutted its content moderation team and relies on automated systems and user reporting. (In the EU, X is currently being investigated over its content moderation practices.) The company did not respond to CNN’s request for comment.

Other social media companies have also reduced their content moderation teams. Meta, for example, made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the pivotal 2024 elections in the US and around the world.

Decker said what happened to Swift is a “prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square.” Although this technology has been available for a while, it is getting renewed attention now because of the offending photos of Swift.

When asked about the images on Friday, White House press secretary Karine Jean-Pierre said: “It is alarming. We are alarmed by the reports of the circulation of images that you just laid out – false images, to be more exact, and it is alarming.”