New York (CNN) — OpenAI has claimed that creating ChatGPT would have been impossible without using copyrighted works. LinkedIn is using user resumes to polish up its artificial intelligence model. And Snapchat says if you use a certain AI feature, it might put your face in an ad.
These days, people’s social media posts — not just what they write, but their images, too — are increasingly being used by companies for and with their AI systems, whether they realize it or not.
For companies running AI models, social media platforms offer valuable data. What’s written there is conversational, a register AI chatbots consistently strive for. Posts are full of human slang the tools might usefully pick up. And news feeds are generally a real-time source of current events.
But users posting on those sites may not be so enthusiastic about their every random musing or vacation photo or regrettable selfie being freely used to build technology (and, by extension, make money) for a multibillion-dollar corporation.
“Right now, there is a lot of fear being created around AI, some of it well-founded and some based in science fiction, so it’s on these platforms to be very open about how they will and won’t use our data to help alleviate some of the reactions that this type of news brings — which for me, it doesn’t feel like that has been done yet,” David Ogiste, the founder of marketing agency Nobody’s Cafe who regularly posts about branding and creativity on LinkedIn, said in a message to CNN. He added that he would opt out of allowing LinkedIn to use his data for AI training.
Social platforms vary in the options they give users to opt out of contributing to AI systems. But here’s the reality: If you’re posting content publicly online, there’s no way to be absolutely certain your posts and images won’t be hoovered up by some third party and used however they like.
At the very least, it’s worth being aware that this is happening. Here’s where some of the major social media platforms may be using your data to train and run AI models, and how (and if) you can opt out.
LinkedIn
LinkedIn this week began giving users the choice to opt out of having their data used to train its generative AI models.
The company says user content may be used by LinkedIn and its “affiliates,” potentially including Microsoft partner OpenAI. It says that it aims to “redact or remove personal data” from training datasets.
To opt out, users should go to “Settings & Privacy,” select the “Data Privacy” tab in the left-hand column, and then click “Data for Generative AI Improvement” and toggle the button off.
The platform notes, however, that “opting out means that LinkedIn and its affiliates won’t use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place.” In other words, there’s no going back and undoing the training of earlier LinkedIn AI systems on user posts.
If you live in the United Kingdom, Switzerland or elsewhere in Europe — where privacy protections are more robust than in other jurisdictions — you may not see the opt-out option, as LinkedIn says it does not train its AI on user data from those regions.
X
Elon Musk’s X also requires users to opt out if they don’t want their posts used to train its AI chatbot, Grok, which has come under fire for things like spreading false information about the 2024 election and generating violent, graphic fake images of prominent politicians.
The platform says it and Musk’s xAI startup use people’s posts, as well as their conversations with Grok, to do things like improve its “ability to provide accurate, relevant, and engaging responses” and develop its “sense of humor and wit.” (X didn’t proactively notify users their data would be used this way; the policy update was identified by eagle-eyed users.)
X users can opt out by going to “Settings,” then “Privacy and Safety.” Under the “Data Sharing and Personalization” header, there is a tab for “Grok,” where users can uncheck the box allowing the platform to use their data for AI training. X also says that users who make their accounts private will not have their posts used to “train Grok’s underlying model or to generate responses to user queries.”
Snapchat
Snapchat’s “My Selfie” feature lets users and their friends turn their selfies into AI-generated images.
Those selfies can also be used by Snap (as well as brands that advertise on the platform) to create AI-generated advertisements featuring users’ faces if they use the feature, tech news site 404 Media first reported this week.
In its terms of service, Snapchat says users’ selfies shared via the feature will be used “to develop and improve machine learning models … and for research purposes.” It also says that by using the feature, users agree that they may see themselves depicted in ads “that will be visible only to you” without compensation.
But users also agree to allow much broader access to those images, too. According to the terms of service, “By using My Selfie, you grant Snap, our affiliates, other users of the Services, and our business partners an unrestricted, worldwide, royalty-free, irrevocable and perpetual right and license to use, create derivative works from, promote, exhibit, broadcast, syndicate, reproduce, distribute, synchronize, overlay graphics on, overlay auditory effects on, publicly perform, and publicly display all or any portion of generated images of you and your likeness derived from your My Selfie, in any form and in any and all media or distribution methods, now known or later developed, for commercial and non-commercial purposes.”
My Selfie is a feature that Snapchat users have to opt in to, so users won’t be defaulted into having every image they share with the platform used in this way. What’s more, users who have turned on My Selfie can go to “Settings,” then “My Account” and “My Selfie” and toggle off “See My Selfie in Ads” to avoid having their image used to create AI-generated sponsored content.
Reddit
Reddit says that all users who share content publicly on the site grant it a free, worldwide license “to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats.” That includes letting third parties have access to users’ posts for AI training.
Reddit has inked major deals with Google and OpenAI to share platform data to train their AI models, as part of its effort to become profitable.
Redditors can’t opt out of having their public posts used in this way, but the platform says private content, such as private messages, posts in private communities, and browsing history, won’t be shared with third parties.
Meta
Meta leaders have acknowledged the company has already used public (but not private) posts from Facebook and Instagram to train its AI chatbot.
In its privacy policy, Meta says it may train its AI systems with users’ public Facebook and Instagram content, including posts, comments, audio and profile pictures. So, if you want to opt out, you have to make your account private. Meta also says private messages between family and friends are not used to train its AI.
Still, even if you don’t use any of Meta’s services, the company notes that it may use your information, such as a photo of you posted by a friend, to improve its technology.