Generative AI technology such as ChatGPT has enhanced many internet activities and social media applications. But at what cost? Experts fear yet another attack on user privacy.
As AI technology becomes integrated into people's daily lives, the amount of personal information collected and processed by AI-powered applications grows significantly. For instance, AI-powered home appliances learn from user habits and preferences to understand what users want and how their needs evolve over time, allowing them to provide a personalized and responsive experience.
Most of the AI-powered applications we've seen so far are aimed at consumers in the US (or have been launched there first). It's no surprise that many of these apps are reluctant to expand into the more heavily regulated European Union market, as they feed on massive amounts of often very sensitive data. Cybernews decided to look into just how privacy-invasive these AI-powered applications are.
How AI-powered apps impact user privacy
Large language models (LLMs) like ChatGPT from OpenAI, LLaMA from Meta, and PaLM 2 from Google are the most notable examples of advanced natural language processing models. These models need massive amounts of data to train their algorithms: the more data they collect, the more natural and human-like their responses become. To gather this data, they've extensively crawled the web, including content on social media platforms such as Reddit, Facebook, Twitter, and Instagram. Inevitably, part of the collected data contains sensitive personal information.
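To picture what this kind of collection looks like in practice, here is a minimal, hypothetical crawling sketch in Python. The URLs and output file are placeholders, and this is not the actual pipeline used by OpenAI, Meta, or Google, which operate at a vastly larger scale with heavy filtering and deduplication:

```python
# Toy illustration of building a text corpus by crawling public web pages.
# The seed URLs and output path are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SEED_URLS = [
    "https://example.com/forum/thread/123",   # placeholder public page
    "https://example.com/blog/some-post",     # placeholder public page
]

def extract_text(url: str) -> str:
    """Fetch a page and strip it down to plain visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts and styles, keep only human-readable text.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

with open("corpus.txt", "w", encoding="utf-8") as corpus:
    for url in SEED_URLS:
        text = extract_text(url)
        # Note: nothing here distinguishes harmless text from personal or
        # sensitive information; whatever is on the page ends up in the corpus.
        corpus.write(text + "\n")
```

The key point is in the last comment: a crawler of this kind copies whatever it finds, so any personal details posted publicly can end up in the training data.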
On top of that, AI tools record users' prompts (and file uploads, including images and voice commands) to further train their algorithms. Users may unknowingly share sensitive personal and work information when using these services, exposing that information to third parties (the AI provider itself and any other party with access to this data).
What benefits does AI bring to social media applications?
ChatGPT is not only accessible via its website. Its functionality has been integrated into many popular applications such as Slack, Spotify, Snapchat, and Microsoft Teams. Integrating ChatGPT into such applications provides numerous benefits, such as:
Enhancing user engagement: Users can have interactive conversations with these applications, which enhances their experience through dynamic and customized responses.
Providing a rich virtual assistant: ChatGPT can act as a virtual assistant on social media platforms, giving users better customer service and instant answers to their inquiries.
Helping with content moderation: ChatGPT can moderate content efficiently on social media platforms because it can better interpret user uploads and flag potentially inappropriate or harmful posts.
Personalizing content: Integrating AI features into social media applications helps them understand user preferences better. For example, they can display customized advertisements and tailor the social media feed content according to each user's preferences.
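To make the data flow behind these integrations concrete, here is a minimal, hypothetical sketch using OpenAI's Python SDK. The wrapper function and example model name are illustrative only; real integrations in apps like Slack or Snapchat are far more elaborate:

```python
# Minimal sketch of an app forwarding a user's message to a chat-completion API.
# Whatever the user types, including confidential details, is sent to the provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assistant_reply(user_message: str) -> str:
    """Send the user's text to the model and return the generated answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "You are a helpful in-app assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Everything passed as user_message leaves the app and is processed
# (and, depending on the provider's policy, possibly retained) by the AI provider.
print(assistant_reply("Summarize this quarterly report: ..."))
```

The privacy implication follows directly from the structure of the call: the user's input is the payload, so anything pasted into the chat travels to the AI provider's servers.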
Privacy concerns around ChatGPT
A recent lawsuit alleges that OpenAI, the company behind ChatGPT, scraped a massive amount of internet data, including personal, financial, and health information collected from various social media platforms, to train ChatGPT. The privacy invasion doesn't stop there: the complaint also alleges that OpenAI records users' digital fingerprints when they register a ChatGPT account. The digital fingerprint (or footprint) of a computing device contains various technical details, such as:
IP address
Installed fonts
Screen resolution and color depth
Language preferences
Web browser-installed add-ons
Time zone
Operating system type and version
Computing device hardware profile (CPU and GPU types, device memory)
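To illustrate why this list matters, the sketch below (a simplified, hypothetical example, not OpenAI's actual implementation) shows how a handful of such attributes can be combined into a single identifier that is stable enough to recognize the same device across visits:

```python
# Simplified illustration of device fingerprinting: hashing a handful of
# browser/device attributes into one identifier. Real fingerprinting scripts
# collect far more signals, usually client-side in JavaScript.
import hashlib
import json

# Hypothetical attribute values such as a fingerprinting script might collect.
attributes = {
    "ip_address": "203.0.113.7",
    "installed_fonts": ["Arial", "Calibri", "Noto Sans"],
    "screen": "2560x1440x24",
    "languages": ["en-US", "en"],
    "browser_addons": ["uBlock Origin", "Grammarly"],
    "timezone": "Europe/Vilnius",
    "os": "Windows 11 (64-bit)",
    "hardware": {"cpu_cores": 8, "gpu": "NVIDIA RTX 3060", "memory_gb": 16},
}

# Serialize deterministically and hash: the same device yields the same ID
# on every visit, even without cookies or a login.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode("utf-8")
).hexdigest()

print(f"Device fingerprint: {fingerprint}")
```

Because each attribute on its own is unremarkable but the combination is close to unique, a fingerprint like this can identify a device even when cookies are blocked or cleared.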
In addition to capturing users' digital footprints, which enables a user's device to be identified among millions of connected devices, ChatGPT is also known to record users' chat logs, even after a user disables chat history (see Figure 1).
Figure 1
The content provided to ChatGPT, such as text input (prompts), file uploads, and feedback, is also recorded, according to ChatGPT's privacy policy. Remember, users may feed personal or sensitive work information into their prompts, and that information will also be recorded and may be inspected by OpenAI employees.
Bard, Google's rival to ChatGPT, also collects information from its users. For instance, the Google Bard Privacy Notice asks users not to include "confidential or sensitive information in your Bard conversations" (see Figure 2).
Figure 2
Just as ChatGPT records data from users who access the platform directly, users of applications that integrate ChatGPT are also exposed to privacy risks. Slack, the popular messaging application, is working to integrate generative AI technology into its product. The feature hasn't been launched officially and is still in beta testing, but it's expected to bring numerous benefits to Slack users, according to Salesforce CEO Marc Benioff.
Benioff said that all data inside Slack will become intelligent: AI will help Slack users complete repetitive tasks faster and more accurately, summarize reports, draft replies, and find information from within Slack without having to run a web search.
However, it's still unclear how Slack will control ChatGPT's access to users' private data. Another privacy concern is that Slack users might share private information when seeking help from ChatGPT, and OpenAI's policy indicates that users' conversations may be used to improve its models.
Other social media and productivity applications, such as Microsoft Office, are expected to leverage AI in the near future, and we can expect to face similar privacy invasions in these apps, too.
Privacy implications of AI-powered applications
The integration of AI capabilities into social media and other productivity applications will soon be a reality, and businesses everywhere are looking for ways to ride the AI wave to boost revenue and customer retention. For users of AI-powered applications, the key privacy implications include:
Data breaches: AI-powered applications collect, store, and process large volumes of sensitive information about their users. If intruders successfully breach these applications, that information is exposed to unauthorized parties.
Leaking data: Integrating AI into social media applications gives it access to all data exchanged between the user and the service. For instance, suppose a user asks ChatGPT from within Slack to summarize a business report containing confidential data, or uses it to review proprietary source code. That confidential content then leaves the organization and is processed, and potentially retained, by the AI provider.
Impersonating users: As AI models are trained further through interactions with large numbers of users, their ability to generate human-like content, including images and text, keeps improving. Threat actors can abuse this ability to impersonate other users, for example by creating fake profiles or manipulating personal images of their targets.
Tracking user activity: After you've used AI-powered applications for a while, they begin to understand your habits, preferences, and even your writing style. This knowledge, combined with the traditional information that's collected (IP address, device fingerprint), helps these applications build a unique profile for each user, making it possible to track individuals and their activity across the web.
Discrimination issues: Social media services already record significant information about their visitors. For example, the Facebook "Like" button contains code that allows Facebook to track a user across different websites. Combining AI with social media applications gives social media and AI providers even more insight into user activities and preferences. This information can be sold to data brokers, who can in turn sell it to other organizations or government agencies, and the collected data can be used to make decisions about a user's access to services, opportunities, or employment.
As a user of AI-powered applications, it's critical to understand these privacy implications and avoid sharing sensitive private or work information with them.