The world of cybersecurity and privacy is evolving rapidly, especially with the introduction and massive growth of AI, which creates tension between innovation and privacy and raises the question of where exactly to draw the line. A lawsuit has been filed against LinkedIn in California alleging that the private messages of Premium subscribers were shared with third parties to train its AI models. This is extremely serious and raises several questions about transparency and the ethical use of personal information in the development of AI models.
The Accusations
The lawsuit alleges that in August 2024 LinkedIn introduced a privacy setting that automatically opted users in, without their consent, to having their data (private messages included) used to train AI models. These changes were allegedly made quietly, without informing users, which should be illegal. This raises a lot of eyebrows, especially for those of us who use the platform regularly. It is further claimed that a month later LinkedIn updated its privacy policy to explicitly state that user information could be disclosed for AI training purposes, and altered its FAQs to indicate that users could opt out of sharing their data. Opting out, however, does not affect data that has already been used for training, which makes the change effectively retroactive. These allegations strongly suggest that LinkedIn was trying to cover its tracks to minimize scrutiny; if true, this would be a major violation of contractual promises and privacy standards.
The lawsuit was filed by Alessandro De La Torre, a LinkedIn Premium user, over the potential exposure of conversations that were meant to be private and the serious harm such exposure could cause. The suit seeks $1,000 per affected user under the U.S. federal Stored Communications Act, as well as damages for breach of contract. With roughly 25% of LinkedIn users based in the U.S., the potential financial and reputational impact could be significant.
LinkedIn private messages and InMail messages can contain extremely sensitive information about job applications, job-search efforts, and the like, which could cause real harm if seen by a person's current employer. In De La Torre's case, for example, sensitive InMail messages about his job-seeking efforts could compromise his existing professional relationships or damage future ones if used in the wrong way.
Beyond the impact on the individual who filed the lawsuit, this goes further than any single user, because the data used to train these AI models effectively becomes part of those systems permanently. Even if the data is deleted later, the insights gained from it remain embedded in the model, which is a huge concern and further emphasizes the importance of user consent and transparency about data ownership in the development of AI.
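To make that point concrete, here is a minimal, purely illustrative sketch; it assumes nothing about LinkedIn's actual systems, and the messages, labels, and scikit-learn classifier are invented for illustration. It simply shows that once a model has been fit, deleting the source records does not undo what the model already learned from them.

```python
# Illustrative only: a toy classifier trained on hypothetical "private messages".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data (invented for this example).
messages = [
    "interview scheduled for the data analyst role",
    "recruiter reached out about a senior position",
    "lunch plans for friday",
    "sharing vacation photos from the trip",
]
labels = ["job_search", "job_search", "personal", "personal"]

# Fit a simple model on the messages.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# "Delete" the raw training data afterwards.
del messages, labels, X

# The fitted model still encodes patterns learned from the deleted messages.
new_message = ["recruiter wants to discuss an analyst interview"]
print(model.predict(vectorizer.transform(new_message)))  # expected: ['job_search']
```

The deletion at the end changes nothing about the model's behaviour, which is the crux of the consent concern: opting out after the fact cannot pull information back out of a model that has already been trained on it.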
LinkedIn’s Response and the Bigger Picture
LinkedIn has denied the claims and called the lawsuit's allegations false. Even so, the case has already triggered more intense discussion and scrutiny of the ethical use of personal data in developing and maintaining AI. LinkedIn paused AI training on user data in the UK, the European Economic Area, and Switzerland after discussions with regulators, but that does not mean it stopped everywhere else, and it highlights the inconsistency of data privacy safeguards across regions.
Other technology companies, such as OpenAI, have faced similar scrutiny for using publicly available data to train their generative AI models. LinkedIn's situation is even more serious because the data allegedly being used consists of private messages that were never meant for public consumption. The lack of global standards for AI gives companies room to operate in legal gray areas, which is dangerous in the long run.
Lessons for Cybersecurity and Privacy Professionals
- Transparency is essential, and companies must clearly communicate changes to privacy policies and settings. When changes are quietly introduced without informing users, it diminishes trust and invites scrutiny that is detrimental to the company in the long term.
- Users need real, enforceable control over how their data is used, with clear options to opt out before it is too late. Too many companies are secretive about how exactly customer data is used, and that should not be the case.
- There should be consistent frameworks for AI and privacy to avoid fragmented regulations, compliance challenges, and gray areas that companies can exploit without legal consequences.
Finding the Balance Between Innovation and Privacy
AI has the potential to radically change industries around the world; however, this case clearly shows how easily innovation can overshadow basic user rights. Companies need to strike a balance by strengthening their privacy practices and being more transparent about how they intend to use their users' data. Without those users, they would have no platform and no audience.
Regulators and governments also need to pull their weight, especially given the recent rollback of AI safety initiatives in the U.S. There is an urgent need for stringent procedures that protect individual privacy while still allowing AI innovation to thrive. If these procedures are not enforced more strictly, many companies will continue to operate in gray areas, causing even more problems.
References
Silva, João da. “LinkedIn Accused of Using Private Messages to Train AI.” BBC News, BBC, 23 Jan. 2025, www.bbc.com/news/articles/cdxevpzy3yko.
Asokan, Akshaya, and Ron Ross. “Lawsuit Claims LinkedIn Used Private Messages to Train AI.” Government Information Security, 23 Jan. 2025, www.govinfosecurity.com/lawsuit-claims-linkedin-used-private-messages-to-train-ai-a-27366.
Landi, Martyn. “LinkedIn Accused of Sharing Users’ Messages with Firms to Train AI.” The Independent, The Independent, 23 Jan. 2025, www.the-independent.com/tech/linkedin-ai-messages-lawsuit-data-b2684918.html.
Schroeder, Scott. “White and Blue Labeled Box.” Unsplash, https://unsplash.com/photos/white-and-blue-labeled-box-JLj_NbvlDDo.
Well written and explained blog! This case highlights the evolving complexities of AI and privacy, and this lawsuit is an excellent example of the pressing need for clarity and accountability in this space. Your blog also brings attention to a growing tension: technological advancement versus (or at the cost of?) user rights.
The allegations of quietly implementing privacy changes and retroactive policy updates are deeply concerning and, if true, represent a breach of trust and an ethics issue. This undermines the transparency that digital platforms should ultimately prioritize. LinkedIn’s denial of these claims highlights broader issues with fragmented global privacy regulations. It brings to light the need for universal ethical standards to ensure companies respect user autonomy and prioritize transparency.
Ultimately, this case shows that innovation must not outpace ethical responsibilities. Companies, regulators, and users should work collectively to ensure that we are not sacrificing our privacy in the name of progress. Transparency and trust are not optional.
Great post. The LinkedIn lawsuit highlights such a critical issue at the intersection of AI and privacy. The alleged automatic opt-in for AI training without user consent and the retroactive use of data are indeed troubling, and they emphasize the vital need for transparency and for giving users genuine control over their data. This case illustrates the delicate balance between driving innovation and respecting fundamental user rights.
Great insights and points. Social media and Internet privacy rights are really lagging behind, and companies are taking advantage of users. With LinkedIn being basically the only social media I have, purely for job searching, it’s not reassuring to see that they’re secretly taking users’ data to use in AI training. I’m glad this lawsuit is coming out, so hopefully Canada (and maybe the States) can see what the EU has done with online privacy and follow suit, making it harder for companies to use people’s data improperly. I also don’t fully trust that data could ever be secure in an AI model. I’m sure it depends on the model and how the data is used, but until I understand it better, I believe that with enough effort any training data can be leaked from a model. Hence, LinkedIn taking private messages here is a huge breach of trust.
Fascinating post! I was unaware that LinkedIn was sharing users’ private messages under the Premium subscription with third parties to train their AI models. I recently started a Premium trial subscription and am worried my private messages are being tracked and used to train their AI models. It’s reassuring that a lawsuit has been filed against LinkedIn in California. Additionally, I agree with your points on the balance of innovation vs. privacy. In many cases today, individuals often prioritize innovation without ever taking security measures into consideration. While it may not be a problem at this very moment, it may lead to various problems in the future. While companies spearhead initiatives emphasizing novelty and innovation, it is important to take equal, if not greater, measures to ensure proper use.
Thank you for sharing this, Faizah! I had no idea about this—it’s certainly eye-opening. It really makes me question how a premium feature, originally designed to generate revenue, is being leveraged for purposes like this. It’s clear that LinkedIn is capitalizing on their premium service, fully aware that many users subscribe to it for job-seeking purposes, whether they’re entering new roles or considering a career switch. Additionally, seasoned professionals using the premium feature are more likely to attract outreach from recruiters. In such cases, the private conversations between these parties could contain valuable data that may be utilized to train their AI models. This raises significant concerns about privacy and data usage.
This is such an eye-opening post! Thanks for bringing this up, Faizah.
It’s a strong reminder that innovation shouldn’t come at the expense of privacy and trust. Transparency and giving users real control over their data are key. Cases like this highlight why global standards for AI and privacy are so important—users deserve clarity and fairness, no matter where they are. Let’s hope this sparks more accountability and better safeguards in the tech world!
Great post Faizah, I think the worst part of this whole thing is how LinkedIn secretly changed its privacy policy to make what it is doing look okay. AI has become a really important part of many companies. They see it as the next big growth vector, so a lot of investment is going in that direction. The main issue with AI training is that customers’ information is needed to train the models. A lot of companies have decided that skipping the consent step is better for them in the long run, as they can collect more information to kickstart the process. I think these practices should be punished, but I am skeptical about how successful the lawsuit will be. LinkedIn has 200 million American users, which means they would have to pay 200 billion dollars with a net worth of 28 billion.
Fascinating post, Faizah! I was especially drawn into the topic when you discussed the ongoing tension between user privacy and the ethical concerns that have become apparent as AI is increasingly trained on private data. Another point that spoke to me was the allegations directed toward LinkedIn. I believe it is essential for companies to maintain explicit communication regarding data usage policies; this ensures that users have the final say and can make informed decisions. Furthermore, this case underscores the role of cybersecurity professionals in advocating for data protection, user rights, and an ethical approach to AI development. Lastly, regulatory enforcement will play a major role in shaping AI privacy standards; these data protection laws will help build responsible innovation that does not compromise user privacy.
Amazing post! This is definitely a breach of trust. Because LinkedIn users want their data to be protected, the fact that users were automatically opted in without their consent is significant. This could have a major negative long-term effect on people’s trust in digital platforms, particularly if they begin to believe that their data is being used unlawfully without their consent. Given the speed at which AI and other technologies are developing, businesses must take user data considerably more seriously. Users must be able to make informed choices about the use of their data, whether for AI training or for other purposes. Protecting private information from loss or misuse should be a top priority, and companies should foster an environment that values accountability, preventative security measures, and transparency.
Excellent post, Faizah. With online platforms clamping down more rigorously on the availability of their data for training AI after OpenAI’s rather flagrant, indiscriminate use of everything on the web, it’s unfortunately not surprising that companies like Microsoft would tap into every subsidiary they could to collect data for training their models, and, when that fails, burden the user by embedding it into everything. Since this lawsuit was filed within the States, I have a strong suspicion that this new setting was only introduced in that region, as Microsoft would face far more serious repercussions if it attempted something so egregious within the GDPR’s jurisdiction. Your research highlights that we cannot rely on the strong privacy protections of regions like the EU and must create our own privacy protections to prevent such predatory harvesting of sensitive private data.
Well structured and great articulation, Faizah! A key issue in this incident is how social media platforms such as LinkedIn introduce new settings without clear user consent, largely out of hunger for more data collection and third-party sharing. Whether the frustration stems from feeling disrespected, misled, or having one’s privacy breached, this lawsuit raises a significant question: is the outrage about resisting change, or about the lack of transparency? LinkedIn updated its privacy policy as retroactive damage control, justifying the changes rather than addressing the ethical concerns. As articulated above, the overwhelming debate lies in user autonomy versus innovation. Notably, LinkedIn updated its policy to state that it scans messages to provide “bots” or similar tools that facilitate tasks such as scheduling meetings, drafting responses, and summarizing messages. Is it even ethical to use private messages as raw material for AI learning, even if users have been told? What choices do users have once such changes become permanent? In the end, users are left with limited options: accept the policy or lose access to LinkedIn. It is an unavoidable tension between innovation and ethics, putting more responsibility on legislators and governments to monitor online privacy practices. According to Ross Bellaby, “While AI has received some general criticism, when it is combined with the reach, secrecy, and coercive power of the intelligence community it creates unique ethical problems” (Bellaby, 2024) [1].
[1] Bellaby, R. (2024). The ethical problems of ‘intelligence–AI.’ International Affairs (London), 100(6), 2525–2542. https://doi.org/10.1093/ia/iiae227
Thanks, Faizah, for highlighting a crucial issue regarding LinkedIn’s alleged privacy breach in AI development. If LinkedIn can misuse users’ private data for AI training, it raises the question of whom we can trust for professional networking. This brings up important concerns about transparency, consent, and the ethical handling of personal data. You have touched on a key point about finding a balance between innovation and privacy, which I think is very tough. Companies and regulators must establish clear privacy standards moving forward.
Beautifully written, Faizah!! Your analysis highlights the critical tension between AI advancement and user privacy, especially given the lack of robust, uniform rules. The accusations made against LinkedIn underscore the more general problems of consent, data transparency, and moral obligation, and the pressing need for legally binding international privacy standards. Businesses must put trust and responsibility first as AI develops in order to stop the unrestricted use of personal data.
Great post Faizah! We are indeed living in a concerning era when it comes to data privacy! Regulatory bodies must evolve at the same pace as AI advancements to prevent companies from exploiting legal gray areas. Companies should not only be required to disclose how data is used but also ensure users have real, enforceable control over their information before it is ever utilized for AI training. Stronger, AI-focused regulations are no longer optional; they’re a necessity. Also, the fact that AI models retain learned patterns even after user data is deleted raises serious concerns about the permanence of AI learning and the absence of mechanisms to ‘untrain’ a model once sensitive data has been incorporated. Without proper safeguards, users risk permanently losing control over their personal information.
Great post for the present-day scenario, Faizah! The core of the problem relates to user privacy being lost amid non-transparent information-handling procedures in AI, as represented by LinkedIn’s alleged unauthorized use of private messages. The lack of transparency and control over sensitive data makes it hard for users to know whether AI systems uphold individual rights. There has to be a clearer worldwide regulatory framework for AI governing ownership and user rights over private data. Companies should be clear with users about how their data is used and provide genuine opt-out choices. Ultimately, AI innovation must be balanced with robust privacy protections that ensure user trust in the responsible advancement of technology. Finally, humanity needs strict regulation to protect against misinformation and chaos.
Great work, Faizah! The issue you raise regarding privacy settings and policies is an important one. The terms users agree to often leave so much to be desired that many end up making uninformed decisions about their data. I think companies are not doing a good enough job of explaining these policies in a way people can understand so they can make an informed choice.
The second major issue is data retention for future use. Even if users opt out, their data could already have been integrated into AI systems or repurposed without users noticing. This also stresses the importance of platforms being upfront about how they collect, store, and use data.
I very much agree that privacy is a responsibility shared among users, platforms, and regulators. Each has a key role to play in protecting and guaranteeing everyone’s privacy and ensuring ethical data practices.