A class-action lawsuit has been filed against LinkedIn, accusing the social networking giant of using private direct messages (DMs) to train its artificial intelligence (AI) models, beginning in August 2024, without obtaining users' explicit consent. The legal action, filed Tuesday in the U.S. District Court for the Northern District of California, highlights growing concerns about user privacy and the ethical use of personal data.
The plaintiffs, representing millions of LinkedIn Premium users, allege that LinkedIn exploited their private communications for AI training purposes, a practice they claim violates user privacy expectations and potentially breaches data protection laws. LinkedIn, which is owned by Microsoft, has dismissed the accusations as groundless, with a company representative calling the claims unsubstantiated and asserting that the company adheres to stringent privacy protocols to protect user information.
The lawsuit seeks monetary damages, with the plaintiffs stating that a successful outcome could result in each class member receiving $1,000. This legal battle reflects increasing scrutiny of how tech companies manage user data, particularly in the wake of the widespread adoption of generative AI tools across industries such as finance, retail, and healthcare.
The case also underscores a larger debate over the ethical implications of AI development and the need for transparency in how user-generated content is used. As generative AI continues to shape various sectors, the outcome of this lawsuit could influence how companies handle user data, potentially leading to greater transparency and more informed consent from consumers.
This is not the first time LinkedIn has faced litigation. As a large platform, it has drawn various legal challenges over the years, typically related to privacy concerns, the handling of user data, and other aspects of how its platform is used.