Dear listers,
LinkedIn has stirred up controversy by introducing a feature that allows the platform and its affiliates to use personal data and user-generated content to train generative AI models. While the move reflects the growing trend of data commodification in the age of artificial intelligence, it raises serious concerns about user consent and privacy. The decision itself is not unexpected: as AI technology becomes more sophisticated, the vast amounts of data users generate are increasingly recognized as a valuable asset for enhancing AI capabilities. What has come under fire, however, is the lack of transparency in LinkedIn's implementation.
Many users were automatically opted in to the feature without prior notification, igniting fears of data misuse. The company has only just updated the privacy policy on its website to reflect the change, effective September 18, 2024.
According to LinkedIn's FAQs, opting out means that the platform and its affiliates won't use your personal data or content to train content-generating models going forward. However, it does not undo any training that has already taken place. Nor does opting out prevent LinkedIn from using your personal data to train AI models that do not generate content; users must object to that latter use by filling out a separate opt-out form provided by LinkedIn.
The move appears to contravene several important regulations designed to protect user privacy. Under the EU's General Data Protection Regulation (GDPR), Article 5 requires that personal data be processed lawfully, fairly, and transparently, while Article 6 requires a valid legal basis, such as consent, for any processing. LinkedIn's failure to notify users may violate these principles, particularly the requirement for informed consent. Furthermore, Article 7 mandates that consent must be freely given and can be withdrawn at any time. LinkedIn's FAQ on its AI training claims that it uses “privacy-enhancing technologies to redact or remove personal data” from its training sets. Notably, the platform states that it does not train these models on data from users located in the EU, EEA, or Switzerland, which may offer some assurance to users in those regions.
Similarly, the Kenya Data Protection Act (2019) emphasizes the importance of consent. Section 30 of the Act bars data controllers from processing personal data unless the data subject consents (or another lawful basis applies), and Section 32 places the burden of proving that consent on the controller. By automatically opting users in, LinkedIn could be infringing these legal protections, raising significant questions about its compliance with data protection law.
Such practices raise important questions about user rights, data ownership, and ethics in AI development. While the potential for innovation is significant, the risks of unauthorized data use cannot be overlooked. Tech giants will continue to push the boundaries of data utilization, and scrutiny from governments and regulatory bodies worldwide is likely to grow. Even so, existing laws may not be sufficient for the complexities introduced by AI and big data, and the need for robust legislation that strengthens transparency, consent, and accountability in data usage has never been more pressing. For now, the responsibility rests with users to stay informed and proactive about their data privacy, but we look forward to a time when all tech companies innovate with user protection as the priority.
How to Opt Out of Your Data Being Used to Train Generative AI
- While logged into your LinkedIn account, go to Settings & Privacy.
- Click on Data Privacy.
- Select Data for Generative AI Improvement and turn off the feature.
- To stop your data from being used to train non-content-generating AI models, complete the separate opt-out form provided by LinkedIn.
Best,
Jacinta Wothaya,
Digital Resilience Fellow @KICTANet, @tatua