LinkedIn’s New Generative AI Feature: What You Need to Know
Dear listers,

LinkedIn has stirred up controversy by introducing a feature that allows the platform and its affiliates to use personal data and user-generated content to train generative AI models <https://www.linkedin.com/help/linkedin/answer/a6278444>. While the move reflects the growing commodification of data in the age of artificial intelligence, it raises serious concerns about user consent and privacy. The feature lets LinkedIn leverage the vast amount of data generated by its users to enhance its AI capabilities. The decision is not unexpected; as AI technology becomes more sophisticated, data is increasingly treated as a valuable asset. However, LinkedIn's implementation has come under fire for its lack of transparency. *Many users were automatically opted in to this feature without prior notification*, igniting fears of data misuse. The company has just updated the privacy policy on its website <https://www.linkedin.com/legal/privacy-policy#use> to reflect the change, effective September 18, 2024.

According to LinkedIn's FAQs <https://www.linkedin.com/help/linkedin/answer/a5538339>, opting out means that the platform and its affiliates will not use your personal data or content to train content-generating models going forward. However, this does not affect any training that has already taken place. Nor does opting out stop LinkedIn from using your personal data to train AI models that do not generate content; users must object to that use separately by filling out an opt-out form provided by LinkedIn <https://www.linkedin.com/help/linkedin/ask/TS-DPRO>.

The move appears to contravene several important regulations designed to protect user privacy. Under the EU's General Data Protection Regulation (GDPR), Article 5 requires that personal data be processed lawfully, fairly, and transparently, and Article 6 requires a valid legal basis, such as consent, for any processing. LinkedIn's failure to notify users may fall foul of these principles, particularly the requirement for informed consent. Furthermore, Article 7 requires that consent be freely given and withdrawable at any time. LinkedIn's FAQ on its AI training claims that it uses “privacy-enhancing technologies to redact or remove personal data” from its training sets, and the platform states that it does not train these models on data from members located in the EU, EEA, or Switzerland, which may offer some assurance to users in those regions. Similarly, the Kenya Data Protection Act (2019) emphasizes consent: it permits the processing of personal data only where the data subject has consented or another lawful ground applies, and it gives data subjects the right to be informed of the use to which their data will be put. By automatically opting users in, LinkedIn could be infringing these protections, raising significant questions about its compliance with data protection law.
LinkedIn's move is not an isolated case; it is part of a broader trend in which tech giants exploit user data to fuel AI advances. Only recently, Meta reportedly acknowledged that it has used the public posts and photos of adult Facebook and Instagram users, going back as far as 2007, to train its AI models <https://www.theverge.com/2024/9/12/24242789/meta-training-ai-models-facebook-instagram-photo-post-data>.

Such practices raise important questions about user rights, data ownership, and ethical considerations in AI development. While the potential for innovation is significant, the risks of unauthorized data use cannot be overlooked. Tech giants will keep pushing the boundaries of data utilization, and we are likely to see increasing scrutiny from governments and regulatory bodies worldwide. Existing laws, however, may not be sufficient to address the complexities introduced by AI and big data, and the need for robust legislation that strengthens transparency, consent, and accountability in data use has never been more pressing. For now, the responsibility for staying informed and proactive about data privacy rests with users, but we look forward to a time when all tech companies innovate with user protection as the priority.

*How to Opt Out of Your Account Being Used to Train Generative AI*

1. While logged into your LinkedIn account, go to *Settings & Privacy*.
2. Click on *Data Privacy*.
3. Select *Data for Generative AI Improvement* and turn the feature off.
4. To stop your data from being used for non-content-generating AI models, complete the separate form provided by LinkedIn <https://www.linkedin.com/help/linkedin/ask/TS-DPRO>.

Best,

*Jacinta Wothaya,*
*Digital Resilience Fellow @KICTANet* <https://www.kictanet.or.ke/>, @*tatua* <https://tatua.digital/>
LinkedIn: *Jacinta Wothaya* <http://www.linkedin.com/in/jacinta-wothaya-510a8b153>
Jacinta, this is useful, asante.
--
*Cephas O.M*
The double-edged sword here is this: most of the data used to train AI models is Western data, which creates the biases we see in AI. So, do we continue to be excluded from AI models?

Regards,

*Ali Hussein*
Fintech | Digital Transformation
Tel: +254 713 601113
Twitter: @AliHKassim
LinkedIn: Ali's Profile <http://ke.linkedin.com/in/alihkassim>

Any information of a personal nature expressed in this email is purely mine and does not necessarily reflect the official positions of the organizations that I work with.
Another reality is that AI platforms are a multi-billion-dollar industry. Do they continue to profit from our data for free? Is there a win-win arrangement where it is not just extraction but shared prosperity?
Best Regards,
______________________
Mwendwa Kivuva, Nairobi, Kenya
https://www.linkedin.com/in/mwendwa-kivuva
Thanks, Jacinta, for your helpful advice.
--
*Elizabeth W Kiarie*
Strategic Management and Communication Specialist
Tel: +254722737330
Email: lizkiarie@gmail.com
LinkedIn: www.linkedin.com/in/elizabethkiarie
Participants (5):
- Ali Hussein
- Cephas Joseph
- Jacinta Wothaya
- Liz
- Mwendwa Kivuva