
AI · Privacy · Regulation · Social Media
Major social media platforms, including LinkedIn, Meta (Facebook, Instagram), and X (formerly Twitter), have faced significant regulatory backlash in Europe over their practices of using user data to train generative AI models without explicit consent.
LinkedIn began training its AI models on user data before updating its terms of service, initially excluding the EU, EEA, and Switzerland; the UK was later added to the exclusion list after concerns raised by the Information Commissioner's Office (ICO). This mirrors earlier incidents in which Meta and X attempted similar data collection across Europe but were forced to halt their plans after strong opposition from EU privacy institutions and advocacy groups such as noyb.
Regulators, including the Irish Data Protection Commission, intervened, citing GDPR violations. While tech companies argue that such decisions hinder innovation, privacy experts commend Europe's proactive stance, which signals a firm commitment to its robust privacy framework.
The recurring pattern highlights a growing challenge for tech giants: navigating stringent global privacy regulations while developing data-intensive AI, which often leads to fragmented product launches and increased compliance burdens. Users outside these protected regions typically face an "opt-out" default, raising ethical concerns about transparency and user choice.