By David Carrero
Originally written in Spanish and published on carrero.es, this version has been adapted for an international audience.
Despite the European Union’s reputation for having one of the world’s most robust privacy frameworks, European citizens remain largely defenseless against the systemic exploitation of their data by Big Tech. The latest move by Meta—the parent company of Facebook, Instagram, and WhatsApp—to use public user content for training its AI models without clear, active consent is only the most recent example of a broader, deeply rooted strategy: legally circumventing the GDPR while claiming full compliance.
GDPR: A Shield That Arrives Too Late
The General Data Protection Regulation (GDPR), which took effect in May 2018, was meant to be a turning point in digital rights. It introduced groundbreaking concepts like explicit consent, the right to be forgotten, and data minimization. Yet, in practice, many users across Europe have seen little improvement in how their data is treated.
Why? Because tech giants have become experts at exploiting the gaps. They interpret the law narrowly, deploy manipulative interface design (known as dark patterns), and rely on the inertia of regulators to continue collecting and using data with minimal disruption to their operations.
Meta and the Manufactured Consent
Let’s take Meta’s recent approach to AI training as a case study.
Rather than asking users a clear, binary question—“Do you want us to use your public data to train our AI? Yes or No”—the company sent a vaguely worded email titled “How improvements to AI at Meta will affect the use of your information.” The notice states that Meta will use public posts and comments from users over 18 for AI model development, including Meta AI and its open-source models.
To opt out, users must follow an obscure and tedious process involving a hidden form, multiple steps, and email confirmation. Most users will never complete—or even find—this process.
This is not informed consent. It is a strategy of attrition, designed to make refusal technically possible but practically impossible. Meanwhile, Meta justifies it all under the nebulous legal basis of legitimate interest, which Article 6(1)(f) of the GDPR permits only where the company’s interests are not overridden by the data subject’s fundamental rights and freedoms. But does harvesting vast troves of public user data without consent respect those rights and freedoms?
A Widespread Pattern of Exploitation
Meta is not alone. Google, Amazon, Microsoft, TikTok, and X (formerly Twitter) all engage in similar tactics:
- Default settings that are invasive by design.
- Opt-outs hidden in labyrinthine menus.
- Privacy notices full of legalese.
- Consent requests engineered to induce fatigue or apathy.
Even when they are fined, enforcement often comes years too late. By the time regulators respond, the companies have already profited handsomely from the data—and the damage is done.
Time Favors the Offender
The digital world operates in real time. Startups iterate weekly, algorithms evolve in days, and user data flows constantly. Meanwhile, privacy investigations can take two to five years to resolve.
This time lag benefits Big Tech. Even if they eventually pay a fine, it is usually a drop in the ocean compared to the revenue generated by questionable practices. Risk becomes just another cost of doing business.
It’s as if a thief could rob a bank, invest the money for five years, and then only repay the original amount—if caught. This is the reality of data privacy enforcement in Europe.
Consent in Name Only
The GDPR is built on the principle of freely given, informed, and specific consent. But what we often get instead are:
- Consent banners designed to confuse.
- Interfaces nudging users to accept everything by default.
- Platforms that obscure settings behind multiple clicks.
This isn’t real consent. It’s manufactured compliance—consent theatre—designed to tick legal boxes while ignoring the ethical implications.
What Can Be Done?
As users, we have limited options:
- We can file complaints with national regulators—though the process is slow and often ineffective.
- We can try to adjust our privacy settings—assuming we can find them.
- Or we can stop using these platforms—an increasingly unrealistic expectation in today’s connected world.
The solution lies in systemic reform, not individual action. We need:
- A revised GDPR that closes the “legitimate interest” loophole for AI training on user data.
- Faster, more aggressive enforcement by data protection authorities.
- Independent oversight of AI model training and data usage.
Transparency Now, Not Later
We also need tech companies to ask for clear consent in accessible language and offer simple opt-out mechanisms. If a user says “No,” that choice must be honored—immediately and without loopholes.
And even more importantly: we need auditable guarantees that the data of those who opt out is genuinely excluded from training datasets. Anything less is a betrayal of user trust.
Final Thoughts: The Future of Digital Rights Is Now
AI isn’t the problem. The problem is how we train it.
Companies like Meta are building AI models on years of user-generated content—content that was never meant to be used this way. And they’re doing it under the cover of legitimate interest, legal technicalities, and regulatory delay.
Unless European institutions act boldly and swiftly, privacy will remain a promise deferred. And by the time enforcement catches up, the next generation of algorithms—more powerful, more personal, and more invasive—will already be in place.
The choice is clear: either we reclaim control of our digital identities now, or we accept that in the age of AI, silence is consent.