Ever fancied showing your passport to a website just to look at a meme of Michelangelo’s David?
Welcome to 2025, where X (formerly Twitter, now a karaoke bar with extra paperwork) has decided that the path to safety is paved with selfies, ID scans, and a sprinkling of GDPR fairy dust.
The pitch is noble: protect the children.
The execution? About as elegant as asking a nightclub bouncer to guard a library.
Data is “deleted in 30 days” (or whenever the next audit finishes, whichever comes later), regulators wag fingers about “purpose limitation,” and VPN providers rub their hands like Victorian shopkeepers on Christmas Eve.
In my latest piece, I dig into the comedy of errors that is age verification on X: the legal contortions, the privacy risks, the suspicions that your shiny new selfie might also help advertisers decide whether you’d prefer baldness cures or sports cars.
Spoiler: it’s not just Premium users anymore—everybody’s invited to the bureaucratic buffet.
Read it with a stiff cup of tea, a raised eyebrow, and the certain knowledge that liberty, privacy, and common sense are all queuing at the door, ID in hand.

Let’s set the scene. You’re scrolling X—formerly Twitter, now rebranded as a sort of karaoke bar of unfiltered outrage—when you hit a content warning. “This media may contain sensitive content. To proceed, please verify your age.” Fine, you think. Surely it’ll be a quick tick-box: “Yes, I solemnly swear I’m over 18.” But no, X has grander ambitions. Out comes the request: “Upload a selfie. Upload your ID. Trust us, it’s only for safety.”
Welcome to 2025, where viewing memes requires the same bureaucracy as renewing your driving licence.
A Short History of Trying (and Failing) to Age-Proof the Internet
This is not our first rodeo. Back in 2019, the UK government attempted the infamous “porn block”—a plan requiring credit card details or passport checks to access adult sites. It was hailed as a world first. It collapsed in embarrassment before it even launched, plagued by privacy concerns, technical flaws, and the dawning realisation that teenagers with VPNs are more resourceful than politicians with white papers.
Fast forward to today, and the Online Safety Act 2023 has revived the dream. Ofcom now holds the power to demand robust age checks from platforms hosting “primary priority” content—pornography at the top of the list. Cue a wave of companies scrambling to bolt ID-check widgets onto their services before regulators arrive with their £18 million hammer.
The EU, too, has long flirted with the idea of age gates. The Audiovisual Media Services Directive of 2010 included vague nudges about protecting minors, while the newer Digital Services Act pushes platforms to create safer online environments. What it lacks in clarity, it makes up for in bureaucratic charm.
And then there’s GDPR—the General Data Protection Regulation, the EU’s privacy gospel since 2018. It insists on purpose limitation, minimisation, transparency, and accountability. In theory, that means if a platform says your passport scan is only to prove you’re over 18, it must not also use it to sell you targeted ads for hair-loss treatments. In theory.
What X Says (and Doesn’t Say)
According to X’s Help Center, the platform may use several “signals” to determine your age: declared birth date, account history, phone number, email domain, even your social graph. But when “necessary,” it reserves the right to demand biometric selfies or government IDs.
The good news? The company says IDs and selfies are deleted within 30 days, leaving only your confirmed date of birth. The bad news? We’ve heard this tune before. “Deleted” often means “moved to a colder server, pending audit.” Logs exist. Metadata lingers. And GDPR’s accountability principle requires some trace of your verification to survive in case a regulator demands proof. Schrödinger’s Data, both gone and retained.
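To make the "Schrödinger's Data" point concrete: a platform can satisfy GDPR's accountability principle without keeping the document itself, by retaining only a minimal attestation that a check took place. A hypothetical sketch — the names and record shape are illustrative assumptions, not X's actual implementation:

```python
import hashlib
import secrets
from datetime import datetime, timezone

def make_attestation(document_bytes: bytes, outcome: str) -> dict:
    """Keep proof that a check happened, not the document itself."""
    salt = secrets.token_hex(16)
    # One-way fingerprint: enough to show *something* was checked,
    # useless for reconstructing the passport scan.
    digest = hashlib.sha256(salt.encode() + document_bytes).hexdigest()
    return {
        "document_fingerprint": digest,
        "salt": salt,
        "outcome": outcome,                      # e.g. "over_18_confirmed"
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

attestation = make_attestation(b"<raw passport scan>", "over_18_confirmed")
# The raw scan can now be deleted; only the attestation survives for auditors.
```

Whether the real system looks anything like this is exactly the question regulators should be asking.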
More eyebrow-raising is the blurry line between verification and profiling. A Reddit thread claims X also uses age information to refine ads and recommendations. If true, this would cross into the realm of secondary use—an extra processing purpose under GDPR, requiring fresh transparency and possibly fresh consent. Nothing in X’s help pages states this openly. If the suspicion is correct, we’ve drifted from “protecting children” into “optimising ad bubbles.”
Sensitive Content: A Moving Target
X classifies “sensitive content” broadly: adult nudity, sexual behaviour, graphic violence, and anything the platform deems shocking. Accounts posting such material must flag themselves as “sensitive,” and users must tick a box in settings to view it. Without verification, posts are blurred, labelled, or hidden entirely (X rules).
But the process is opaque. What constitutes “sensitive”? Is Michelangelo’s David sensitive? What about breastfeeding images? Or political satire involving nudity? Moderation teams vary, appeals are inconsistent, and cultural biases creep in. Under the EU’s Digital Services Act, platforms must explain their moderation criteria and offer transparent redress. But explanations on X remain thin—if you’ve ever filed an appeal, you’ll know the reply resembles an astrological horoscope more than a legal document.
The User Experience: Kafka with a Webcam
Imagine telling your grandparents: “To see a tweet, you need to show your passport to your phone.” The absurdity is the point. Platforms that once thrived on anonymity—think Twitter in its early days of pseudonyms and parody—are now demanding real-world IDs. The Wild West has been gentrified, with bureaucrats at the saloon door.
For many, the reaction is predictable: VPN use is exploding. Following the Online Safety Act’s rollout, VPN providers reported traffic spikes of 500% in the UK (Times of India). Citizens aren’t suddenly libertines; they just don’t fancy handing over passports for memes.
GDPR: The Killjoy in the Room
Let’s walk through the GDPR obligations X ought to meet:
- Lawful basis: The platform must justify why it needs your ID. Protecting minors may count as “legal obligation” or “legitimate interest.” But neither automatically permits using the same data for advertising.
- Purpose limitation: If it’s for age verification, it stays for age verification. Using it for ad profiling would require separate consent.
- Data minimisation: If a birth date suffices, why demand a full passport? Over-collection is unlawful.
- Special categories: Biometric data (selfie scans) counts as sensitive. Processing requires explicit consent and high safeguards.
- Retention: Delete when done, keep no more than necessary. "30 days" must actually mean 30 days.
- Transparency: Policies must be clear, accessible, and unambiguous. Hiding ad-profiling behind vague language doesn’t cut it.
The European Data Protection Board recently warned that most age-assurance tools fail these standards. That’s not a minor quibble; it’s a flashing neon sign saying “see you in court.”
Who Guards the Guardians?
Enforcement is a tug-of-war.
- In the UK, Ofcom can fine platforms up to £18 million or 10% of global revenue, whichever is greater, for failures under the Online Safety Act. That's a sum big enough to make Elon Musk's eyebrows twitch.
- In the EU, national data protection authorities enforce GDPR, backed by the EDPB. Complaints can take years, but fines can reach 4% of global turnover. Just ask Meta, which has racked up billions in penalties.
- The Digital Services Act adds transparency demands, requiring large platforms (like X) to disclose moderation policies, risk assessments, and data-access processes.
In practice? Regulators move slower than users with VPNs. By the time Ofcom issues its first fine, half the UK will have learned how to route their traffic through Iceland.
Data Breaches: A Matter of “When,” Not “If”
History offers plenty of horror stories:
- In 2019, the UK’s Home Office lost thousands of biometric records during a database error.
- In 2020, Clearview AI’s client list of police agencies leaked.
- In 2021, ID.me, a contractor used by US state governments for unemployment checks, faced scrutiny for storing selfies and biometric data longer than disclosed.
If government contractors can’t keep ID data safe, do we really believe X will? One sloppy misconfiguration, one rogue insider, and your passport scan is up for sale alongside your email in a darknet bazaar.
Liberty vs. Security: The Old Dance
The rhetoric is noble: “We must protect the children.” Who disagrees with that? But the execution erodes privacy for all, while effectiveness remains doubtful. Teenagers are resourceful; adults are weary. The system creates inconvenience for the law-abiding, and opportunities for criminals.
As Churchill might paraphrase for 2025: “They could choose between liberty and security. They chose security, gained neither, and lost the liberty.”
The Advertising Bubble Question
Let’s circle back to the suspicion: are these selfies and IDs really just for age checks? Or do they conveniently enrich X’s advertising data?
If my ID says I’m 45, will I start seeing ads for baldness cures, midlife crisis convertibles, and energy supplements? If my passport is French, will I be fed French political propaganda? Under GDPR, if age or nationality data derived from ID verification influences ad targeting, X must clearly declare it. Spoiler: it doesn’t.
And who checks? The ICO in the UK, CNIL in France, the Irish DPC (where X is EU-registered). These agencies have the power to demand audits. Whether they exercise it is another matter.
Sensitive Content: A Convenient Vague Zone
By keeping “sensitive content” definitions broad, X reserves discretion. Today it’s porn; tomorrow it could be politically awkward memes. The EU’s DSA demands proportionate, transparent content moderation, but vague “sensitivity” rules risk becoming a censorship tool. Is age verification the thin end of the wedge?
Conclusion: Schrödinger’s Passport
So here we stand. To browse X freely, you must present ID the way a nightclub bouncer demands it at the door. Your passport is scanned, your selfie analysed, your data "deleted" (probably). Regulators nod sternly, VPN sales skyrocket, and users adapt. Safety is claimed, liberty is trimmed, security is compromised.
The bitter punchline: the very data collected to protect us may end up undermining our privacy, fuelling advertising bubbles, and leaking into criminal hands.
Civilisation, meet your age-checked internet. Don’t forget your passport at the door.
References
- X Help Center – Age Assurance
- X Media Policy (Sensitive Content)
- European Data Protection Board – Statement on Age Assurance Technologies
- UK Online Safety Act 2023 – Wikipedia
- TechRadar: Age Verification and Privacy
- Times of India – VPN Usage After Porn Rules
- PC Gamer – Age Verification Privacy Nightmare
- Wired – The Age-Checked Internet Has Arrived