Starting in early March 2026, Discord will downgrade every user account on the platform to a "teen-by-default" experience. Your content filters get locked on. Your DMs get restricted. You lose access to age-gated servers and channels. You can't speak on stage channels. To get your account back to normal, you have two options: let Discord scan your face via a video selfie, or hand over your government-issued ID.
Discord is framing this as a teen safety initiative. And on the surface, protecting young people online sounds like something no reasonable person could argue against. But when you look at the timing, the track record, and what's actually being asked of users, this isn't a safety story. It's a privacy story, and not a good one.
"Trust Us" Is Not a Security Model
Discord's privacy assurances for this rollout sound carefully crafted. Video selfies for facial age estimation are processed on-device and never leave your phone. Identity documents submitted to vendor partners are deleted quickly, in most cases immediately after age confirmation. And you only need to verify once.
These are claims, not guarantees. And with a closed-source platform, you have no way to verify any of them. You can't audit the code. You can't inspect the data pipeline. You can't confirm that your video selfie truly stays local, or that your passport scan was actually deleted. You're being asked to take Discord's word for it.
Four months ago, Discord gave us a very clear demonstration of what their word is worth.
October 2025: 70,000 Government IDs Leaked
In October 2025, Discord disclosed that a third-party customer service provider had been breached. The vendor was later identified as 5CA, a Netherlands-based outsourcing firm. A hacking group calling itself Scattered Lapsus$ Hunters gained access to its systems and spent roughly 58 hours extracting data.
The damage was significant. Discord confirmed that approximately 70,000 users had their government-issued ID photos exposed: passports, driving licences, the exact documents people had submitted to prove their age. On top of that, names, email addresses, Discord usernames, support chat transcripts, IP addresses, and partial billing details were all compromised. The attackers claimed to have stolen 1.5 terabytes of data affecting 5.5 million users. Discord disputed those numbers, but the 70,000 confirmed ID exposures alone represent a serious breach of the most sensitive personal data a platform can hold.
Then things got murkier. 5CA denied that its systems were involved and claimed it had never handled government-issued IDs for Discord. Discord maintained that the breach came from 5CA. The result: two companies pointing fingers at each other while 70,000 people's identity documents circulated in hacker channels.
And this wasn't even Discord's first third-party breach. In March 2023, a support agent's account was compromised, exposing email addresses, messages, and attachments from support tickets.
The pattern is clear: Discord collects sensitive data, hands it to third parties, and those third parties get breached. Now they want to scale that data collection to every single user on the platform.
The Age Verification Trap
Discord isn't doing this in a vacuum. The UK's Online Safety Act, Australia's age verification laws, and growing political pressure worldwide are pushing platforms to verify user ages. The intention, keeping children safe online, is something most people agree with. The implementation is where it falls apart.
Age verification at scale requires collecting exactly the kind of data that attackers find most valuable: government IDs, biometric data, proof-of-identity documents. Every platform that implements this creates a new honeypot. You're not reducing risk to children; you're creating risk for everyone.
The Electronic Frontier Foundation has consistently warned that users who submit identifying information online can never be sure how that data will be stored, shared, or protected. The Discord breach proved them right in the most concrete way possible.
Cybersecurity experts were equally blunt. One consultant described the Discord breach as "the first major test of the UK's age verification system", a test it failed. Another pointed out that even when age verification is outsourced, the business still bears accountability for ensuring the data is stored appropriately. Discord clearly didn't meet that standard.
There's also the inconvenient reality that these systems don't actually work as intended. In 2025, users discovered that Death Stranding 2's photo mode could be used to bypass Discord's "robust" facial age estimation. Meanwhile, data from the UK showed that age verification requirements were driving traffic away from compliant sites and toward non-compliant ones: the exact opposite of the intended effect. The people these laws are meant to protect simply find ways around them, while everyone else is left handing over their biometric data to companies with poor security track records.
What's Really Driving This
Discord is reportedly preparing to go public. An IPO-ready company needs to demonstrate regulatory compliance, particularly around child safety, one of the most politically charged issues in tech right now. Implementing global age verification checks a major box for regulators and investors alike.
Discord's head of product policy acknowledged to The Verge that the company expects "some sort of hit" to its traffic from this change, but said they'd find other ways to bring users back. That's a telling statement. This is a calculated business decision where the regulatory upside outweighs the user experience cost.
None of this is to say that child safety doesn't matter; it absolutely does. But there's a difference between genuinely protecting young people and implementing mass biometric surveillance to satisfy regulatory requirements ahead of a public offering. The framing is child safety. The function is compliance theatre backed by data collection at a scale Discord has already proven it cannot secure.
The Alternative: Transparency Over Trust
The fundamental problem with Discord, and every closed-source platform collecting biometric data, is that the entire model is built on trust. Trust that the code does what they say. Trust that the vendors are secure. Trust that the data gets deleted. Trust that the next breach won't happen.
Open-source software operates on the opposite principle: don't trust, verify.
When the source code is public, anyone can audit it. Security researchers, independent developers, your own team. Claims about on-device processing or data deletion aren't marketing copy; they're verifiable facts in a codebase. When vulnerabilities are found, they're disclosed publicly and fixed transparently, not buried in an internal incident report that surfaces months later.
For communication platforms specifically, open-source alternatives have matured significantly:
Element (Matrix protocol) is the closest equivalent to what Discord offers: text channels, voice, video, and communities, but built on an open, federated protocol. Messages are end-to-end encrypted by default. You can self-host the entire stack, meaning your data never touches a third party's infrastructure, and the kind of third-party vendor breach that hit Discord has no equivalent when you control the server. Federation means you're not locked into one provider: if your host fails, you can move to another homeserver instead of losing access to the whole network. (A minimal example of talking to a self-hosted homeserver follows this list.)
Rocket.Chat offers a similar self-hosted model with a more business-oriented feature set. It's designed for organisations that need compliance-friendly communications without handing data to a SaaS provider.
Revolt is an open-source platform that deliberately mirrors Discord's interface, making the transition easier for communities used to Discord's UX. It's self-hostable and still growing, but it demonstrates that the Discord experience doesn't require Discord's data practices.
For voice-specific needs, Mumble has been the gold standard for years: open-source, self-hosted, minimal data collection, and battle-tested by gaming communities long before Discord existed.
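To make the "control the server" point concrete, here is a minimal sketch using matrix-nio, one of the open-source Python client libraries for the Matrix protocol. The homeserver URL, user ID, room ID, and password are placeholders for whatever you host yourself, and end-to-end encryption (which needs the library's e2e extra and key setup) is omitted for brevity. The point is simply that every request goes to infrastructure you run, and you can read the library's source to confirm exactly what it sends.

```python
import asyncio

from nio import AsyncClient  # matrix-nio, an open-source Matrix client SDK


async def main() -> None:
    # Point the client at your own homeserver. "matrix.example.org" and the
    # user ID, room ID, and password below are placeholders, not real values.
    client = AsyncClient("https://matrix.example.org", "@alice:example.org")
    await client.login("your-password-here")

    # Send a plain-text message into a room that lives on the server you run.
    # No third-party vendor ever sees the traffic.
    await client.room_send(
        room_id="!yourroom:example.org",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hello from a server I control."},
    )

    await client.close()


asyncio.run(main())
```

The server side is just as inspectable: Synapse, the reference Matrix homeserver, is open source, so claims about what happens to your data can be checked against the code rather than taken on faith.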
The trade-offs are real. Self-hosting requires technical knowledge and infrastructure. The UX gap between these tools and Discord exists, though it's closing rapidly. Migration takes effort, especially for large communities. But the question isn't whether these alternatives are perfect; it's whether the trade-off between convenience and privacy is one you're willing to keep making as platforms demand more and more of your personal data.
This same principle extends beyond chat platforms. GrapheneOS proves you can have a fully functional smartphone without Google's surveillance layer. Linux proves you can have a productive desktop without Microsoft's telemetry. The pattern is consistent: wherever there's a closed-source platform harvesting your data, there's usually an open-source alternative that doesn't. The tools exist. The question is whether we care enough to use them.
What This Means for You
If you're a Discord user facing this change in March, here's where things stand:
If you decide to verify, the face scan option is marginally better than submitting your ID: Discord claims on-device processing, and even if you can't verify that claim, it's one fewer copy of your passport floating around a vendor's systems. Avoid submitting government documents if you can.
If you decide not to verify, your Discord experience gets significantly restricted but the account still functions. You won't lose your servers or contacts, but you'll be locked out of age-gated content, content filter controls, and some communication features.
Either way, be prepared for phishing. Discord has already warned that it will never contact users about verification by phone, email, or text; legitimate prompts come only through an in-app DM from its official system account. Every other communication claiming to be about age verification is a scam, and there will be a lot of them.
Longer term, this is a good moment to evaluate which platforms genuinely deserve your trust and your data, and which ones have simply made it convenient enough that you haven't questioned it.
The Bigger Picture
Discord's face scan mandate is not an isolated event. It's part of a pattern: governments pass age verification laws, platforms scramble to comply by collecting increasingly sensitive data, that data gets breached, and the cycle repeats. Roblox is implementing facial age checks. OpenAI is adding age prediction to ChatGPT. The infrastructure of biometric surveillance is being built across every major platform, all under the banner of protecting children.
The intent may be genuine. The execution is creating a world where using the internet requires handing over your face, your ID, or both, to companies that have repeatedly demonstrated they cannot keep that data safe.
Privacy isn't a feature. It's not a setting you toggle on. It's a fundamental right, and it's being eroded one "safety initiative" at a time. The tools to resist this exist. Open-source, self-hosted, privacy-respecting alternatives are available right now, for nearly every use case. They may require more effort to set up and maintain, but the alternative is continuing to hand your most sensitive data to companies that treat it as a compliance checkbox.
Discord wants your face scan. Four months ago, they couldn't even keep your passport photo safe. That should tell you everything you need to know.



