Shoutout to mycelias for helping make this post a LOT better.
Early crypto advocates, with what now appears to be touching naïveté, believed pseudonymity would suffice. The reasoning went: if your onchain address isn’t directly connected to your real-world identity, then who could possibly connect them? Modern chain-analysis tools cut through this thin veil with disturbing accuracy.
This total transparency isn’t a bug, it’s the feature that makes the chain work. Public verifiability enables trustless consensus without centralized authorities. Zero-knowledge proofs were still curiosities rather than practical tools when Bitcoin came around. The innovation wasn’t digital money (we’d already had that for decades). The innovation was practical digital money that doesn’t require trusted third parties.
But the very transparency needed for chains to work necessarily eviscerates financial privacy.
“Selective privacy” has emerged as the apparent sweet spot: transactions remain verifiable but not universally visible. Developers have created a variety of mechanisms, each purporting to find a workable equilibrium between privacy and transparency (and by extension, regulatory compliance).
Consider February’s zkLend exploit: An attacker stole 3,600 ETH, bridged them to Ethereum mainnet, and attempted to launder them through Railgun, a privacy protocol. But here’s where our story takes an unexpected turn: the funds were rejected. Railgun’s somewhat Orwellian-named “private proofs of innocence” system—a mechanism designed to block illicit funds from entering their system—identified the funds as suspicious and rejected them. The attacker could only withdraw tokens back to the original address, effectively trapped with their loot.
"This is a solid demonstration of Railgun’s privacy pools mechanism working in practice, allowing Railgun to avoid serving proceeds of crime without using any snooping/backdoors." VITALIK BUTERIN
Perhaps we’ve finally discovered a viable middle path, a technological solution to the seemingly irreconcilable demands of privacy and compliance in an ecosystem where privacy tools increasingly face regulatory scrutiny. The crackdown has been relentless—Tornado Cash and Bitcoin Fog developer arrests, the anti-encryption wave in the EU, and pressure on messaging apps like Signal and Telegram. (Ironically, Telegram isn’t private to begin with, which suggests the technical literacy informing these enforcement actions is primitive or nonexistent). The message is clear: implement backdoors for authorities or face consequences that range from criminal charges to sanctions to indefinite detention.
This pressure has catalyzed the development of “privacy pools” and similar mechanisms attempting the high-wire act of selective privacy—providing actual privacy only for legitimate users while (trying to) satisfy regulatory requirements.
But there’s a more fundamental question: are we solving the actual problem? Can technical mechanisms simultaneously provide meaningful privacy and satisfy regulatory requirements, or are we attempting to reconcile fundamentally incompatible objectives? No amount of cleverness can square a circle if the underlying objectives are in direct conflict.
In what follows, we ask whether privacy pools (objectively great tools in their own right) can actually fix the regulatory-compliance problem, or whether we are engaged in an elaborate exercise in self-deception, pretending that contradictory requirements can be simultaneously satisfied through ever more advanced mechanisms.
Tornado Cash and the sanctioning of privacy
On August 8, 2022, the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) sanctioned smart contracts. Specifically, they placed Tornado Cash on their Specially Designated Nationals (SDN) list, a designation typically reserved for terrorist groups and hostile nation-states. The code, which exists as a smart contract on a blockchain, cannot be modified, controlled, or shut down by anyone.
The response was swift and severe:
- Circle froze USDC in Tornado Cash-linked addresses.
- GitHub removed all repositories associated with the project.
- Tornado Cash core developers were arrested and held without trial for over a year. Roman Storm, one of the core devs, is facing up to forty-five years in federal prison.
A disturbing new precedent was set: writing privacy-preserving code that may be used by a criminal would now carry criminal penalties previously reserved for the criminals themselves.
Contrary to the regulatory narrative, Tornado Cash was explicitly designed with compliance in mind. It had a tool that allowed users to prove to third parties that their funds didn’t come from any illicit sources. Tornado Cash also blocked sanctioned addresses from its frontend. It was still sanctioned. 1 (The U.S. government lifted the Tornado Cash sanctions on March 21, 2025.)
The changing default: from privacy to transparency
Financial surveillance has increased massively in recent decades. Consider the trajectory: from the Bank Secrecy Act (which ironically mandated unprecedented financial disclosure) through FATF Recommendations 2 (Financial Action Task Force (FATF) recommendations are a set of measures countries are expected to implement to tackle illicit financial flows.) to cross-border information sharing agreements. The default regulatory position has flipped from “financial activity is private unless there’s cause for suspicion” to “all financial activity must be visible to authorities by default, with privacy granted only as a carefully controlled exception at regulatory discretion.”
In this inverted framework, financial privacy itself has become suspicious behaviour that warrants additional scrutiny. The burden of proof has shifted from authorities needing to justify why they should access your information to you needing to justify why they shouldn’t. This shift happened gradually, each step seemingly reasonable in isolation, until we arrived at a surveillance apparatus that would have been unthinkable a generation ago.
This regulatory climate creates a collision between two incompatible worldviews:
- The privacy perspective: Privacy is a fundamental human right. Financial privacy is merely one expression of this broader right, not a special privilege to be granted by authorities. Systems should default to privacy, with selective disclosure occurring only when explicitly authorized by the user.
- The regulatory perspective: Complete financial visibility is necessary for preventing crimes ranging from tax evasion to terrorist financing. Systems should default to transparency, with privacy permitted only under tightly controlled circumstances that preserve unlimited regulatory visibility into both individual transactions and broader patterns of financial activity—which is to say, not privacy at all in any meaningful sense.
I find myself firmly in the first camp, 3 (A position you likely share if you’re reading this far. In case you don’t, I don’t have time to try to convince you, sorry. But consider reading Vitalik’s recent post on privacy.) though not without recognizing the legitimate challenges this position entails. In an ideal world we’d have one massive anonymity set comprising all users who wish to transact privately, built directly into a protocol. The problem, of course, is that we haven’t developed a compelling response to the concern that such systems could indeed be misused by criminals. Privacy advocacy loses force when it dismisses such concerns.
Selective privacy: A middle path?
Privacy pools attempt to thread this needle through what might be called selective or conditional privacy. They let users maintain privacy while demonstrating through cryptographic proofs, rather than direct disclosure, that their funds didn’t originate from illicit sources. People should be able to reveal or withhold their information and choose who they associate with. Privacy pools enable this.
Privacy pools are an objectively useful tool solving a real problem in real time. They create a separating equilibrium between honest and dishonest users through selective privacy. Users can choose to prove their funds did not come from unlawful sources without revealing their entire transaction history.
While designed to address regulatory concerns with Tornado Cash, privacy pools face regulatory uncertainty themselves. We don’t know if regulators will consider this an acceptable meeting point. They might simply ban it regardless of the compliance features.
Crypto Wars 1.0, the fights over communications backdoors back in the 90s (and Crypto Wars 2.0, unfolding now), show that appeasement is not a viable strategy for legitimizing privacy technology. The regulatory appetite tends towards expansion and pays no attention to attempts at deference.
Dissociation from bad actors might not really prevent authoritarians from declaring your privacy protocol illegal if the thing they want to outlaw is privacy itself. (I’m aware I might sound like I’m leaning too much into the worst case, but stranger things have happened, and the current EU regulatory crackdown doesn’t leave much room for optimism.) Even if privacy pools gain regulatory acceptance, authorities could simply demand increasingly granular exclusion criteria until the privacy benefits effectively evaporate. We would then find ourselves arguing for privacy on fundamental rights-based grounds again, maybe with more nuance around privacy and anonymity, but rights-based advocacy all the same.
One might just dig a deeper and deeper hole yet never unearth common ground. You can’t protocol-design yourself into compliance with arbitrary regulation-by-enforcement, because there are no consistent rules to begin with. This is precisely what we’ve seen with Tornado Cash. It had a compliance tool that allowed users to prove their funds were clean to a third party, like an exchange or law enforcement. It also blocked sanctioned addresses on the frontend. The DOJ still concluded the developers were liable for how Tornado Cash was being used by Lazarus. OFAC’s policy—created by administrative fiat, not by law—is that any withdrawal is illegal without their express permission and the disclosure of every transaction you’ve ever made. This wasn’t part of their original action, and was published only after they were sued over it.
Privacy pools exist as credibly neutral protocols that enable users to self-select into pools with provable properties. There are no admin keys, no backdoors, and no companies involved. This is huge. Even in the worst case, immutable smart contracts cannot be coerced into compliance. This immutability offers a solid guarantee against privacy revocation.
Privacy Pools
Privacy pools took a new approach: privacy for the nice guys. The idea had been floating around for a while, with Ameen Soleimani’s presentation and then v0 on Optimism, though the release of the paper is when it appeared on a lot of people’s radar. 4 (One must yearn for the distribution that having Vitalik as a coauthor brings.)
Privacy pools take a different approach from previous ZK-based designs. Instead of proving that a user has not interacted with sanctioned entities, which is infeasible 5 (We cannot prove negative statements efficiently using ZK.) to do, they prove that a user is a member of a whitelist of legitimate users.
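To make the membership idea concrete, here is a minimal Python sketch (not any production implementation; the hash choice and helper names are my own assumptions): a withdrawal proves that its deposit sits in an association set committed to as a Merkle root, without revealing which deposit it is. In a real privacy pool this check runs inside a zero-knowledge circuit, so the verifier only ever sees the root and the proof, never the leaf.

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    # Hypothetical hash; real pools use circuit-friendly hashes like Poseidon.
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root committing to the association set (the 'approved deposits' list)."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                 # pad odd levels by duplicating the last node
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling path for one deposit; each entry is (sibling, sibling_is_on_the_right)."""
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], i % 2 == 0))
        level = [h(level[j], level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify_membership(leaf: bytes, path: list[tuple[bytes, bool]], root: bytes) -> bool:
    """What the ZK circuit enforces: 'my deposit is in the approved set'."""
    node = leaf
    for sibling, sibling_on_right in path:
        node = h(node, sibling) if sibling_on_right else h(sibling, node)
    return node == root

# Toy association set of deposit commitments.
deposits = [hashlib.sha256(f"deposit-{i}".encode()).digest() for i in range(8)]
root = merkle_root(deposits)
proof = merkle_proof(deposits, 3)
assert verify_membership(deposits[3], proof, root)   # member: proof verifies
outsider = hashlib.sha256(b"sanctioned-deposit").digest()
assert not verify_membership(outsider, proof, root)  # non-member: proof fails
```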
There are three main components of a privacy pool (a sketch of how they fit together follows the list):
- A contract layer managing deposits, withdrawals, and state
- A zero-knowledge layer providing cryptographic proofs
- An Association Set Provider (ASP) layer maintaining “approved” transaction sets
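A rough sketch of how these layers might interact, in Python rather than an actual smart contract; all names here (WithdrawalProof, verify_proof, asp_root) are assumptions for illustration, and the zero-knowledge layer is stubbed out entirely.

```python
from dataclasses import dataclass, field

@dataclass
class WithdrawalProof:
    """Output of the ZK layer (stubbed here): attests that the withdrawer knows a
    deposit that is both in the pool and in the ASP's association set,
    without revealing which deposit it is."""
    nullifier: bytes          # unique per deposit; prevents double-withdrawal
    association_root: bytes   # ASP root the proof was generated against
    recipient: str

def verify_proof(proof: WithdrawalProof) -> bool:
    # Placeholder for the zero-knowledge verifier (the real check is a SNARK).
    return True

@dataclass
class PrivacyPool:
    """Contract layer: manages deposits, withdrawals, and state."""
    asp_root: bytes                                      # published by the ASP layer
    deposits: list[bytes] = field(default_factory=list)  # deposit commitments
    spent: set[bytes] = field(default_factory=set)       # seen nullifiers

    def deposit(self, commitment: bytes) -> None:
        self.deposits.append(commitment)

    def withdraw(self, proof: WithdrawalProof) -> bool:
        if proof.nullifier in self.spent:                # no double withdrawals
            return False
        if proof.association_root != self.asp_root:      # must use the current approved set
            return False
        if not verify_proof(proof):                      # ZK layer check
            return False
        self.spent.add(proof.nullifier)
        # ...send funds to proof.recipient here...
        return True

# Example: one deposit, one withdrawal against the ASP's published root.
pool = PrivacyPool(asp_root=b"asp-root")
pool.deposit(b"commitment-1")
assert pool.withdraw(WithdrawalProof(b"nullifier-1", b"asp-root", "0xrecipient"))
```

The point of the structure is that the contract only ever checks a nullifier, a root, and a proof; the identity of the depositor never appears onchain.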
The ASP component is evidently the most vulnerable part—responsible for maintaining lists of approved deposit labels—introducing a new trusted entity in an otherwise trustless system. The ASP determines who can be part of the pool and under what conditions. Privacy pools assume the existence of accurate, up-to-date blacklists of “bad” addresses. In practice, there’s an unavoidable time delay between illicit activity and blacklisting, creating an exploitable window for attackers. The curators of these lists might get compromised, censored, or simply have to obey local compliance requirements.
There is a solution to the ASP trust problem: anyone who disagrees with the whitelist criteria can create their own. The issue then becomes getting a large enough anonymity set for their privacy pool to provide meaningful protection.
Limitations
Time-based security challenges
In a reductive view, privacy pools have a probabilistic security model based on a race condition. They can only block addresses after they’ve been identified as malicious, creating a time-of-check/time-of-use problem. Detection mechanisms must identify malicious funds faster than attackers can deposit them—a contest that sophisticated adversaries will sometimes win. We need to reduce the typical delay between exploit and detection to be less than these systems’ waiting periods.
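A toy illustration of that race (entirely made-up timestamps and a hypothetical waiting period; not any specific implementation): whether a deposit ends up inside the anonymity set depends on whether the flag lands before the pool’s review window closes.

```python
from dataclasses import dataclass

@dataclass
class Deposit:
    commitment: str
    deposited_at: int        # unix timestamp, made up for illustration

WAITING_PERIOD = 3600        # hypothetical one-hour review window before inclusion

def admitted(deposit: Deposit, flagged_at: int | None, now: int) -> bool:
    """A deposit joins the anonymity set once the waiting period passes without
    a flag. A flag that arrives later is too late: in the basic model there is
    no retroactive removal."""
    admission_time = deposit.deposited_at + WAITING_PERIOD
    if now < admission_time:
        return False                                   # still in review
    if flagged_at is not None and flagged_at <= admission_time:
        return False                                   # detection won the race
    return True                                        # attacker won the race

d = Deposit("0xabc...", deposited_at=1_000_000)
print(admitted(d, flagged_at=1_002_000, now=1_010_000))  # False: flagged inside the window
print(admitted(d, flagged_at=1_005_000, now=1_010_000))  # True: the flag came too late
```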
The zkLend case has been celebrated as a validation of the privacy pools model, but its success hinged entirely on detection occurring within the narrow pre-deposit window. The case might be an exception where detection outpaced exploitation, not the expected baseline. Had the funds not been flagged in time, they would have permanently entered the anonymity set with no mechanism for retroactive removal.
Different implementations handle this timing challenge differently. 0xbow can remove deposits from their whitelist after acceptance, returning funds to the depositing address. While this prevents permanent tainting of the privacy pool, it merely relocates rather than fully solves the timing problem.
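A sketch of the post-acceptance removal idea (hypothetical names, not 0xbow’s actual code): the provider drops the deposit’s label from its approved set and republishes the root, so fresh membership proofs can no longer include it, and the funds go back to the depositing address instead of into the anonymity set.

```python
import hashlib

class AssociationSet:
    """Toy ASP state: a set of approved deposit labels plus a published root."""
    def __init__(self) -> None:
        self.approved: set[str] = set()

    def root(self) -> str:
        # Stand-in for a Merkle root over the approved labels.
        return hashlib.sha256("|".join(sorted(self.approved)).encode()).hexdigest()

    def approve(self, label: str) -> None:
        self.approved.add(label)

    def revoke(self, label: str) -> str:
        """Drop an already-accepted deposit and signal a refund. Proofs built
        against the republished root can no longer include this deposit."""
        self.approved.discard(label)
        return f"return funds for {label} to the depositing address"

asp = AssociationSet()
asp.approve("deposit-42")
old_root = asp.root()
print(asp.revoke("deposit-42"))   # deposit later flagged as illicit
assert asp.root() != old_root     # withdrawals must now prove against the new root
```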
Other limitations
Beyond timing issues, privacy pools face other challenges:
- The bootstrapping problem: Privacy pools cannot screen out malicious actors who simply create fresh addresses with no prior history. If such a clean address uses a privacy pool to anonymously fund a new clean recipient address and the recipient then engages in malicious behaviour, the entire whitelist gets tainted.
- Jurisdictional conflicts: Different regulatory regimes have fundamentally incompatible requirements for financial privacy and compliance. A system designed to satisfy U.S. regulators may violate EU laws, and vice versa.
Privacy pools (hopefully?) give the regulators what they want (the ability to exclude known bad actors) while still preserving privacy for everyone else. The question isn’t whether this is a perfect solution but whether it represents a reasonable compromise that would allow privacy-preserving technology to exist and mature while addressing legitimate regulatory concerns. (Hint: It does.)
Endnote
Privacy pools represent our strongest attempt yet to square the circle—to create a system that provides both meaningful privacy and regulatory compliance. They offer a sweet spot that many can live with, even if reluctantly.
There’s something backwards about how systems that were meant to eliminate trusted third parties now have to prove their trustworthiness to those parties. Hopefully, a few technological breakthroughs will let us return to first principles: where privacy is the default, not a privilege to be earned, and where innocence is presumed and it is guilt that has to be proven.