
Opinion

This piece is a sequel to my earlier writing on Section 230. That article examined how legal immunity outlasts the architecture it was designed to protect; this one explores the deeper mistake that follows: treating safety and liberty as opposing forces when system design no longer requires surveillance or centralized control.
A Re-reading of Benjamin Franklin for the Digital Age
“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” — Benjamin Franklin, Pennsylvania Assembly Reply to the Governor (1755)
Few quotations are invoked more often—and more imprecisely—than this one. It is routinely used as a veto against safety measures in technology and public policy, as though Franklin were issuing a blanket prohibition on protection itself. But that reading collapses under historical scrutiny. Franklin was not arguing against safety. He was warning against concentrated power justified by fear. Understanding that distinction matters enormously as we confront modern questions about digital harm, privacy, and child protection.
Franklin’s 1755 statement arose during a dispute between the Pennsylvania Assembly and Governor Robert Hunter Morris over taxation to fund frontier defense. The Assembly refused to cede its authority over funding and enforcement to the governor and the proprietary interests he represented. Franklin’s concern was not public safety; it was who controlled the mechanisms of safety, and whether that control was temporary or permanent. As historian Gordon Wood has observed, Franklin and his contemporaries were deeply skeptical of security granted at the discretion of authority rather than secured through durable civic structures (Wood, The Radicalism of the American Revolution, 1992). In short: Franklin opposed safety that required surrendering agency.
Today, Franklin’s quote is often used to frame debates as a binary choice: either accept surveillance and centralized control in the name of safety, or reject safety measures altogether in the name of liberty.
That framing assumes safety must come from surveillance, and control must come from a central authority. That assumption was largely true in Franklin’s era. It is no longer technologically inevitable.
This is where the debate has quietly—but materially—shifted. Historically, safety mechanisms relied on observation, reporting, and enforcement after harm occurred. In the digital context, this translates into content monitoring, mass data collection, and retrospective intervention. Critics are right to worry about these tools. Surveillance systems expand power and demand extraordinary trust in institutions. Scholars from Michel Foucault to Shoshana Zuboff have warned that such systems rarely remain limited to their original purpose (Zuboff, The Age of Surveillance Capitalism, 2019). But rejecting surveillance does not require rejecting safety.
Franklin lived in a world where safety could only be imposed from above. We do not. Modern systems can be designed to:

- prevent foreseeable harm by default, before it occurs, rather than detect it afterward;
- minimize or eliminate the collection of personal data;
- operate without centralized monitoring or discretionary enforcement.
This is not censorship. It is architecture. Legal scholars have long recognized this distinction. As Lawrence Lessig famously argued, “Code is law”—meaning system design shapes behavior as powerfully as statutes do (Lessig, Code and Other Laws of Cyberspace, 1999). Design choices can distribute responsibility rather than centralize power.
A fair counter-argument arises here:
“Adults should be free to choose their level of risk.”
That is reasonable—for adults. Children do not possess the same cognitive, emotional, or contextual discernment. Developmental psychology is unequivocal on this point (Steinberg, Age of Opportunity, 2014). Systems optimized for engagement, amplification, and personalization do not wait for maturity.
When harm occurs, responsibility is often diffused: platforms point to users, users point to platforms, and regulators intervene only after the fact.
But architecture rarely stands trial.
Another argument deserves respect:
“Platforms must innovate and remain profitable.”
True. Profit is not a moral failing. But tort law has long recognized that once safer alternatives are technically feasible, continuing to deploy more dangerous designs becomes legally relevant. This principle runs throughout product-liability doctrine (Restatement (Third) of Torts: Products Liability §2). As Judge Learned Hand famously framed it, negligence turns on whether the burden of prevention is outweighed by the probability and gravity of harm (United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947)). Once safer, privacy-preserving designs exist, the argument that harm was unavoidable collapses, ethically and legally.
Franklin warned against surrendering liberty to temporary safety enforced by authority. He did not warn against building safer systems. A modern restatement of his principle might read:
“Liberty is best preserved when safety is embedded in systems themselves, rather than imposed through surveillance after harm occurs.”
This approach respects free expression, minimizes data collection, and protects those least able to protect themselves—without expanding centralized power.
Perhaps the real question is not:
“How much liberty should we give up for safety?”
But rather:
“Why do we continue to rely on surveillance and after-the-fact enforcement when safer design is demonstrably possible?”
That is not an ideological question. It is a design question. Franklin would have recognized the difference. And I suspect he would have insisted we stop confusing the two.
Disclosure: I am a lawyer and a partner in ChildSafe.dev, a company working on privacy-first, safety-by-design technologies referenced here. The views expressed are my own. This article is provided for general informational and educational purposes only and does not constitute legal advice. Nothing herein creates an attorney–client relationship, nor should it be relied upon as a substitute for legal counsel regarding any specific situation.
© 2025 ChildSafe.dev · Carlo Peaas Inc. All rights reserved.