Opinion

Liberty, Safety, and the Mistake We Keep Making

Jan 12, 2026
02:37 AM

Author’s Note


This piece is a sequel to my earlier writing on Section 230. That article examined how legal immunity outlasts the architecture it was designed to protect; this one explores the deeper mistake that follows—treating safety and liberty as opposing forces when system design no longer requires surveillance or centralized control.


A Re-reading of Benjamin Franklin for the Digital Age


“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” — Benjamin Franklin, Pennsylvania Assembly Reply to the Governor (1755)


Few quotations are invoked more often—and more imprecisely—than this one. It is routinely used as a veto against safety measures in technology and public policy, as though Franklin were issuing a blanket prohibition on protection itself. But that reading collapses under historical scrutiny. Franklin was not arguing against safety. He was warning against concentrated power justified by fear. Understanding that distinction matters enormously as we confront modern questions about digital harm, privacy, and child protection.


What Franklin Was Actually Arguing About


Franklin’s 1755 statement arose during a dispute between the Pennsylvania Assembly and Governor Robert Hunter Morris over taxing the proprietary Penn estates to fund frontier defense. The Assembly refused to trade away its taxing authority for defense funds offered on the Proprietors’ terms. Franklin’s concern was not public safety; it was who controlled the mechanisms of safety, and whether that control was temporary or permanent. As historian Gordon Wood has observed, Franklin and his contemporaries were deeply skeptical of security granted at the discretion of authority rather than secured through durable civic structures (Wood, The Radicalism of the American Revolution, 1992). In short: Franklin opposed safety that required surrendering agency.


The Modern Misreading


Today, Franklin’s quote is often used to frame debates as a binary choice:


  1. Freedom or safety
  2. Privacy or protection
  3. Speech or child welfare


That framing assumes safety must come from surveillance, and control must come from a central authority. That assumption was largely true in Franklin’s era. It is no longer technologically inevitable.



Surveillance vs. Safety by Design


This is where the debate has quietly—but materially—shifted. Historically, safety mechanisms relied on observation, reporting, and enforcement after harm occurred. In the digital context, this translates into content monitoring, mass data collection, and retrospective intervention. Critics are right to worry about these tools. Surveillance systems expand power and demand extraordinary trust in institutions. Scholars from Michel Foucault to Shoshana Zuboff have warned that such systems rarely remain limited to their original purpose (Zuboff, The Age of Surveillance Capitalism, 2019). But rejecting surveillance does not require rejecting safety.



The Alternative Franklin Never Had


Franklin lived in a world where safety could only be imposed from above. We do not. Modern systems can be designed to:


  1. Operate locally (on the device, not the network)
  2. Minimize or eliminate data collection
  3. Intervene early, before harm escalates
  4. Reduce the need for enforcement altogether


This is not censorship. It is architecture. Legal scholars have long recognized this distinction. As Lawrence Lessig famously argued, “Code is law”—meaning system design shapes behavior as powerfully as statutes do (Lessig, Code and Other Laws of Cyberspace, 1999). Design choices can distribute responsibility rather than centralize power.
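
To make the design point concrete, the following is a deliberately simplified sketch in Python of what “operate locally, collect nothing” can look like. Every name in it (LocalRiskModel, assess_locally, RISK_THRESHOLD) is invented for illustration and is not the API of any real product; a production system would use an actual on-device classifier rather than a keyword heuristic. What matters is the shape of the design: content is evaluated on the device, nothing is transmitted or retained, and the only output is a local decision about whether to prompt the user.

  # Illustrative sketch only. All names are hypothetical; this is not the
  # interface of any real product, and the scoring logic is a placeholder.
  from dataclasses import dataclass

  RISK_THRESHOLD = 0.8  # assumed tuning parameter, chosen for illustration

  @dataclass
  class Assessment:
      score: float      # 0.0 (benign) to 1.0 (high risk)
      intervene: bool   # True -> show a local prompt before the action completes

  class LocalRiskModel:
      """Stand-in for an on-device model; a real system might ship a small
      quantized classifier inside the app instead of this keyword check."""
      def score(self, text: str) -> float:
          risky_phrases = {"send your address", "keep this secret"}
          return 1.0 if any(p in text.lower() for p in risky_phrases) else 0.0

  def assess_locally(text: str, model: LocalRiskModel) -> Assessment:
      """Everything happens on the device: no logging, no network calls,
      and no retention of the analyzed text."""
      score = model.score(text)
      return Assessment(score=score, intervene=score >= RISK_THRESHOLD)

  if __name__ == "__main__":
      result = assess_locally("please keep this secret and send your address",
                              LocalRiskModel())
      print(result)  # Assessment(score=1.0, intervene=True)

Note the asymmetry with a surveillance architecture: there is no server to subpoena, no database to breach, and no moderator queue to staff, because the intervention happens before any data leaves the user’s hands.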



The Hard Question: Children and Discernment


A fair counter-argument arises here:


Adults should be free to choose their level of risk.


That is reasonable—for adults. Children do not possess the same cognitive, emotional, or contextual discernment. Developmental psychology is unequivocal on this point (Steinberg, Age of Opportunity, 2014). Systems optimized for engagement, amplification, and personalization do not wait for maturity.


When harm occurs, responsibility is often diffused:


  1. Parents are blamed for insufficient supervision
  2. Schools for insufficient education
  3. Law enforcement for slow intervention


But architecture rarely stands trial.



Profit, Responsibility, and Legal Reality


Another argument deserves respect:


Platforms must innovate and remain profitable.


True. Profit is not a moral failing. But tort law has long recognized that once safer alternatives are technically feasible, continuing to deploy more dangerous designs becomes legally relevant. This principle runs throughout product-liability doctrine (Restatement (Third) of Torts: Products Liability §2). As Judge Learned Hand famously framed it, negligence turns on whether the burden of prevention is outweighed by the probability and gravity of harm (United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947)). Once safer, privacy-preserving designs exist, the argument that harm was unavoidable collapses—ethically and legally.
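
In its familiar shorthand, Hand’s test compares the burden of a precaution with the expected harm it would prevent:

  B < P × L

where B is the burden (cost) of the precaution, P the probability of the harm, and L the gravity of the resulting injury. When the inequality holds and the precaution is not taken, the formula points toward negligence; as privacy-preserving, on-device safeguards lower B, the calculus shifts accordingly.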



A Franklinian Update


Franklin warned against surrendering liberty to temporary safety enforced by authority. He did not warn against building safer systems. A modern restatement of his principle might read:


Liberty is best preserved when safety is embedded in systems themselves, rather than imposed through surveillance after harm occurs.


This approach respects free expression, minimizes data collection, and protects those least able to protect themselves—without expanding centralized power.


The Question Worth Asking


Perhaps the real question is not:


How much liberty should we give up for safety?


But rather:

Why do we continue to rely on surveillance and after-the-fact enforcement when safer design is demonstrably possible?


That is not an ideological question. It is a design question. Franklin would have recognized the difference. And I suspect he would have insisted we stop confusing the two.



Disclosure & Disclaimer


Disclosure: I am a lawyer and a partner in ChildSafe.dev, a company working on privacy-first, safety-by-design technologies referenced here. The views expressed are my own. This article is provided for general informational and educational purposes only and does not constitute legal advice. Nothing herein creates an attorney–client relationship, nor should it be relied upon as a substitute for legal counsel regarding any specific situation.

Dr. Gosch Loy Ehlers III

Strategic Operations Leader
Chief Operating Officer, The Proudfoot Group
Dr. Gosch Loy Ehlers III brings ChildSafe.dev's groundbreaking technology to the organizations that need it most. As Chief Operating Officer of the Proudfoot Group, the commercial engine behind ChildSafe.dev and RoseShield Technology, he transforms cutting-edge child protection innovations into deployable solutions for government agencies, defense organizations, and enterprise clients worldwide. Drawing on three decades of military legal service and corporate leadership, Dr. Ehlers architects the operational frameworks, compliance structures, and scalability strategies that allow ChildSafe.dev to expand into highly regulated sectors. His expertise bridges the gap between innovative AI technology and the stringent requirements of federal, defense, and commercial markets, ensuring ethical child safety solutions can reach every platform that serves young users.

