Why “Leo” Again: Papal Naming, Social Safety Nets, and Designing Protection in Times of Technological Change
Historical Interest

Jan 4, 2026
10:26 PM


A question that led me down a rabbit hole


Over the holidays, I came across an older article suggesting that Pope Leo XIV may have chosen his papal name in part because of concerns about artificial intelligence and its societal impact. That idea lingered with me. It wasn’t framed as a definitive claim—more an observation—but it was enough to send me down a rabbit hole.

What began as casual curiosity turned into several days of reading: papal naming traditions, nineteenth-century social doctrine, late Roman history, and modern discussions about how societies respond when technology reshapes everyday life. This article is the result of that exploration—not an argument, and certainly not a declaration of authority, but an attempt to understand why the name Leo appears, again and again, at moments of profound change.


A brief disclaimer, up front


Before going further, it’s important to be clear about what this is—and what it is not. I am not a Catholic theologian, Church historian, or doctrinal expert. I do not claim to interpret Church teaching or speak on behalf of the Catholic tradition. What follows is based on publicly available statements, historical sources, and my own reading and interpretation, shaped by professional experience working with governance, technology, and high-trust systems. Any errors or misreadings are mine.


Why papal names matter


In the Catholic Church, a pope’s chosen name is rarely accidental. It is one of the first acts of a new pontificate and often signals continuity with a predecessor, an intended emphasis, or a particular historical moment the pope wishes to invoke. When Pope Leo XIV announced his name, many observers immediately began asking why Leo—and why now.


Leo XIV and the explicit reference to Leo XIII


Shortly after the conclave, Matteo Bruni, Director of the Holy See Press Office, explained in remarks reported by Vatican News that Pope Leo XIV’s choice was a deliberate reference to Pope Leo XIII, particularly to his 1891 encyclical Rerum Novarum. Bruni noted that the reference was meant to draw a parallel between the industrial revolution of the nineteenth century and today’s technological transformation, especially the effects of artificial intelligence on work and society.


That interpretation was reinforced by Leo XIV himself in his address to the College of Cardinals, where he explicitly invoked Rerum Novarum and described the present moment as one in which technological change is once again outpacing the social structures meant to protect people. The implication was not subtle: the Church, he suggested, faces a challenge similar in shape—if not in substance—to the one Leo XIII confronted more than a century ago.


Leo XIII and Rerum Novarum: protection in an altered world


When Leo XIII issued Rerum Novarum in 1891, he was writing into a world already transformed by industrialization. The encyclical itself returns repeatedly to the consequences of that shift: wage labor replacing traditional forms of work, unsafe conditions in factories, the weakening of family and community ties, and the growing imbalance of power between workers and owners. Leo XIII describes laborers as increasingly “isolated and helpless,” language that reads as an attempt to name a new kind of vulnerability produced by modern economic systems.


What stands out, at least in my reading, is how measured his response was. Rerum Novarum does not call for revolution or wholesale rejection of the economic order. Instead, it tries to draw boundaries around it. Leo XIII emphasizes the dignity of labor, the moral limits of markets, and the legitimacy of institutional safeguards when economic forces alone fail to protect the vulnerable.


In modern language, scholars often describe Rerum Novarum as an early contribution to what we would now call social protections or safety nets. That terminology came later, of course. But the underlying idea is already present: when systems grow large and powerful enough to shape daily life, they also require built-in safeguards—ways of absorbing shock, limiting harm, and preserving dignity as scale and speed increase.


The older echo: Leo I and stability under stress


Leo XIII was not the first “Leo” to lead during upheaval. Leo I (“Leo the Great”), who reigned in the fifth century, guided the Church during the slow collapse of the Western Roman Empire. His pontificate unfolded amid political fragmentation, external threats, and doctrinal disputes. Historically, Leo I is remembered less for innovation than for consolidation—asserting moral authority where secular authority was weakening, and insisting on clarity and coherence at a time when institutions were under strain. Across centuries, the name Leo has become associated—intentionally or not—with a particular posture: leadership aimed at reinforcing foundational safeguards when underlying systems begin to fracture.


From social doctrine to system design: an interesting parallel


Again, to be clear: this is my own illustrative opinion on what I thought would be an interesting topic to explore in a LinkedIn article, not an attempt at some sort of important doctrinal epiphany. That said, I think there is a structural parallel between Catholic social teaching and modern technology design that is worth noting.


Leo XIII did not focus on isolated abuses. He tried to articulate principles that would shape the system itself—norms and constraints meant to prevent harm rather than merely respond to it after the fact. Protection, in this view, was not an add-on. It was foundational.


In contemporary technology and governance conversations, that same instinct appears in ideas like safety by design, privacy by design, and governance by design. These approaches emphasize embedding safeguards directly into system architecture instead of relying solely on enforcement, audits, or remediation once harm has occurred. In high-risk environments—child protection, defense, critical infrastructure, and regulated enterprise—this often means minimizing data rather than accumulating it, favoring local or on-device decision-making over centralized surveillance, and enforcing policy continuously rather than episodically.
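
To make that contrast concrete, here is a minimal illustrative sketch in Python. It is not drawn from any product mentioned in this article; every name in it (PolicyDecision, evaluate_locally, send_message, BLOCKED_TERMS) is hypothetical, and the ruleset is deliberately trivial. The point is structural: the check runs locally at the moment of action, only a hashed, timestamped decision record is kept for auditing, and the raw content is never stored or transmitted.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    import hashlib

    @dataclass
    class PolicyDecision:
        allowed: bool            # was the action permitted?
        rule: str                # which (hypothetical) rule fired, if any
        content_digest: str      # hash only; the raw content is never retained
        timestamp: str           # when the decision was made

    # A stand-in for a real, locally stored policy ruleset.
    BLOCKED_TERMS = {"example-harmful-term"}

    def evaluate_locally(message: str) -> PolicyDecision:
        """Run the policy check on the device, before the message is acted on."""
        violation = next((t for t in BLOCKED_TERMS if t in message.lower()), None)
        return PolicyDecision(
            allowed=violation is None,
            rule=violation or "none",
            content_digest=hashlib.sha256(message.encode()).hexdigest(),
            timestamp=datetime.now(timezone.utc).isoformat(),
        )

    def send_message(message: str, audit_log: list) -> bool:
        """The safeguard sits inside the send path itself, not in a later audit."""
        decision = evaluate_locally(message)
        audit_log.append(decision)   # auditable trail without accumulating raw data
        return decision.allowed

The contrast with an after-the-fact model is the whole point: there, the message would be sent first, logged in full, and reviewed later; here, the decision happens before the action, and the audit trail carries a digest rather than the data itself.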


Modern examples of governance as architecture


Within this broader shift, platforms such as RoseShield™, Carlo™, and DiscernAI™ are best understood not as products, but as expressions of an architectural mindset. RoseShield™, developed within the ChildSafe.dev ecosystem, was originally designed to protect children by keeping safety decision-making on the device itself—reducing data exposure while enforcing safeguards in real time. Carlo approaches governance as a runtime function, focusing on continuous policy enforcement and auditability as systems operate. DiscernAI, in turn, concentrates on message integrity—helping organizations ensure that their brand and public identity remain aligned with stated values, legal requirements, and mission intent. These systems are not unique in principle. They are representative of a wider movement across sectors toward built-in protections that operate by default, especially where failure carries real-world consequences.


What the name “Leo” may be signaling


Taken together, Pope Leo XIV’s choice of name appears deliberate, historically grounded, and explicitly connected—by Vatican officials and by the Pope himself—to Leo XIII’s response to industrial disruption. The parallel drawn between the industrial age and the AI era suggests a shared concern: how societies preserve dignity, agency, and trust when technology reshapes the conditions of everyday life.


Read this way, the recurring appearance of Leo at moments of upheaval points to a simple but demanding insight. When systems grow powerful enough to shape human lives at scale, good intentions are not enough. Protection has to be structural. Safeguards have to be part of how systems are built, not merely how they are judged after the fact.


That lesson was already visible in the age of steam. It may be even more urgent in the age of artificial intelligence.

Dr. Gosch Loy Ehlers III

Strategic Operations Leader · Chief Operating Officer, The Proudfoot Group
Dr. Gosch Loy Ehlers III brings ChildSafe.dev's groundbreaking technology to the organizations that need it most. As Chief Operating Officer of the Proudfoot Group, the commercial engine behind ChildSafe.dev and RoseShield Technology, he transforms cutting-edge child protection innovations into deployable solutions for government agencies, defense organizations, and enterprise clients worldwide. Drawing on three decades of military legal service and corporate leadership, Dr. Ehlers architects the operational frameworks, compliance structures, and scalability strategies that allow ChildSafe.dev to expand into highly regulated sectors. His expertise bridges the gap between innovative AI technology and the stringent requirements of federal, defense, and commercial markets, ensuring ethical child safety solutions can reach every platform that serves young users.
