Urs Gasser and Viktor Mayer-Schönberger on Guardrails


In Guardrails: Guiding Human Decisions in the Age of AI, Urs Gasser and Viktor Mayer-Schönberger show how the quick embrace of technological solutions can lead to results we don’t always want. They explain how society itself can provide guardrails better suited to the digital age: guardrails that empower individual choice while accounting for the social good, encourage flexibility in the face of changing circumstances, and ultimately help us make better decisions as we tackle the most daunting problems of our times, such as global injustice and climate change.


Your book Guardrails opens with a pilot’s fateful decision to trust a human rather than a machine—and the resulting death of over seventy people, including scores of children. Are you suggesting we’d be better off delegating our decision-making to a technology such as AI?

UG: It’s true: over the past decades, we have discovered numerous flaws in human decision-making, such as cognitive biases. However hard we try, we can’t completely avoid the risk of making bad choices—not only as pilots, but all of us. So using technology to improve our decision-making, or even delegating decisions to an AI, seems like a prudent option. At the same time, though, individual agency—our perceived capacity to choose—is, for many of us, foundational to what it means to be human. That’s the dilemma: we really want to be in charge and decide, even though we are often bad at it. AI may well make better choices, but we’d lose agency—what we really care about.

Is it AI that causes this challenge?

VMS: No, but it amplifies what’s at stake. The dilemma has been with us since the beginning of humanity. And societies have responded to it with a diverse set of means to influence our individual decision-making. Think, for instance, of social norms, formal laws, and subtle nudges. We call them guardrails. And our book is about how to get them right. Because improving individual decision-making without destroying human agency is the single most important mission as we face so many grave challenges, from climate change to social polarization to the unravelling of a unipolar world. Simply putting a technology in charge won’t help if we do not know how we want individual decisions to be shaped.

Sure, but don’t we want people to make good decisions? Why should that be so hard? 

UG: Because what a good decision is depends on one’s values and preferences. If society were simply to impose its values on our choices, we’d have no agency anymore. That’s why we need design principles for such guardrails, rather than a finished blueprint that lets people quite literally run on autopilot. In our book we propose four such principles. Our aim is to refocus the debate—away from specific means to presumably escape the dilemma, such as technical band-aids, toward a recognition of what’s really needed. When we apply these design principles, we realize that humanity’s toolkit for influencing individual decisions is much bigger, more diverse, and frankly far more useful than any single technology.

So AI is our enemy? Are you against using technology to fix biased human decision-making?

VMS: We are not against a particular technology; we are simply skeptical of too narrow a view of what’s needed to guide human decision-making, of pushing full speed ahead without checking that we are going in the right direction. That’s where the design principles come to bear: they’ll enable us to turn AI into an indispensable element of a broader set of tools that improve our decisions yet protect human agency.

Both of you have written extensively on key features of our digital age. Your book, obviously, takes a normative look at governance—what kind of guardrails we need. Help us understand: how are you shifting our governance perspective here?

UG: In the 1990s, when the Internet became really popular, we focused on network governance—on who controls the network, or at least key elements of it. Later we looked at the immense power of digital platforms and how that requires societal oversight. More recently, we have understood the crucial role that data plays—from revealing our innermost secrets and preferences to serving as the raw material for AI’s training. Our focus shifted from network governance to data governance. Many companies and governments today employ chief data officers, who set internal rules for data use. Policy makers around the world have imposed regulatory frameworks for data governance. The EU has even passed its own “Data Act”. But as important as network and data governance are, such perspectives are incomplete. They are largely silent about this key human quality, our ability to choose. If we want to keep our seat at the table of evolution and retain relevance as a species, we need to focus on decision governance: the design of all the diverse guardrails that shape decision-making. Decision governance is not just the next hype; it’s the single most important lever we have for influencing the trajectory of our future.

Who will benefit from reading your book?

VMS: It might be tempting to suggest that this is a book for policy makers, or perhaps for technologists. But we quite deliberately cast our net much wider. The stories and examples in our book go far beyond the digital realm. By focusing on how to design guardrails for decision-making, we appeal to everyone who benefits from good decisions—whether in a company, in an organization, or as an individual in society. Because with the right design principles in mind, we can all create guardrails that improve human decisions. Given the challenges we face, we can’t start soon enough.


Urs Gasser is professor of public policy, governance, and innovative technology and dean of the School of Social Sciences and Technology at the Technical University of Munich.

Viktor Mayer-Schönberger is professor of internet governance and regulation at the University of Oxford.