Trust has an ineffable quality that resists most of our attempts to quantify it. It’s not a sin and it’s not always a virtue. It can be good. It can be blindly foolish. We throw it out half-jokingly to people we’ve never met before to try to put them at their ease. It’s OK, I trust you. We use it about a brand of soap, and about God. We reproach people for failing to live up to it. I trusted you. Very rarely do we simply say, with complete sincerity: I trust you. It goes better unsaid.
In the past millennium, humans have gradually expanded their circles of trust. At first, trusting relationships were interpersonal or with God. Then people began to trust institutions like governments, banks, and hospitals. In the past 150 or so years, they have come to trust brands, too. Now we are developing new ways to establish each other’s credibility, driven by artificial intelligence. If we are, as some people think, living in an Age of Uncertainty—dominated by fear of climate change, disease, and an overwhelmed welfare state—then these new ways of establishing trust hold promise. But they also have the potential to provoke division, exclusion, and ultimately isolation.
For politicians and journalists, one of the most alarming developments of recent times is people’s growing trust in business (63%, according to the annual Edelman survey) compared with governments (51%) and the media (50%). Taken at face value, that means people are more likely to trust the ads that pay for the news than they trust the news itself, or the government they elected.
Partly, this is down to complexity. Businesses that sell a service and are clear about what they offer are more readily understood and trusted than a state that withholds things from some people and gives them to others, all in exchange for taxes people have no choice but to pay. As a result, business has been able to foster trust far more easily than government can. In fact, the trust-based systems we use most often are no longer controlled by the state. Most of us now use apps and email to communicate, and cards and phones to pay.
That’s no bad thing. The future that Margaret Atwood wrote about in The Handmaid’s Tale, where women’s bank accounts were abruptly frozen by the government, is less likely to come true. Few of us want business to do the bidding of the state, as it does in China. But as we are discovering, giving Big Tech responsibility for making the rules on freedom of speech isn’t very satisfactory either. Nor are tech companies very good at recognizing when their products harm people, or what they could do about it without breaking their business model.
Powerful though it is now, the trust we have in business may prove to be surprisingly thin. When catastrophe strikes, as COVID-19 did in the spring of 2020, some businesses will quickly adapt to the new reality. But people will again turn to government to help them cope and know how to behave. Only then will we see whether institutional trust can hold up under extreme pressure.
People want to trust their neighbors and the institutions they rely on. We know this because those who have lost confidence in some aspect of government generally seek out a new community which they feel shares their values—whether in the real world or through social media and, potentially, the metaverse. And AI makes it easier than ever for them to do that. It offers the choices and autonomy that government does not. What chance does the democratic social contract have when so many more attractive contracts are available?
But perhaps the most frightening future for AI—and one which has been very little discussed—is the enormous temptation to use it to make decisions we do not want to take full responsibility for. We may tell ourselves that we’re only delegating these decisions because AI makes them faster or better than we do. Why send in a soldier when a robot will do the same job? But in truth, in the case of killer weaponry like drones, we’re doing it to avoid the ethical and moral trade-offs that would otherwise have made us think twice about acting at all. How can we make finely balanced judgments about whether to kill an enemy when we feel protected from the moral and physical consequences of our actions?
One solution, argues the military technology specialist Sorin Matei, might be to program killer robots with a sense of their own vulnerability. It’s an elegant idea, but not one that will appeal to military commanders. And it relies on trusting AI programmers to simulate vulnerability convincingly. We’d also have to trust our opponents to play by the rules and not try to hack our killer robots and make them switch sides. In wartime, fair play goes out of the window.
The institutions that do manage to win public trust will be those that are frank about what they are trying to do with AI and how they will stop it from perpetuating—even worsening—the inequalities in democratic societies. Most importantly, they won’t use it as a cop-out to avoid taking personal responsibility for decisions that are unpleasant or complex.
Sometimes we would like to equate trust with perfect transparency. But as technology and AI become more complex, full transparency places an intolerable cognitive burden on us. It would be as if an airline tried to reassure its passengers by sending them the details of its pre-flight checks. Trust demands a leap of faith—a leap that can only be made when the actions someone takes bear out the trust we want to place in them, and the repercussions of letting us down are severe.
But perhaps the most urgent thing governments should understand is that insecurity and fear breed distrust. Across the West, the past decade has seen a steady erosion of people’s confidence in the state. They no longer trust the health care sector, which is state-run in most rich economies except America, to treat them quickly enough, or the police to treat them fairly and respectfully. The law, as many see it, is administered in such a way as to favor the better-off. Meanwhile politicians, through the personal choices they make, have driven home the message that the state is inadequate.
This is having awful consequences. Those who can afford it increasingly hoard wealth, afraid that the state won’t pay for their needs when they are sick or old. There seems little point in paying into a common pot when the returns are uncertain. Chip away at the social contract and the basic expectation that the state will protect you from catastrophe, and people react fearfully. That was dimly understood during the pandemic, when older people were particularly vulnerable. The insight is already slipping away.
And crucially, a state whose people no longer trust it will not necessarily undergo radical or revolutionary change. The better-off may emigrate to places where they feel safer, or make themselves as independent of the state as they possibly can. They may give up on institutional trust, and instead try to revive societies based on interpersonal trust. They may establish walled-off communities online where they can find emotional or intellectual solace. For as long as they are healthy and can meet their physical needs, this may feel like enough.
Yet it is not necessarily too late to build up trust again. Most of us are still at the I trusted you stage of shock and disillusionment—still willing to be convinced that government could act to address the uncertainty we feel. The desire to place our trust in something that would deserve it will not go away. If our society fails to provide it, other states and virtual worlds will take its place. And the subjects of these new worlds may find they have no power at all to shape them.
This is an adapted excerpt from The Future of Trust by Ros Taylor, published by Melville House (and part of the FUTURES series). Copyright (c) 2024 by Melville House.