Prof Robert Phillips: There is no greater moral risk in the world than diffused responsibility

Professor Robert Phillips is the George R. Gardiner Professor in Business Ethics and Professor of Sustainability at York University’s Schulich School of Business in Canada. Here he talks about the diffusion of technology, the diffusion of responsibility, and what both mean for society.
2022-05-23
by Robert Phillips (Gourlay professor)

[Image: people walking with blurred faces and squares drawn around their heads]

An autonomous vehicle swerves to avoid a crash likely to cause harm to the passengers, but instead hits a pedestrian on the sidewalk.

A social media platform develops an algorithm to detect inflammatory language, yet the platform continues to help organise violence.

Facial recognition software replicates common race-based biases in policing.

Devices with screens are designed to increase engagement in ways that strike many as akin to addiction.

People are hurt in each of these cases, but by whom? Or what? Are the algorithms responsible?

Moral questions have haunted innovation since at least Prometheus. But whereas responsibility for bringing fire (and travails) to humankind can be traced directly back to one individual, today’s technologies bear the fingerprints of myriad contributors. The mind-numbing numbers of people involved in the development, dissemination, and use of today’s technologies make diffusing responsibility nearly irresistible. But there is also no greater moral risk in the modern world than diffused responsibility.

In the context of technology, diffusion typically refers to the speed at which a given technology spreads through society. Information technology is particularly reliant on the network effects of diffusion: as the number of users increases, so too does the value of a given platform. Armies of executives, developers, and behavioural scientists have dedicated national GDPs’ worth of resources to developing the algorithms that draw users in and keep them engaged for as long as possible. But who is responsible if these algorithms exploit cognitive or behavioural weaknesses? Who’s responsible if a technology is biased in ways that adversely affect minority groups? Who’s responsible if an autonomous vehicle makes a nanosecond decision to hit a pedestrian in order to save the passenger? As technology is diffused, so too is responsibility.

We work together in organisations because we can do more in concert than we can as individuals. However, collective endeavours universally generate their own distinctive ethical challenges. When things go right, there are questions of who gets how much of the jointly created value. When things go wrong, assigning individual responsibility is even thornier. Beyond the familiar fairness concerns of holding people accountable for what they did, things in organisations often go awry because something important wasn’t done. There are dangers in the interstices of team projects, including the unintended interactions of previously distinct elements and, vitally, the failure to recognise something as a problem in the first place. Nowhere is this failure more pronounced than in new technologies.

Beyond holding other individual actors responsible, it is all too common to hear blame cast on the inanimate. Most of us have, at one time or another, felt the frustration of a ‘company policy’ with no soul to be damned and no body to be kicked. More recently we have added ‘algorithms’ to this list. But company policy, algorithms, and the other inanimate repositories of blame are artificial – they were created by humans, and responsibility falls, ultimately, on the Frankensteinian shoulders of their creators.

Novel and advanced technologies have always generated moral anxiety and unanticipated consequences. In the past, however, these anxieties were more isolated and the consequences were more contained. The spectre of nuclear disaster plagued generations, but beyond quickly and efficiently crawling under a desk, responsibility for the diffusion and effects of nuclear technologies was confined to a relatively small number of experts and political leaders. Responsibility for the technologies we are considering here is particularly slippery because of their ubiquity.

It is hard enough to assign responsibility when the development team has hundreds or thousands of people working on specialised components. Add billions of users – and the remarkable human capacity to find new ways to (mis)use and abuse one another – and the list of blameworthy others generates a most wicked problem indeed.

Implicit in the way many people think about responsibility is a sense of exculpation. That is, once someone responsible is found, everyone else is off the hook. But this is not how responsibility works. Responsibility is not a zero-sum game. It is infinitely divisible, and there is more than enough to go around.

Technology itself cannot bear responsibility any more than the literal scapegoats of old. PEOPLE develop technology, PEOPLE buy, sell, use, and profit from technology, and only PEOPLE can be responsible.

Society’s capacity for avoiding, addressing, and ameliorating technology’s harms will rely on avoiding the easy temptation to diffuse responsibility onto others – or even onto the tech itself. As the diffusion of technology continues to reshape society, the diffusion of responsibility looms and menaces. Our willingness to share responsibility will determine whether technologies will be bound or unbound and how we understand modernity.

Professor Robert Phillips is one of the visiting professors currently in Australia for the Gourlay Ethics in Business week. Browse the free public events.
