Where do We Go Now, Siri?

Published as part of the National Gallery of Victoria’s Art Triennial 2020 five-volume publication (December 2020). Republished here with permission.

Ellen Broad
17 min read · Jan 8, 2021

--

In December 2020 the National Gallery of Victoria launched its second Art Triennial, both onsite in Melbourne and with glimpses of the projects online. It looks incredible, and I’m hoping (pandemic permitting) to be able to visit in person before it closes in April. What follows is an essay the NGV commissioned as part of their triennial publication, exploring the role digital technologies play in our lives and the influence and impact of artificial intelligence.

Images of the Art Triennial publication (source: NGV)

I wrote it in April or May 2020, and so the COVID-19 pandemic that had uprooted our lives was, unsurprisingly, at the forefront of my thoughts. Its influence is evident from the essay’s opening lines. But after submitting it, and as my city emerged from lockdown, I grew worried that it was too preoccupied with the pandemic, drawing connections and positing the potential for lasting changes to how we view and govern digital technologies and AI in ways that, in six months’ time, would look hopelessly unrealistic. Maybe everything would go back to normal, and this pandemic, which felt so significant while I was in lockdown, sitting in front of a computer screen for ten hours a day, doom-scrolling social media and wondering when I would be able to see my family again, would barely register as a footnote in history.

Now, rereading the essay in this first week of 2021, it feels the opposite: too tentative, almost naive. Throughout, it is as though I am trying to capture and observe the potential ripple effects of a tidal wave from some distance away, pretending that I’m not still being tossed and turned in its churn. At the end of it, I wrote:

By the time you read this, lockdown restrictions will have eased, and in the ebb and flow of life in the shadow of the coronavirus, we will begin to re-orient ourselves.

Now I wonder who the “you” is that I was imagining. It’s January 2021, and around the world cities are still in lockdown. In Australia, complex and disjointed internal border permit processes have sprung up in nearly every state and territory, as states raced to respond to a COVID-19 outbreak originating in Sydney’s northern beaches over Christmas. Yesterday, Trump supporters ransacked the US Capitol in footage live-streamed around the world. Trump was kicked off Twitter, Facebook and Instagram, with the latter two platforms indicating they will maintain his suspension until at least the end of his presidency. Yesterday my social media feeds were a swirl of screenshots and media grabs decrying the violence. Today there are memes encouraging me to “pick my fighter” from images of a variety of Capitol rioters.

I am unmoored. Re-orientation feels a long way off. Nonetheless, I’m sharing the essay here for a sense of the changes occurring in the early months of 2020 that seemed important to me at the time. In hindsight, many of these shifts simply gathered momentum throughout the year, and now the question for 2021 and beyond seems not so much which path to choose (as though we could choose one), but where the path we are barrelling down will lead us, and what we hope to encounter and discover along the way. My brain is a fog right now. I hope that wherever you are in the world, and whatever state your own brain is in, you can see glimmers of things that are new and hopeful on your horizon.

Where do we go now, Siri?

In the weeks following September 11, billionaire Warren Buffett wrote to investors, ‘You only find out who is swimming naked when the tide goes out’. In times of profound upheaval, he observed, weak links and shaky foundations are exposed — across business, public institutions and society. What once seemed fixed becomes unfixed. We find ourselves renegotiating every relationship we have: with our workplaces, our schools, our governments and our families. At the same time, new channels beneath the surface of our lives are exposed, as are shipwrecks and rocks to route around. When the tide goes out, there is a chance to change direction. But where do we go?

Three months ago, it felt like the current propelling forward discussions of artificial intelligence (AI) was weakening. The breathless excitement of the early 2010s, when it seemed as though most of the world’s problems could be solved with massive volumes of data and high-powered computers, had been subsiding. Connecting the whole world no longer seemed as unambiguously thrilling.

Anxieties about robots rendering humans redundant seemed to have calmed down, too. Five years ago, it wasn’t unusual to see headlines predicting AI would cause massive job losses across wide swathes of the economy. People were urged to look up estimates online of how long it would take for their job to be replaced. Though anxiety replaced excitement, the sense remained that machine intelligence surpassing human intelligence was not only possible, but also inevitable. No matter the context, no matter the technology, no matter the complexity associated with articulating what ‘intelligence’ meant, AI would simply end up being better.

Then came the stories about AI making mistakes and misunderstanding context. Stories about bias in automated systems, and machines making catastrophic errors.

In 2016, public-interest journalism organisation ProPublica published its investigation into the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, algorithm, a proprietary system used in parole and sentencing assessments across the United States. ProPublica showed that COMPAS wrongly labelled black defendants as being at high risk of reoffending far more often than it did white defendants. Its investigation, and the public debate it ignited with Northpointe, the company behind COMPAS, launched an entire sub-field in machine learning dedicated to quantitative measures of ‘fair’ algorithms.
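To make the idea of a ‘quantitative measure of fairness’ concrete, here is a minimal, hypothetical sketch in Python (the records are invented for illustration; this is not ProPublica’s data or code, nor Northpointe’s model). It compares, for each group, how often defendants who did not reoffend were nonetheless labelled high risk, which is the kind of disparity at the centre of the COMPAS debate.

```python
# Illustrative sketch of one common quantitative fairness check:
# comparing false positive rates (people labelled 'high risk' who did
# not go on to reoffend) across groups. All records below are invented.

records = [
    # (group, predicted_high_risk, reoffended)
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows, group):
    # Among people in `group` who did NOT reoffend, what share were flagged high risk?
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("black", "white"):
    print(g, round(false_positive_rate(records, g), 2))

# A large gap between the two rates is one (contested) signal of disparate treatment.
```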

Between 2018 and 2019, two fatal accidents involving Boeing 737 MAX aircraft were traced to faulty sensors and shoddy software, reigniting concerns about the safety of automated systems. Critics asked whether humans were being engineered out of systems, unable to take back control when AI went wrong.

In the United States, research led by computer scientists Joy Buolamwini and Timnit Gebru investigating facial-recognition software unearthed persistent problems identifying non-white, non-male faces, increasing concerns about its use by law enforcement. Driverless cars — which The Guardian had predicted in 2015 would put us all in the back seat by 2020 — were still struggling to move beyond test zones: too many human motorists, cyclists and pedestrians; too much confusing wildlife. In 2018, a pedestrian named Elaine Herzberg was killed by an autonomous Uber test vehicle while pushing her bike across the road in Tempe, Arizona. A year later, Uber’s chief scientist, Raquel Urtasun, conceded that people would be waiting a long time for driverless cars to be on the road, at scale, around the world.

Stories like these made room for conversations about designing technologies more responsibly and ethically. Governments began inquiries into the social and legal implications of new technologies, eliciting concerns around bias and error in AI. Around the world, new think tanks and institutes proliferated, with names calling for more Responsible Technology, more Humane Technology, AI for Good. Selling ‘ethical AI’ solutions (using AI) became its own business model. People like me were invited to write and speak more and more about the human designers behind AI systems, about expectations of greater care and conscientiousness, about the need for real scrutiny, and real consequences.

2020 began with an intensifying focus on bad AI practices, and changing expectations. There was a flurry of stories, led by The New York Times, about Clearview AI, a murky, untested and unregulated company selling ethically (and legally) dubious facial recognition services to law enforcement agencies. At Davos, the annual meeting of the World Economic Forum in the Swiss Alps, multinational tech companies talked about ‘guiding regulation’ of AI. On 19 February 2020, the European Commission launched a white paper titled ‘On artificial intelligence — a European approach to excellence and trust’, committing to a framework for more trustworthy AI built on ‘European values’.

Two days later, the first European cases of COVID-19 were confirmed.

In late March 2020, Justin E. H. Smith, a professor of history and philosophy of science in Paris, published an essay on The Point called ‘It’s all just beginning’. It had been less than a week since cities across Australia had closed their schools, their restaurants, their theatres. In Italy, nearly 10,000 people were dead from COVID-19.

At the Australian National University, we packed up our offices, sticking signs on doors and windows informing anyone walking past that we had left the building and that (uselessly, for anyone trying) we would not be answering our phones. I watched a colleague place the building’s communal pot plants in her car: ‘I don’t know when we’ll be back again’, she said.

‘We are all going to have to rethink everything’, Smith wrote in his essay.

“These are not the end times, but nor are they business as usual, and we would do well to understand that not only is there room for a middle path between these, but indeed there is an absolute necessity that we begin our voyage down that path.”

At that point, I was rethinking AI altogether. There was no middle path. In the wake of the coronavirus, all the conversations about AI felt like flotsam, distractions. I felt like I was one of the ones caught out swimming naked: even though my work had been about building more responsible foundations for AI, I benefited from the flotsam. ‘The spectacle of innovation’, a colleague called it. I was tired of it. I was also tied to it.

As the pandemic spread across continents, the old breathless excitement about AI resurfaced. AI could act as a surrogate doctor, detecting pneumonia from CT scans. It would predict the transmission of disease faster than any human contact tracing process. ‘Pandemic drones’ would monitor citizens on the street, recording sneezes and high temperatures. Some blueprints for contact-tracing technologies, birthed from computer science with no lineage in public health, imagined automating the contact tracing process altogether. Untried, untested technologies were being proposed and deployed at speed by governments racing to respond to the pandemic, while reporters and researchers struggled to sort the most promising possibilities from the implausible.

I started Googling ‘mid career change medical doctor’.

While stories about machines and automata that come to life have been around for centuries, the term ‘artificial intelligence’ was only coined in 1956, at a workshop at Dartmouth College, to describe a new research agenda for building reasoning machines. At the time, ‘artificial intelligence’ was chosen mainly to get around existing terms — like cybernetics and information processing — that would have chained researchers to the ambitions and politics of existing disciplines and philosophies. The new field of artificial intelligence, workshop founders wrote, would be based on the philosophy that ‘every aspect of learning or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it’.

With the Cold War in the background, it was natural that one of the new field’s early focuses was machine translation: intercepting Russian intelligence and scientific research, and translating it faster and more accurately than human translators could. Millions of dollars were invested in laboratories dedicated to solving machine translation. But by the late 1960s, excitement had given way to frustration and scepticism. Government and industry funders pulled their investments. The first ‘AI winter’ had arrived. Since then, for every period of great promise, an AI winter has followed. Coming into 2020, as stories of over-hyped, error-prone, biased systems gathered steam, some industry figures warned of another AI winter to come.

Today, the phrase ‘artificial intelligence’ is mainly a marketing term. Whether you’re trying to sell a story, a conference, an idea or a service, ‘artificial intelligence’ transfixes. It both conceals and simplifies. It fires the imagination. When it comes to describing what a service or an idea actually is or does, there’s usually a more mundane term to explain it, each with its own field and subfields. Machine learning. Robotics. Virtual reality. Data mining. Technologies that already exist, many of which we use every day, use a range of computational techniques that fall under the umbrella of artificial intelligence, but which in practice aren’t called ‘artificial intelligence’. We call them search engines, drones, web stores, streaming platforms, social networks, voice assistants. There is no thing that is artificial intelligence. Artificial intelligence is the promise of something that hasn’t been invented yet.

Cognitive scientist Marvin Minsky, one of the organisers of that Dartmouth workshop, later coined the term ‘suitcase words’. Suitcase words contain a multitude of ideas, disciplines, definitions and approaches — words like ‘consciousness’, or ‘intuition’. At the time, Minsky thought of these as words to be unpacked, like a suitcase, so that the distinct interpretations packed inside could be identified and separated: a key part of the process of teaching a machine to intuit, or giving it consciousness. Over time, ‘suitcase word’ has become a useful label for any number of expansive, mystifying terms. ‘Artificial intelligence’ is a suitcase word: it evokes a range of disciplines, tools, techniques and computational systems, but also our fears of living machines, our anxieties about replacement and irrelevance, and aspirations to mastery of what it means to be ‘human’.

‘A few smart moves to banish the mid-career doldrums’, the Financial Times announced in May, nearly two months into lockdown. Futurist David Bodanis cautioned against what he termed ‘the midlife crisis gambit’, brought on by economic and social upheaval. Better instead, he advised, to shift direction more incrementally and strategically, like a knight on a chessboard, and reconsider your domain from a fresh perspective.

My husband, worried about my sudden interest in going to medical school, emailed me the article.

For years now, a range of researchers, institutes and advocacy groups have been beseeching people to shift their focus from the technology itself to the humans behind the technology, and the world within which it exists. This isn’t new thinking. It predates the field of artificial intelligence. The idea that technical systems couldn’t be divorced from the human and environmental systems that created them (and that they shaped in turn) was core to cybernetics, one of the disciplines from which AI emerged. Indeed, one theory as to why ‘artificial intelligence’ became the preferred term for the 1956 Dartmouth conference was to avoid having to give deference to Norbert Wiener, the opinionated founder of cybernetics.

There was a time when cybernetics preoccupied mathematicians, computer scientists, psychologists, anthropologists, writers, filmmakers, journalists and policy makers alike. Like ‘artificial intelligence’, ‘cybernetics’ was more than a set of methods — it was a philosophy. It gave rise to sub-disciplines, like mechatronics and systems engineering. But for whatever reason, its reflections on the relationships between technical systems and the human and environmental systems they sat within and interacted with didn’t have as great an influence on the design and teaching of computing technologies as computer science became a discipline in its own right. Over the 1970s and 80s, the focus turned inwards. Building faster, more efficient, mechanised systems, at scale, became the design philosophy shaping most applications of AI. The humans designing AI — their perspectives, values, frailties — were hidden away behind the screen, like the Wizard of the magical Land of Oz. Even through the cycles of AI springs and AI winters, as stories of error, misuse and disappointment followed stories of scientific breakthrough, the focus stayed on the technology itself, rather than the human, institutional and environmental forces shaping it. That is, perhaps, until now.

Among the stories about the potential for AI to trace, predict and combat the spread of the coronavirus during lockdown this year, glimpses of more complex, human-intensive systems emerged. Some of the largest technology companies in the world were forced to admit that their automated systems, deployed at scale, still relied on human labour to perform effectively. Until the pandemic that labour was typically treated as tangential, not essential, to how well their systems worked.

In March, Facebook sent its 15,000 human content moderators home. Human content moderators, paid as contractors to monitor and remove illegal and offensive content, are used across every major platform on which people can publish, share and consume content instantaneously at scale. They work alongside — but usually in the shadow of — automated content moderation algorithms. Facing a pandemic and the temporary loss of its human workforce, Facebook CEO Mark Zuckerberg warned platform users to expect more mistakes moderating content. The algorithmic content moderators were too blunt. They struggled with context.

Humans, the pandemic made clear, are still ultimately driving the complex, muddy work of online content moderation. Despite this, human content moderators face more precarious working conditions, more serious health issues and lower pay than the engineers tasked with designing their automated moderation counterparts. During lockdown, news emerged that Facebook had agreed to pay US$42 million to its human content moderators as compensation for mental-health issues developed through their work.

The lockdowns around the world created abrupt and seismic changes in human behaviour — and these stymied automated systems, too. Product distribution companies reported that automated inventory systems were wildly confused by panic buying. Amazon, in efforts to ease pressure on its own warehouses, which were struggling to fulfil orders, manually changed its buy algorithm to incentivise purchasing from suppliers shipping from their own stores. A credit-card company using AI to detect fraud had to step in and tweak its service, to account for the rise in purchases of power tools and gardening supplies. At a GM manufacturing facility in Indiana that had been retrofitted for the rapid construction of ventilators, human engineers formed an assembly line mounting circuit boards and plugging in hoses. ‘Because of the urgency and speed’ required to respond to the coronavirus, GM’s head of global manufacturing told The New Yorker, ‘automation was kept down’.

The conditions that have allowed large technology companies to flourish rest on an explicit expression of the idea that technology is neutral, and that speed, scale and efficiency are desirable and unchallengeable. In the United States in 1996, the Communications Decency Act introduced Section 230, which made internet platforms immune from liability for the content their users post.

While Section 230 is explicitly focused on online content distribution, the ethos it enshrines — that the makers of a digital technology should not be held responsible for how it is used by others — has shaped the industry’s prevailing approach to the design of technology (and the limits of its accountability). Without expectations that designers would be held responsible for harms indirectly made possible by their technologies, organisations have been free to focus on maximising speed, scale and efficiency. Everything can — and should — be automated. ‘Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’, as the fathers of AI put forward in the proposal for the 1956 conference. What humans do with that speed and scale, and the ways speed and scale could prove time-consuming or costly, was not something for a system’s human creators to adjudicate.

Over the years, we have been coming to grips with some of the consequences of this philosophy. There’s growing awareness that speed and scale haven’t just given us more efficient machines, and more instantaneous communication and consumption. They have also fuelled radicalisation, new forms of harassment and human exploitation.

When the tide goes out, there is a brief moment to pause, survey the landscape and reconsider your direction. At times, over the past few months, there have been glimpses of changing societal expectations of a more aware, more human AI industry. There have also been glimpses of troubling new kinds of reliance on shadowy technologies.

Around the world, with workplaces and schools closed down for weeks, citizens were pushed into new forms of technology dependency. At the same time they were exposed to more sinister aspects of platforms predicated on scale and speed, without comprehensive safety practices.

Video platforms, for example, gained millions of new users overnight. ‘Zoom bombing’ — the phenomenon of uninvited participants joining online video calls — skyrocketed. In one Queensland virtual classroom, eleven-year-olds were twice interrupted by a hacker showing graphic pornography. In the United States, the FBI reported more than 195 Zoom-bombing attacks, where people on calls were exposed to explicit child sexual abuse material.

‘We have had missteps’, Zoom CEO Eric Yuan said. These kinds of attacks could not have been foreseen, the company argued, as its service — designed for corporate videoconferencing — became home to virtual classrooms, weddings, courtrooms, dinner parties, funerals, soccer matches, theatres and places of worship. ‘The risks, the misuse,’ the company wrote in a statement, ‘we never thought about that’. Then Dropbox engineers revealed that they had alerted Zoom to serious vulnerabilities in its infrastructure two years earlier. The New York attorney-general opened an investigation into Zoom’s privacy and security practices. Tens of thousands of words across media websites were dedicated to whether Zoom could have predicted the abuse of its service, and whether imagining and mitigating patterns of misuse and abuse, not just gap-filling as issues arose, was a core responsibility of a technology designer.

Elsewhere, the pandemic has forced organisations that had so far preferred to pretend they had no influence over the behaviour of their users to take on greater responsibilities. In early March, Twitter announced efforts to ensure citizens would only be shown trusted, authoritative sources of information about the pandemic on its platform, and to use AI to try to identify and remove disinformation. It started shutting down thousands of bot accounts. When US president Donald Trump tweeted debunked conspiracy theories about mail-in voting fraud, the platform began adding footnotes to his tweets directing people to trusted sources of information about mail voting. Outraged, the President issued his Executive Order on Preventing Online Censorship, threatening to narrow the protections platforms enjoy under Section 230, the ‘twenty-six words that created the Internet’.

Trump’s intent was not a more civic-minded infrastructure. He wanted to put an end to the perceived censorship of far-right commentators on social media platforms, who decried limitations on their ability to promulgate racist, sexist, conspiracy-fuelling content. Some advocacy groups condemned Twitter for doing too little, too late. But by acting, and by igniting debate about the legal infrastructure that supported technology companies doing nothing, Twitter thrust the human decisions and systems shaping AI squarely into the spotlight. A seemingly fixed philosophy, it turned out, could be unfixed. Perhaps the norms and values shaping AI systems are up for renegotiation, after years of pretending there was no possibility of negotiation at all.

This could be the middle path, perhaps, that Smith hinted at in the first tumultuous weeks of the pandemic. Quitting AI as a field altogether no longer seems that sensible. I’ve stopped looking at medical school admission questions. There exists an opportunity, perhaps, to negotiate a more human-centred direction for artificial intelligence.

The etymological origins of cybernetics, one of the fields that gave shape to artificial intelligence, lie in the Greek kybernetes, the steersman: one who steers, navigates or governs. Perhaps in the new world that emerges from the pandemic, we will expect and demand that designers of AI behave less like speedboats and more like stewards, responsible not only for keeping us moving forward, but also for listening to our voices when they seek a change of course, and for staying away from the rocks. This is the philosophy being put into practice by the organisation I’m a part of, the 3A Institute at the Australian National University, founded by Distinguished Professor Genevieve Bell, as it works to bring a new branch of engineering into existence, one shaped around designing AI systems safely, sustainably and responsibly.

There are still signs that we could return to business as usual. Even as the possibility of renegotiating the principles on which AI is built opens up to us, there’s added incentive to push out AI solutions fast to counter pandemic-related problems. Investment in ‘fever detection’ software and infrastructure at shopping centres and in public spaces is ballooning, even as experts express scepticism about its effectiveness. Countries are investing in mobile infrastructure to record COVID-19 immunity certificates and surveil people quarantining inside their homes. My own university is exploring the use of AI systems to monitor students via webcams for physical movements that could indicate cheating, as it grapples with what it means to ask thousands of students to sit exams remotely.

The effectiveness of these systems has not been proven. They’re brand new, untested. They could be carefully designed, evaluated and monitored. Or they could be flotsam, distractions. Sandcastles pretending to be palaces.

In the coming weeks and months, the tide will come in again. By the time you read this, lockdown restrictions will have eased, and in the ebb and flow of life in the shadow of the coronavirus, we will begin to re-orient ourselves.

Where will we find ourselves, a year from now? Right now, it feels as though we could go in any number of directions. Everything is up for negotiation, including the values and philosophies that shape the technologies we interact with, what we think those technologies say about what it means to be human, and how we expect them to reflect our humanity. Much is uncertain. But in the spaciousness of uncertainty, as writer and historian Rebecca Solnit has observed, is room to act. To choose a different path. I hope I meet you there.

--

Ellen Broad

ellenbroad.com. 3A Institute, Australian National University. Data ethics | open data | responsible technology. Board game whisperer @datopolis.