“Technology is people: built by people, managed by people, regulated by people, used by people. And all of us should have a say in what happens with it.” —Verity Harding
Verity Harding is a globally recognized leader at the intersection of technology, politics and public policy. Recently named by TIME Magazine as one of the 100 most influential people working in AI, Harding directs the AI & Geopolitics Project at Cambridge University’s Bennett Institute for Public Policy and is the author of the forthcoming AI Needs You: How We Can Change AI’s Future and Save Our Own. Ahead of the book’s publication, Harding was profiled by Tom Tivnan in The Bookseller, discussing why she wrote the book, what past scientific breakthroughs might teach us about the future of AI, and why it’s critical for the public to have a say in what this future looks like. The following is reprinted with the permission of The Bookseller:
Verity Harding in conversation with The Bookseller about the future of AI
By Tom Tivnan
To have a sensible debate on artificial intelligence, one that gives the needs of society as a whole at least as much weight as Big Tech’s, Verity Harding admits we might have to address “the Skynet problem”.
Skynet is the fictional AI in “The Terminator” and subsequent films in the franchise which, when it “gains awareness”, unleashes nuclear attacks on humankind and, as a consequence (perhaps equally devastating), Arnold Schwarzenegger’s career as a movie A-lister. That “AI is going to kill us all” view is no longer just the stuff of Hollywood and science fiction writers; it has seeped into the journalism around the issue.
Harding notes that reporting has come full circle since she first came into the AI world in 2014. She explains: “Back then you had Elon Musk and Stephen Hawking saying things about how dangerous AI was and that became the story.
“But after that period there was a lot of fantastic journalism looking at AI more through the lens of public good, trust and ethics. But what we’ve seen happen over the past 18 months or so is a renewed push that AI is somehow going to take over the world and kill everybody. To be fair, that view has been pushed by some tech experts, who are as much to blame as the media, and who have really skewed the debate in an unhelpful way.”
Harding does not shy away from the existential dangers machine learning could pose in her first book, AI Needs You: How We Can Change AI’s Future and Save Our Own (there is a fascinating, if somewhat hair-raising, section on how the technology may affect weapons, warfare and the war on terror), but she believes that is a side issue. Instead, AI Needs You is a manifesto and call to arms to get the public and politicians more engaged in what might be the most transformative technology for humans since the splitting of the atom, and maybe since the harnessing of electricity.
There are probably few people more qualified to write about how society, governments and Silicon Valley can come together to discuss the ramifications of AI. Harding’s working life started in politics as a special adviser to Nick Clegg when he was deputy prime minister, after which she worked for Google, first advising on European security issues before, in 2016, becoming head of policy at the tech giant’s AI research arm, DeepMind. She left Google in 2020 to set up her own tech consultancy, while becoming director of Cambridge University’s AI & Geopolitics Project. In September, Time magazine named her one of the world’s most influential people in AI.
Harding wrote the book in part because “I realise I’ve had this very privileged career at the centre of both politics and technology. So, I thought I could bring a huge amount of insight and context to the current moment. And I wanted to share that with a wider group of people in the hope that it would enable more people to become involved. AI is incredibly important and I want a broader range of voices in the discussion, not a narrower one.”
A big emphasis of AI Needs You is on examples of how transformative technologies have been handled in the past, which could serve as templates for AI’s future. Harding says: “Something that is really missing from the current AI discourse in the tech industry is a deep and wide political analysis, and a historical context, that could help explain how we got here and what we could possibly do next. I studied history at university, so I always look at things through that historical lens.”
For example, there is a fascinating chapter centred on the 1967 Outer Space Treaty, the United Nations agreement on the peaceful uses of outer space, which the United States and the Soviet Union signed despite being locked in the Cold War and a race to land on the moon. Almost 60 years on, that agreement is still in place and remains the basis for the international space law that regulates everything from nations’ satellite usage to missions to Mars.
Harding says: “There is this narrative currently that AI is some arms race that must be won. There is some truth to that, and it’s a big fight, and it can seem frightening. But what I was trying to show in that space chapter was that the world was pretty frightening back then, too. The Cold War carried a very real threat of nuclear war, and space travel itself, after all, was technology based on weapons developed in the Second World War. And yet progress was made: the world was able to come together for the greater public good.”
It should be noted that while a decent portion of the book is aimed at that cross-section where governments and Big Tech meet on AI, most of it is geared towards galvanising members of the public to think of ways they can engage in shaping the technology’s future. Harding says: “[US vice-president] Kamala Harris made a good point recently about yes, we can look at the existential risk to all humans of AI in the future, but it’s the existential crises in people’s lives now that are really just as important: it’s an existential crisis for someone who loses their job because of AI, or an existential crisis for a woman who has been abused by deepfake revenge pornography. Those people on the sharp end of AI need to have as much of a voice as those in Silicon Valley.”
Harding was born and raised in Bournemouth (“a lovely place to grow up; not very mind-expanding”), read modern history at Oxford and later did a history and politics fellowship at Harvard. She did not really yearn to get into politics, but a friend suggested she would be a good fit to intern for an MP, who turned out to be Clegg, then the Liberal Democrats’ shadow home affairs spokesman. She remains close to Clegg who, of course, after politics also moved into tech and is now Meta’s (née Facebook’s) president of global affairs.
The difference between Westminster and Silicon Valley, Harding says, is that “MPs are more grounded in reality. I know that may sound strange to some people, but politicians are meeting people from all walks of life all the time. And they have their hands on the levers of power and therefore they know what is and what is not possible. The tech industry has this pervasive West Coast libertarian hubristic streak that makes them think they have the ability to just do something immediately and change everything.”
Harding pauses and adds: “I wanted to write the book for those not in tech to feel empowered—that AI isn’t change that’s just coming towards us like a train, and we have to passively receive it and accept it. Technology is people: built by people, managed by people, regulated by people, used by people. And all of us should have a say in what happens with it.”