AI, or the simulation of human intelligence by machines, was not long ago the stuff of speculative science fiction. It’s safe to say that the rapidly advancing AI of today goes far beyond the speculative imaginings of earlier generations: it can learn, reason, self-correct, and even engage in creative endeavors once considered uniquely human. AI’s role in everyday life is ever-evolving, with significant implications for how we work and live, and for fields from education to healthcare. As this powerful technology is incorporated into more of the services and products we rely upon, here are some books that can help us embrace human agency and navigate this new digital age.
Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.
In AI Needs You, Verity Harding, one of TIME’s 100 Most Influential People in AI, gives us hope that we, the people, can imbue AI with a deep intentionality that reflects our best values, ideals, and interests, and that serves the public good. AI will permeate our lives in unforeseeable ways, but it is clear that the shape of AI’s future, and of our own, cannot be left only to those building it. It is up to us to guide this technology away from our worst fears and toward a future that we can trust and believe in.
When we make decisions, our thinking is informed by societal norms: “guardrails,” like the laws and rules that govern us, that shape the choices we make. But what are good guardrails in today’s world of overwhelming information flows and increasingly powerful technologies, such as artificial intelligence? Based on the latest insights from the cognitive sciences, economics, and public policy, Guardrails offers a novel approach to shaping decisions by embracing human agency in its social context.
We are at a crossroads in history. If we hope to share our planet successfully with each other and the AI systems we are creating, we must reflect on who we are, how we got here, and where we are heading. The Importance of Being Educable offers a provocative new exploration of humans’ extraordinary facility to absorb and apply knowledge. The remarkable “educability” of the human brain can be understood as an information-processing ability. It sets our species apart, enables the civilization we have, and gives us the power and potential to set our planet on a steady course. Yet it comes hand in hand with an insidious weakness. While we can readily absorb entire systems of thought about worlds of experience beyond our own, we struggle to judge correctly what information we should trust.
Artificial intelligence and machine learning are reshaping our world. Police forces use them to decide where to send officers, judges to decide whom to release on bail, welfare agencies to decide which children are at risk of abuse, and Facebook and Google to rank content and distribute ads. In these spheres, and many others, powerful prediction tools are changing how decisions are made, narrowing opportunities for the exercise of judgment, empathy, and creativity. In Algorithms for the People, Josh Simons flips the narrative about how we govern these technologies. Instead of examining the impact of technology on democracy, he explores how to put democracy at the heart of AI governance.
We are at a monumental turning point in human history. AI is taking intelligence in new directions. The strongest human competitors in chess, Go, and Jeopardy! have been beaten by AIs, and AI is getting more sophisticated by the day. AI research is also reaching inside the human brain itself, attempting to augment human minds. In Artificial You, philosopher Susan Schneider argues that these undertakings must not be attempted without a richer understanding of the nature of the mind. An insufficient grasp of the underlying philosophical issues could undermine the use of AI and brain-enhancement technology, bringing about the demise or suffering of conscious beings. Examining the philosophical questions lying beneath the algorithms, Schneider takes on AI’s thorniest implications.
A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies.
How does a neural network become a brain? While neurobiologists investigate how nature accomplishes this feat, computer scientists interested in artificial intelligence strive to achieve it through technology. The Self-Assembling Brain tells the stories of both fields, exploring the historical and modern approaches taken by the scientists pursuing answers to the question: What information is necessary to make an intelligent neural network?
On May 11, 1997, millions worldwide heard news of a stunning victory, as a machine defeated the defending world chess champion, Garry Kasparov. Behind Deep Blue tells the inside story of the quest to create the mother of all chess machines and what happened at the two historic Deep Blue vs. Kasparov matches. Feng-hsiung Hsu, the system architect of Deep Blue, reveals how a modest student project started at Carnegie Mellon in 1985 led to the production of a multimillion-dollar supercomputer.
With the rapid development of artificial intelligence and labor-saving technologies like self-checkouts and automated factories, the future of work has never been more uncertain, and even jobs requiring high levels of human interaction are no longer safe. The Last Human Job explores the human connections that underlie our work, arguing that what people do for each other in jobs built on those connections is valuable and worth preserving.