Listening to nature is an ancient art. But for most of human history, our ability to listen to other species was constrained. Humans are unable to hear many of the myriad sounds made by other species. Bats are some of the most talkative creatures in the animal kingdom, but their ultrasonic cries are largely above the human hearing range. Elephants and whales are some of the loudest creatures on Earth, but their low-frequency infrasonic calls are inaudible to the naked human ear.
Discoveries of non-human sound in the 20th century were controversial. The scientists who discovered bat, elephant, and honeybee vocalizations were by turns laughed at, sworn at, shaken by the lapels at conferences, and deprived of funding. Many of their colleagues held to the view that most creatures neither made nor heard sound. As it turns out, however, Western scientists were the ones who were hard of hearing.
Using digital devices no larger than a smartphone, scientists are now eavesdropping on Earth, from the Arctic to the Amazon. Digital listening networks are being installed, some the size of a small pond, others stretching across entire oceans. These recording devices listen discreetly and continuously, even at night, across the full range of sonic frequencies and in places that are difficult for humans to reach. The recordings confirm that the natural world is alive with sounds made by animals, insects, and even plants.
By using artificial intelligence to analyze the tsunami of bioacoustics data now being generated, scientists are learning some remarkable things. Entirely new species have been discovered; in the depths of the Indian Ocean, a new population of blue whales previously unknown to science was revealed by its unique songs. Species previously thought to have gone extinct have been heard, giving hope to conservationists; a camera can only spot animals walking down the path, but a digital recorder hears them hiding in the bushes.
Scientists have also begun to use digital technologies to decode non-human vocal communication. Deciphering individual bat sounds from the cacophony of a cave is impossible for a human listener but relatively easy for a trained artificial intelligence algorithm. Using digital bioacoustics, researchers have found an expanding list of species that refer to one another with individual names (dolphins, belugas, and bats). Some species also use specific referential signals, just as humans use words. Elephants have specific vocal signals for “honeybee” and “human”, and their vocalizations are so nuanced that specific signals differentiate between threatening hunters and non-threatening passersby. Turtles utter unique vocal signals just before they hatch, which scientists believe are used to coordinate the mass births for which they are famous. These findings lend weight to long-debated, controversial claims about the existence of language in non-human species.
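To give a flavor of how such analysis begins (this is a toy illustration, not any research team's actual pipeline), a first step is often sorting recorded calls by frequency band, since elephant rumbles, human speech, and bat chirps occupy very different parts of the spectrum. The thresholds and NumPy-based approach below are illustrative assumptions:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency component (Hz) of a 1-D audio signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def classify_call(signal, sample_rate):
    """Crudely bin a call by its dominant frequency band.

    Thresholds are rough illustrative values: below ~100 Hz for
    infrasonic rumbles, above ~20 kHz for ultrasonic chirps.
    """
    f = dominant_frequency(signal, sample_rate)
    if f < 100:
        return "infrasonic"
    elif f > 20000:
        return "ultrasonic"
    return "audible"

# Synthetic test tones standing in for field recordings.
rate = 96000  # high sample rate so ultrasound is representable
t = np.linspace(0, 1, rate, endpoint=False)
rumble = np.sin(2 * np.pi * 20 * t)     # 20 Hz, elephant-like rumble
chirp = np.sin(2 * np.pi * 45000 * t)   # 45 kHz, bat-like chirp

print(classify_call(rumble, rate))  # infrasonic
print(classify_call(chirp, rate))   # ultrasonic
```

Real systems go far beyond this, of course, using spectrograms and trained neural networks to separate overlapping calls, but the underlying idea is the same: the machine hears frequencies that we cannot.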
Even more astonishing, scientists have demonstrated that species without any apparent means of hearing are also responsive to sound. Coral and fish larvae navigate across miles of open ocean to healthy reefs and show a preference for their home reefs. Although scientists do not yet know how, these tiny creatures imprint on the sonic signature of their natal reef at the moment of their birth, like a marine lullaby.
Other experiments demonstrate that some plants make ultrasonic sounds at frequencies audible to insects and bats. In one fascinating experiment, flowers increased their production of nectar when exposed to the sound of buzzing bees, flooding themselves with sweetness as if in anticipation. In another experiment, plant seedlings grew their roots towards the sound of running water, even though no moisture gradient was present.
Non-humans communicate complex information through sound. Does this mean that humans may one day converse with other species? Digital technologies may provide an answer to the question. Scientists are now using artificial intelligence to attempt to break the barrier of interspecies communication. One team is building a dictionary in East African Elephant. Two other teams are working on Sperm Whalish. Google Translate does not feature Western Australian Dolphin yet, but perhaps one day it will.
Like many digital innovations, this research is generating controversy. While digital bioacousticians often work with citizen scientists via crowdsourcing apps, Big Tech companies are also involved in acoustic data harvesting. Large audio datasets of non-humans are useful for testing natural language processing algorithms, but this often occurs without the usual ethics protocols applied to human data. A related issue is data ownership. Who owns the data, particularly when it is gathered, often without consent, on Indigenous traditional territories? Indigenous data sovereignty is now being asserted around the world, shifting our understanding of environmental data rights, privacy, and ownership.
Applications of bioacoustics are also of concern. Conservationists laud the innovative use of bioacoustics to measure ecosystem health, monitor endangered species, and even protect national parks by detecting and fending off poachers. But others warn of the dangers of eco-surveillance capitalism, which poses a risk of human bycatch; as scientists build global listening networks for environmental purposes, will they also eavesdrop on human conversations? And some innovators are attempting to use our newfound knowledge to command and even domesticate previously wild species. Will bioacoustics create a new frontier for the accumulation of nature, mediated by digital technologies?
As humanity awakens to the resonant mysteries of non-human sound, we are simultaneously becoming aware of an unsuspected threat. The catastrophic impacts of noise pollution on living organisms, particularly in aquatic environments, have been demonstrated by a spate of studies in recent years. Whales and dolphins, birds and bats suffer from mechanical and industrial noise, which reduces their ability to feed and navigate, communicate and mate. Even species like shrimp and seagrass are affected. This should be unsurprising, given the well-recognized impacts of noise on human health: increased stress, cardiovascular risks, even dementia. The growing cacophony of noise pollution is emerging as our era’s greatest unrecognized menace to human and environmental health. The silver lining is that noise pollution is easy to reduce, and the impacts of doing so are positive, significant, and immediate.
What will digital bioacoustics teach us? Its implications are both practical and philosophical. We can use digital bioacoustics to protect non-human species from harm, but we can also connect with non-humans in new ways. As we begin listening anew across the Tree of Life, we are realizing that our fellow non-humans have more to say to us than we have previously suspected. This challenges deep-rooted assumptions about complex communication in non-humans, our definition of language, and the relationship between humans and our fellow Earthlings.
Karen Bakker is a professor at the University of British Columbia, and earned her PhD from the University of Oxford as a Rhodes Scholar. She is the recipient of numerous awards, including an Annenberg Fellowship (Stanford University), a Guggenheim Fellowship, and a Radcliffe Fellowship (Harvard University). An avid gardener and the mother of two daughters, she lives in Vancouver.