The physicist Richard Feynman once said, “What I cannot create, I do not understand.” One could add, “What I cannot accurately simulate, I do not understand.” We can be confident we understand how airplanes fly and how they behave during engine failure or turbulence because a flight simulator can accurately predict what would happen. Indeed, pilots are routinely trained for both normal and unexpected situations using flight simulators, giving them experience of situations they might never encounter in a real aircraft.
What are the prospects of simulating ourselves in a computer? At first sight, this sounds more like science fiction than fact. However, much progress has been made, and this book takes us on a grand tour of the first steps towards creating our own digital twin and of the challenges in accomplishing that goal, a fascinating journey in itself.
One of the key themes of the book is an aspiration to see as much theory used in biology as in physics. When I left physics in 1976, immediately after graduate school, to retrain all over again as a biologist, I noticed a difference at once. Physics, like chemistry, had highly advanced theory that helped guide and even predict experiments and behaviour. So, from theoretical considerations, it was possible to invent transistors and lasers and to synthesize entirely new compounds. At the extreme, it even led to the construction of a multibillion-dollar accelerator to look for the theoretically predicted Higgs boson. Physical theory is far from perfect, of course. For example, there is no good theory for high-temperature superconductivity, and we cannot predict the detailed superconducting behaviour of a mixture of metals.
By contrast, certainly in the 1970s, biology seemed largely observational and empirical. It is true that it did have one encompassing theory—that of natural selection acting as a driving force for the evolution of all life. While this theory had great explanatory power, it did not have detailed predictive power. Biology also had what I would call local theories or models: an understanding of how an impulse travels along a nerve, how various biological motors work, or how you would respond to a shot of adrenaline. But what it lacked was an overarching ability to predict from a set of initial conditions how a system, even a basic unit like a cell, would behave over time under an arbitrary set of conditions.
Part of the problem is that the number of factors involved in sustaining life is enormous. At some basic level, there is our genome, whose sequence was announced at the beginning of this century with much fanfare and the promise of ushering in a new age of biology. The genome consists of tens of thousands of genes, all expressed in varying degrees and at varying times. Moreover, the expression of genes is modulated by chemical tags added to the DNA, or proteins associated with it, that persist through cell division—the subject of epigenetics. Finally, the expression of genes and their function in the cell is the result of their interaction with each other and the environment to form a giant network. So, any early notions of being able to predict even just gene-based outcomes by knowing someone’s sequence turned out to be far-fetched. If this is the case for just a single cell, one can imagine the levels of complexity in trying to predict outcomes theoretically for an organ, let alone an entire human being.
Over the last 20 years, the biological revolution of the twentieth century has continued, but this time augmented by a revolution in computing and data. We now have vast amounts of data. Instead of a single human genome sequence, we have hundreds of thousands, as well as genome sequences from an enormous number of species. We have transcriptome maps that tell us which genes are expressed (expression being the process by which the information encoded in a gene is put to work) in which cells and when, and we have interactome maps that chart the interactions among thousands of genes. Along with this we have large-scale data on human physiology and disease, and data on the personal health characteristics of millions of individuals. All of these data are simply beyond the ability of any person, or even a group, to analyse and interpret. But at the same time, data science has also exploded. Computational techniques to link large data sets and make sense of them are advancing all the time.
Will these advances, in conjunction with a robust development of theory, allow us to eventually digitally simulate a life-form? This book argues that the confluence of these developments is allowing us to tackle the ambitious goal of simulating virtually every organ and every process in a human. When that is done, one can begin to ask questions: How will this person respond to a particular treatment? At what point is someone likely to develop a debilitating illness? Could they prevent it by taking action early on and, if so, what action is likely to work? How would one person’s immune system respond to an infection compared to someone else’s?
The book describes progress in virtually every area of computational biomedicine, from molecules and cells to organs and the individual. Reading it, I get the impression that there are areas where a digital simulation—a virtual copy of the real thing—is almost here. Other goals seem plausible, and there are yet others that seem a lot like science fiction. Yes, in principle it might be possible to achieve them eventually, but the practical difficulties seem enormous and currently insurmountable. Scientists may well debate which parts are a fantasy and which parts are a glimpse into our future. Coveney and Highfield themselves draw the line at digital consciousness.
Those of us on the more sceptical end should remember that there are two almost universal characteristics of new technologies. The first is that, in the initial stages, they do not seem to work well and their applications seem very narrow and limited. Then, in what seems an abrupt transformation but is really the result of several decades of work, they suddenly become ubiquitous and are taken for granted. We saw that with the internet, which for its first couple of decades was the preserve of a few scientists in academia and government labs before it exploded and came to dominate our lives. A second characteristic, pointed out by Roy Amara, is that when it comes to predicting the effect and power of a new technology, we tend to overestimate the short term and underestimate the long term. This often leads to great initial hype, then the inevitable disappointment, and finally eventual success. For example, the hype around artificial intelligence in the 1970s and ’80s gave way to decades of disappointment, but today the field is making enormous strides. Perhaps we are seeing the same trajectory with driverless cars.
Whether the same will be true for the creation of virtual humans remains a matter of debate, but in this book Coveney and Highfield offer us a fascinating account of the diverse efforts around the world that are now under way with that extraordinary goal in mind.
This essay is an excerpt from the foreword of Virtual You: How Building Your Digital Twin Will Revolutionize Medicine and Change Your Life by Peter Coveney and Roger Highfield.
Venki Ramakrishnan shared the 2009 Nobel Prize in Chemistry for his studies of the ribosome, the molecular machine that turns genes into flesh and blood.