Leslie Valiant on The Importance of Being Educable

We are at a crossroads in history. If we hope to share our planet successfully with one another and the AI systems we are creating, we must reflect on who we are, how we got here, and where we are heading. The Importance of Being Educable puts forward a provocative new exploration of the extraordinary facility of humans to absorb and apply knowledge. The remarkable “educability” of the human brain can be understood as an information processing ability. It sets our species apart, enables the civilization we have, and gives us the power and potential to set our planet on a steady course. Yet it comes hand in hand with an insidious weakness. While we can readily absorb entire systems of thought about worlds of experience beyond our own, we struggle to judge correctly what information we should trust.


Are there any simple takeaways from the book?

LV: There are. One is that after thousands of years of studying ourselves as humans we have made less progress than we might like to think, but computer science offers some hope for the future. One clue for the computer science claim is offered by Large Language Models, which, among the artifacts we have made, are the ones with the most human-like behavior. They incorporate little in their construction from sophisticated knowledge of psychology, linguistics, neuroscience, or the humanities. These fields may have provided some distant inspiration, but Large Language Models are based more directly on the successful scaling up of a simple idea, generalizing from examples, as formulated in the computer science of machine learning.
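To make "generalizing from examples" concrete, here is a toy sketch in the machine-learning sense of the phrase; the hidden threshold concept and the simple learner below are my own illustrative choices, not anything from the book and not how Large Language Models are actually built.

```python
import random

# Toy illustration (not from the book): "generalizing from examples".
# The hidden concept is "x >= 0.7"; the learner never sees it directly,
# only labeled samples, and must infer a rule that works on fresh data.

def true_concept(x):
    return x >= 0.7  # hidden target used only to label data

def learn_threshold(samples):
    # Estimate the threshold as the smallest positively labeled value seen.
    positives = [x for x, label in samples if label]
    return min(positives) if positives else 1.0

random.seed(0)
train = [(x, true_concept(x)) for x in (random.random() for _ in range(50))]
threshold = learn_threshold(train)

# Check how well the learned rule generalizes to unseen samples.
test = [random.random() for _ in range(1000)]
accuracy = sum((x >= threshold) == true_concept(x) for x in test) / len(test)
print(f"learned threshold ~ {threshold:.3f}, accuracy on fresh samples: {accuracy:.3f}")
```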

Is there a better way forward suggested in the book for understanding ourselves?

LV: Yes. We need a broader theory of our cognition, beyond currently implemented machine learning. Some existing concepts are not so helpful. There is general appreciation within the psychology community that the concept of “intelligence” is not well-defined. This should be a call to action, since we use the term so broadly: we have intelligence tests, pursue artificial intelligence, and so on. In my book, “educability” is proposed as a better-defined concept that may turn out to be more useful.

What would be the tangible benefits of having a better theory of human cognition?

LV: My formulation of “educability” is that of a capability for soaking up knowledge so as to be able to apply it, very reminiscent of what we colloquially mean by education. A clear longer-term goal therefore is to contribute to improving education. The idea that education needs a better science base, from which the process of education can be better understood, has long been advocated. I think computer science notions, such as the one in the book, offer fresh opportunities.

A second benefit would be that we could incorporate it into technology. Turing’s theory of computation is the outstanding paradigm. Before his time, “computation” was a psychological process done mostly by humans. His theoretical formulation of universal computing gave us our digitally based civilization. When I wrote down my theory of learning forty years ago, I gave this Turing example as motivation in the first paragraph. The challenge now is to take the next step and expand our formulation of cognition beyond currently practiced machine learning. If we are successful, better technology will again follow.

Are there any action items for humans suggested by the book?

LV: Yes. One important distinction, I think, is that when you are learning in an environment, the environment can be helpful, neutral, or adversarial to your quest to learn. Adversarial can mean that the environment misleads you on particular facts by deliberately giving false information, but it can be much worse: it may try to wreck your learning process altogether. One widely discussed phenomenon of this kind is gaslighting, where the information supplied is so consistently false that it achieves its intent of ruining your feeling of having any competence in anything at all. I believe that our learning mechanisms evolved in a world that was essentially neutral, and that we are therefore badly prepared for the current world, where we are bombarded with adversarially prepared propaganda. In a neutral world it is reasonable that if some occurrence repeats ten times, we should expect it to repeat in the future. But in an adversarial world, if a propagandist tells us something ten times, that something may well be false.
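To illustrate that distinction in the simplest possible terms (a toy sketch of my own, not a model from the book), compare a frequency estimate built from honest observations with one built from a propagandist's repetitions:

```python
import random

# Toy contrast: in a neutral world, repeated observations are a fair guide
# to the future; an adversarial source can repeat a claim arbitrarily often
# regardless of its truth, so repetition alone carries no evidence.

random.seed(1)
TRUE_RATE = 0.8  # how often the event really happens in the neutral world

# Neutral world: ten honest samples of whether the event occurred.
neutral_obs = [random.random() < TRUE_RATE for _ in range(10)]
neutral_estimate = sum(neutral_obs) / len(neutral_obs)

# Adversarial world: a propagandist asserts the event ten times,
# even though the claimed event never actually happens (true rate 0).
adversarial_obs = [True] * 10
adversarial_estimate = sum(adversarial_obs) / len(adversarial_obs)

print(f"neutral world:     estimate {neutral_estimate:.1f} vs reality {TRUE_RATE}")
print(f"adversarial world: estimate {adversarial_estimate:.1f} vs reality 0.0")
```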

In a free society the prospects of controlling such propaganda at its source have their limits. I think we have to make a long-term investment through education to harden the population, from an early age, against such adversarial propaganda. The goal would be safety against propaganda, both traditional and AI-enabled. This is a proposed action item that becomes clear, and perhaps even urgent, once the neutral/adversarial distinction in learning is pointed out.


Leslie Valiant is the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. Recipient of the Turing Award and the Nevanlinna Prize for his foundational contributions to machine learning and computer science, he is the author of Probably Approximately Correct and Circuits of the Mind.