
Eight Preposterous Propositions:
From the Genetics of Homosexuality to the Benefits of Global Warming
Robert Ehrlich


COPYRIGHT NOTICE: Published by Princeton University Press and copyrighted, © 2003, by Princeton University Press. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers. Follow links for Class Use and other Permissions. For more information, send e-mail to permissions@press.princeton.edu


Chapter 1

INTRODUCTION

WE LIVE IN AN AGE when the boundaries between science and science fiction are becoming increasingly blurred. It sometimes seems that nothing is too strange to be true. How can we decide which of the outlandish ideas that are constantly bombarding us might be true and which are complete nonsense? It's particularly tough to come to an informed judgment regarding ideas that have a scientific component when huge political and economic stakes are also involved, such as global warming. In such a case, it's too easy to go by our political preferences or by the opinions of the last expert we saw on TV. This book will help you look at the "evidence" critically and judge for yourself. It is similar to Nine Crazy Ideas in Science--A Few Might Even Be True. In this book, though, we'll be looking at topics that are a bit less "sciency" and closer to everyday life. But we will still be using the tools of science to analyze each topic.

Scientists try to take as little for granted as possible. They are constantly asking how we know whether something is true, and whether alternative explanations are possible. Scientists also seem to be more comfortable with uncertainty than other people. Many scientists may believe that a given theory--say, that of human-caused global warming--is likely to be true, but be far from certain about it. Such uncertainty is disquieting to most people, who want as clear a picture as possible of our planet's future. Policy-makers, in particular, want informed scientific guidance before undertaking expensive solutions to problems of uncertain severity.

People facing large uncertainties and potentially large risks may be forced to rely on their gut instincts. That's OK in one sense. Making judgments based on our emotions is understandable if our large uncertainties reflect those that are inherent in an issue. But if the uncertainties are simply a matter of our own laziness in not looking into an issue deeply enough, then relying on our instincts is very unfortunate. Public policy on important issues may be left by default to those competing interests that are best able to manipulate public opinion.

Scientists approach a controversial theory from the opposite perspective of lawyers. Lawyers are paid to make the best case possible for their client, a person they may believe to be guilty. They want to know all the evidence on the other side, but only in order to refute it better and make their case stronger. If they can't refute the evidence, they'll try any trick in the book to make the jury discount it. Scientists, on the other hand, are more like jury members. While science does include its share of cheaters, self-deceivers, and self-promoters, scientists should judge a theory by fairly weighing all the evidence on each side.

Like juries, people who approach topics from a scientific perspective also may require different levels of certainty ("beyond a reasonable doubt" or the less strict "preponderance of the evidence"), depending on what the theory involves. I suspect that on global warming, for example, the preponderance of the evidence might be enough certainty for most people. But, unlike a jury, which has to make up its mind once and for all, scientists must continue to remain open-minded to contrary evidence even after they have accepted a theory as being probably true.

Why did I choose this particular set of eight topics? As in Nine Crazy Ideas, I wanted to explore some topics that are controversial, and I hope that I've included enough controversial ones to offend just about everyone. I also wanted topics that have important public policy implications in the real world and have ample scientific data on each side of the issue. Although initially I was reluctant to tackle any topics involving paranormal or psychic phenomena, I wound up including one (the possibility that objects can be influenced by thought alone) because a great deal of experimental data actually exists on this matter.

Although in many cases I found the evidence for an idea not to be credible, this is not a debunking book. I tried to look at each idea as objectively as possible. Even though I did have some initial biases with most of the eight ideas, I also found myself changing my opinion on them--in some cases two or three times! As a general rule, I believe that in approaching a new idea it is very important not to make up your mind too quickly.

Is there a formula to apply to a given controversial idea to see if it might be true? No, but you should ask yourself questions: How do the proponents of a new idea claim to know that it's true? How might the data be interpreted differently? How can the theory be tested? Let me give you an example from a topic that I almost decided to write about, but didn't. Many people believe that violence in the media causes children who view it to become violent. There is no question that a connection exists at some level between media and real-world violence, given the copycat phenomenon. Even some terrorists are said to have gotten ideas from viewing action-adventure movies! But here we're considering the more general question of whether media violence causes children exposed to it to become violent later in life. Not having looked at the research, I am agnostic on the question of whether this view is correct. Most social scientists believe it is true, but there are some who don't.1 What are some of the kinds of questions you might ask yourself to decide whether the theory that media violence causes real life violence is true? Why don't you actually stop reading for a few minutes and make a list. You could then later compare your list to mine at the end of this chapter.

In evaluating a controversial idea it is useful to consult a wide range of sources. The Internet makes this quite easy, but it also makes it easy to wind up at the web sites of pseudo-experts. One very helpful summary of methods for checking whether the information on a web site is reliable has been prepared by Jim Kapoun, an instruction librarian at Southwest State University--see his site at www.ala.org/acrl/undwebev.html. Kapoun's checklist was prepared to help college students evaluate the reliability of any web site, but its methods are appropriate for anyone. You can hone your skills at recognizing the real and pseudo-experts on any topic by answering the specific questions that Kapoun raises about any web site. Here are some additional criteria you can use. More often than not, pseudo-experts' sites share these features: pseudo-experts (1) are usually certain of everything they claim, (2) cite their own research frequently, (3) try to impress you with fancy titles, (4) describe the suppression of their ideas by the establishment, and (5) have a clear agenda and maybe even a financial incentive. But aside from the pitfalls of being swayed by fancy-looking web sites that actually contain nonsense, the web is a fantastic tool for gaining valid information on virtually any topic.

It may strike you that I am being hypocritical in stressing the importance of being wary of pseudo-experts on any given topic. After all, am I--a physics professor--not just a pseudo-expert on many of the topics I'm writing about in this book? For that matter, how can you or I be expected to find the truth about controversial subjects that lie way outside our fields of expertise, when even the real experts disagree? I think it is actually possible for outsiders to do a competent job of analyzing evidence in many areas--but only if they have done their homework.

That homework involves learning a bit of statistics (the favorite tool of people who want to distort the truth) and enough of the basic science and vocabulary in a field to clearly understand the basis for what is being claimed. As I've already said, just follow the evidence, ask how each claim is demonstrated to be true, and don't make up your mind too quickly. If you decide too quickly, you could fall into the trap of filtering all evidence through your preconceived view and not giving contrary evidence sufficient weight--which we all do far too frequently. That trap is the very essence of prejudice.

The eight topics discussed in this book have varying degrees of credibility. In each case, after discussing the evidence on each side I give the idea a rating at the end of the chapter, according to how well the case for it has been demonstrated. In a previous book I used a "cuckoo" rating scale, which went from zero to four cuckoos, based on how crazy I considered an idea. Here I've abandoned the cuckoo scale in favor of a "flakiness" scale. An example may help clarify the difference between craziness and flakiness. Many of the ideas of modern physics are completely crazy, especially some of the paradoxical ideas of quantum physics. In fact, when one of the pioneers in this field, Wolfgang Pauli, made a presentation on one of his new crazy theories, he was told by the great physicist Niels Bohr: "We are all agreed that your theory is crazy. The question, which divides us, is whether it is crazy enough to have a chance of being correct. My own feeling is that it is not crazy enough."

Bohr understood that great revolutionary advances in science always will sound crazy at the beginning, partly because they challenge conventional wisdom, and partly because they will initially be presented in a confusing incomplete form. The tidying-up done after the fact makes great scientific advances seem much more logical, but it obscures the important roles of intuition, imagination, chance, and even aesthetics in making scientific discoveries. The true scientific method is not quite as neat and logical as we sometimes portray it to be.

Although many of the ideas of quantum physics are still crazy after all these years (since the 1930s), they are certainly not "flaky," i.e., lacking in empirical evidence or internal consistency. Conversely, new theories may be flaky, but not sound crazy at all, if they fit into your view of how the world works. My new rating scheme based on flakiness goes from zero flakes, meaning a reasonable degree of confidence that the idea is true based on the evidence, to four flakes, meaning no credible evidence for the idea. A summary of my ratings for each idea can be found in the epilogue. Obviously, these ratings are subjective and influenced by my own biases, but I will reveal those biases if I have any, so that you can decide for yourself how honestly I've dealt with each idea.

Questions to Ask in Judging Whether A Really Causes B

This section illustrates some questions you might ask to decide whether a theory claiming that A causes B is well supported by the evidence. For specificity, we'll assume that A is media violence to which children are exposed and B is real-life violence that they later commit, but the same questions could serve as a template for just about any other topic. I suspect you may be able to come up with a number of questions in addition to those listed.

  1. How exactly do the studies looking for a connection between A and B define and measure A? How do they define and measure B? Do their definitions seem reasonable?
  2. Have studies shown there is a correlation between A and B? How strong is the correlation?2
  3. If only some of the studies on A and B show there is a correlation between them, which studies seem better designed? Which studies have the greater statistical significance? (Just because a study finds a negative result, it doesn't mean A and B are unrelated--for example, the study sample may have been very small.)
  4. If some studies show that there is a strong correlation between A and B, can that correlation be explained in ways other than A being the cause of B?3
  5. How do the studies exploring the connection between A and B control for confounding variables, such as other possible causes of B? What are those other causes, and do the studies investigate how important they are compared to A?
  6. Do the studies compare situations in which A is present with "control groups" in which A is not present? If so, how can we be sure the control groups are representative?
  7. Do the studies show that the relation between A and B is a continuous one, that is, the more A is present, the more B is found later?
  8. If A really causes B, can we explain why in some places A is common but B is not? (Japan, for example, has a great deal of media violence, but little real-life violence.)
  9. If A really causes B, can we explain why at some times A becomes more prevalent while B becomes less prevalent? (Media violence has probably been increasing in the United States over time, but real-life violent crime has been decreasing for a number of years, at least until 2001.)
  10. Do the individuals doing studies on the connection between A and B appear to have strong ideological biases?
  11. Who is funding a given researcher's study?4
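Question 4 above deserves a concrete illustration. The following sketch (with entirely hypothetical numbers, written in Python) simulates a hidden confounding variable C that drives both A and B. The two end up strongly correlated even though neither one causes the other--exactly the alternative explanation the question asks you to look for.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

# Hypothetical model: a confounder C influences both A and B;
# A and B have no direct causal link to each other.
n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]    # hidden common cause
A = [c + random.gauss(0, 0.5) for c in C]     # A depends only on C (plus noise)
B = [c + random.gauss(0, 0.5) for c in C]     # B depends only on C (plus noise)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Strong correlation appears despite zero causation between A and B.
print(f"corr(A, B) = {pearson(A, B):.2f}")
```

This is why question 5 matters: a study that measured and controlled for C (say, by comparing A and B only among subjects with similar values of C) would see the apparent A-B relationship shrink dramatically.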

Statistical Significance

This is the first of many boxes throughout the book. Boxes contain information that can be skipped without disturbing the flow. Sometimes the information in the box may be a little more technical than the text. Other times it may provide definitions or examples of ideas discussed in the chapter. This box concerns the important subject of statistical significance. The statistical significance of any study is judged by the likelihood that the results could have been due to chance. Researchers in different fields have different standards regarding how small this likelihood should be for a result to be considered "significant." In many fields the criterion is "95 percent confidence," meaning that there is a 5 percent probability (often written p < 0.05) that the result could be due to chance, and therefore bogus. Clearly, the smaller the probability or p value, the higher the level of statistical significance and the more confidence we can have in a study's results. The simplest way that a study can achieve a higher level of statistical significance is to amass more data, which is sometimes costly or not feasible.
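The p value described in the box can be made concrete with a small worked example. The numbers here are hypothetical: suppose a coin suspected of bias comes up heads 60 times in 100 tosses. The sketch below computes the probability, assuming pure chance (a fair coin), of a result at least that lopsided in either direction--a two-sided binomial test.

```python
from math import comb

n, k = 100, 60  # hypothetical: 60 heads in 100 tosses

# Probability of k or more heads from a fair coin:
# sum the binomial probabilities C(n, i) / 2^n for i = k .. n.
p_one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# By symmetry, equally extreme low outcomes (40 or fewer heads)
# contribute the same amount, so double the one-sided value.
p_two_sided = min(1.0, 2 * p_one_sided)

print(f"p = {p_two_sided:.3f}")
```

The result lands just above 0.05, so by the usual 95 percent criterion 60 heads in 100 tosses is not quite "significant." Running the same computation with 600 heads in 1,000 tosses gives a vastly smaller p value, which illustrates the box's closing point: amassing more data is the simplest route to higher statistical significance.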
