REPRESENTATION, PUBLIC OPINION, AND THE VOICE OF THE PEOPLE
THROUGHOUT his eight years as president, the recurrent image of Bill Clinton was that of a weather vane, constantly shifting to and fro in response to the fickle political winds of opinion polls. "Clinton's legacy is in many ways a story about polls," writes John F. Harris of the Washington Post, capturing a prominent view of Clinton. "It is true that no previous president read public opinion surveys with the same hypnotic intensity. And no predecessor has integrated his pollster so thoroughly into the policymaking operation of his White House" (Harris 2000). Some of Clinton's own staffers have taken the view that he was too reliant on polls. As Harris notes, former Clinton aide George Stephanopoulos and former Labor Secretary Robert Reich both wrote memoirs that recalled bitterly Clinton's reliance on consultants and polling.
Given the derisive tone of such portrayals, one might think that George W. Bush, Clinton's Republican successor in the White House, would carefully avoid such a shallow appearance. Not surprisingly, during the 2000 campaign, Bush repeatedly claimed "we take stands without having to run polls and focus groups to tell us where we stand" (Carney and Dickerson 2000). As president, Bush publicly sought to distance himself from a Clintonesque reliance on polls. During a press conference concerning tax reform in May 2001, for example, Bush argued, "I'm not really that concerned about standing in polls. I am doing what I think is the right thing to do. And the right thing to do is to have proposed a tax relief package that is an integral part of a fiscal policy that makes sense."1
Although Bush may have claimed not to care about polls, he funded a polling apparatus comparable in scope to that of the Clinton administration. During the 2000 campaign, Bush's team was kept abreast of public opinion through polls and focus groups paid for by the Republican National Committee and conducted by Bush's campaign pollster, Matthew Dowd (Hall 2001). Through the first two years of his presidency, Bush continued to rely on polls--especially those conducted by his principal pollster, Jan van Lohuizen. All told, in 2001, Bush spent almost a million dollars on operations to gauge the public reaction to Social Security reform and his energy plan. Though this figure is approximately half the amount Clinton spent during his first year in office, it represents a substantial sum of money (Green 2002).
Such developments should not surprise those who follow American politics. Though it may be an open question as to whether politicians pander to opinion polls or use those polls to craft support for their preferred policies (Jacobs and Shapiro 2000), what is not in dispute is that polls lie at the center of American politics. Polls provide the most obvious and ongoing link between citizens and their leaders. Regardless of one's views of the polling enterprise, the fact remains that surveys have become a critical mechanism for the communication of information between the mass public and political elites. In addition, unlike other forms of political participation, polls do not require citizens to make a significant investment of time or resources to make their voice heard. Thus polls have the potential to ensure that all citizens are heard by politicians and policymakers. Understanding the information carried in polls--how well polls measure the underlying preferences and perspectives of individuals in society--is therefore a critical political question.
In this book, I cast a critical eye on public opinion polls in the United States. I demonstrate that opinion polls may fail to equally represent the preferences of all Americans with regard to some of the most important issues of our time: racial policy, the scope of the social welfare state, and attitudes toward war. This misrepresentation arises from what I term "exclusion bias"--the exclusion of the preferences of the sometimes sizable portion of the public who say they "don't know" where they stand on the issues of the day, due either to an absence of those resources that would allow them to form a coherent opinion or to a fear of expressing sentiments that might paint them in an unfavorable light. The political voice of these abstainers is, in certain cases, systematically different from the voice of those who respond to poll questions. The existence of these "silent voices" must lead us to question whether polls truly are representative of the underlying sentiments of the entire mass public. Thus, to understand public opinion in America, we must carefully consider the political interests and values of the politically silent. In this way, we can see what information we capture with opinion polls, and--more importantly--what information we leave behind.
Public Opinion, Political Participation, and the Voice of the People
Given that democracy cannot function without some form of mass input into the political process, how should we best gauge public opinion? To make the compromises and tradeoffs essential to the functioning of a political system, we need information about both the direction and the intensity of the public will. When considering the place of public opinion in the government process, then, it is important to strike a balance that enables the broad expression of political views and recognizes that some preferences are more intensely held than others (see, for example, Dahl 1956).
Direct political participation facilitates the transmission of intense preferences and perspectives to political elites. If citizens care enough about a particular issue, they may convey their particular desires to the government in a variety of ways. They may contact government officials, donate money to political causes, or become involved in electoral campaigns (Bauer, Pool, and Dexter 1963; Herbst 1993, 1998; Kingdon 1973; Verba, Schlozman, and Brady 1995). Because the civic sphere in the United States is relatively permeable and citizens are free to voice matters of concern to them, traditional forms of participation do a fair job of ensuring that intense interests are heard in the political process.2
Though participation may represent adequately some intense interests, it does a poor job of guaranteeing political equality. Political activists, after all, do not come to the political world by chance. Instead, they are drawn disproportionately from those groups more advantaged in the resources that aid participation, such as education and disposable income. Activists therefore differ in politically consequential ways from those who do not engage in politics. As Verba, Schlozman, and Brady conclude, "the voice of the people as expressed through participation comes from a limited and unrepresentative set of citizens" (1995, 2; see also Schattschneider 1960; Verba and Nie 1972). Some interests might be muted, not because citizens lack concerns relevant to a particular controversy, but instead because they have difficulty making themselves heard on the political stage. For example, those citizens who benefit from direct government aid programs, such as welfare, by and large lack the economic and civic resources necessary to contribute to political candidates or effectively petition their representatives on their own behalf (see Verba, Schlozman, and Brady 1995).
But many observers believe that where traditional forms of participation fail, opinion polls or surveys may succeed. Over the course of the twentieth century, polls have emerged as an important tool to measure the public will. Surveys have become a major component of political reporting, and--as the discussion of the polling operations of Bush and Clinton above demonstrates--politicians are willing to expend large sums to conduct surveys (Brehm 1993; Frankovic 1992; Ladd and Benson 1992; Warren 2001).3 But polls are not merely pervasive in modern politics. More important for present purposes, they seem to ensure that the full spectrum of political interests in the political system is heard. The reason is straightforward: Surveys, if executed correctly, are conducted through random sampling. Under this system, every citizen has an equal chance of being selected for a poll, regardless of his or her personal circumstance. Furthermore, by underwriting the direct costs of participation, opinion polls ensure that disparities in politically relevant resources will not discourage the expression of politically relevant values and interests. Survey organizations, after all, contact respondents, not the other way around. In short, polls hold special appeal as a form of gauging the public's will because they appear to be free of the resource-based bias that plagues traditional forms of participation.
This conception of opinion polls as broadly representative of public sentiment has long pervaded academic and popular discussions of polls. Polling pioneer George Gallup advanced the virtues of surveys as a means for political elites to assess the collective "mandate of the people." If properly designed and conducted, Gallup argued, polls would act as a "sampling referendum" and provide a more accurate measure of popular opinion than more traditional methods, such as reading mail from constituents and attending to newspapers (Gallup and Rae 1940).4 More recently, in his presidential address at the 1996 American Political Science Association Meeting, Sidney Verba argued, "sample surveys provide the closest approximation to an unbiased representation of the public because participation in a survey requires no resources and because surveys eliminate the bias inherent in the fact that participants in politics are self-selected . . . Surveys produce just what democracy is supposed to produce--equal representation of all citizens" (1996, 3; see also Geer 1996).
Even critics of the survey research enterprise have adopted the populist conception embodied in the work of Gallup and Verba. Polling critic Susan Herbst writes, "Modern survey techniques enable the pollster to draw a representative sample of Americans. . . . The public expression techniques of the past--such as coffeehouses, salons, and petitions--did not allow for such comprehensive representation of the entire public's views" (1993, 166). Benjamin Ginsberg offers further criticisms of polls precisely because they are too inclusive of popular sentiment. Surveys, he argues in his 1986 book, The Captive Public, dilute the intensity of those political actors who choose to make their voices heard on the public stage. As Ginsberg writes, "polls underwrite or subsidize the costs of eliciting, organizing, and publicly expressing opinion. . . . As a result, the beliefs of those who care relatively little or even hardly at all are as likely to be publicized as the opinions of those who care a great deal about the matter in question" (1986, 64).
What Gallup and Verba see as a virtue, Ginsberg labels a vice, but these differences are largely normative.5 At an empirical level, there is a general agreement on one point among academics and professionals, be they proponents or opponents of the polling enterprise: opinion polls are broadly representative of popular sentiment. Whatever their limitations, surveys appear to provide a requisite egalitarian complement to traditional forms of political participation. They seem to balance the biased voice of traditional forms of participation by enabling the broad expression of political interests and thereby allowing all citizens to communicate their preferences and perspectives to those responsible for legislating and implementing public policy. Polls can provide--at least in theory--a clear picture of the public's will, which can then be used both to aid political elites in the decision-making process and to gauge the adequacy of political representation in the course of policy formation. Through opinion polls, the voice of the people may be heard.
Or so the story goes. In the pages that follow, I make the case that this view of polling is incorrect. The very process of collecting information concerning the public's preferences through surveys may exclude particular interests from the political stage. In the course of the survey, the interaction between the individual citizen and the larger social and political context of American society may foster bias in polls. Thus, the first principle of opinion polls--that they allow all citizens to make their political voice heard, regardless of their interests and values--may be faulty.
The Argument in Brief
For polls to meet their egalitarian promise, a set of fairly strong individual-level assumptions about the way in which citizens approach being interviewed by pollsters must be met.6 We must first assume that all individuals will, given the chance, express their views to the interviewer.7 Second, we must assume that the answers that individuals express (or do not express) in opinion surveys are a fair representation of their underlying individual wants, needs, and desires.8 These assumptions, until tested, remain just that--assumptions. And while reasonable, they may or may not hold in practice. During the question-answering process, individual respondents might vanish from our estimates of the public will in a way that undermines the representativeness of opinion polls. Opinion surveys are long affairs consisting of many, sometimes scores of, questions. Some respondents who agree to participate in a survey do not answer particular questions on that survey. Depending on the topic, anywhere from a handful of individuals to large portions of the respondents fail to answer questions. By volunteering a don't know response, segments of the population are effectively removing themselves from measures of the collective will of the public--their political voice is simply not heard. If random, these withdrawals are inconsequential to the flow of information regarding the preferences and perspectives of the mass public. However, if abstention from a survey question follows predictable and systematic patterns across individuals due to the particular social and political context of a given issue, public opinion, as read in polls, may suffer from what I call in this book "exclusion bias."
Exclusion bias--alternatively termed "compositional bias"--is the bias that results in measures of public opinion when particular types of political interests are not represented in opinion polls. Such bias could threaten the representational relationships central to the functioning of a democracy. Representatives seeking information concerning the full spectrum of the public's preferences may, in fact, be missing the sentiments of individuals who speak in a different political voice--the "silent voices." It is critical, then, to ascertain the nature of information lost through individuals' decisions to say they don't know where they stand on given controversies. On some questions, only a handful of respondents choose to abstain. But on other questions--as the chapters that follow demonstrate--a third or more of the population fails to give an answer. Perhaps these don't knows originate from a genuinely uncertain portion of the citizenry. Or perhaps more systematic processes grounded in the social and political context in which the survey takes place are at work. Maybe those left out of opinion polls have different views from those who are included. Without a careful examination of those respondents who abstain from survey questions, we cannot know the preferences of the silent voices.
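The logic of exclusion bias can be made concrete with a small simulation. The sketch below is purely illustrative--the population size, support rates, and the link between resources and response propensity are invented numbers, not figures from the analyses in this book. It shows how, when the citizens least likely to answer a question also hold systematically different views, the poll marginal drifts away from true collective opinion even though every citizen had an equal chance of being sampled.

```python
import random

random.seed(42)

# Hypothetical population: support for a policy declines with politically
# relevant resources, while willingness to answer the question rises with
# those same resources. All parameters are invented for illustration.
N = 100_000
population = []
for _ in range(N):
    resources = random.random()                            # 0 = resource-poor, 1 = resource-rich
    supports = random.random() < (0.8 - 0.4 * resources)   # the resource-poor support more
    answers = random.random() < resources                  # the resource-rich answer more
    population.append((supports, answers))

# True collective opinion vs. the opinion visible in the poll
true_support = sum(s for s, _ in population) / N
answered = [s for s, a in population if a]
poll_support = sum(answered) / len(answered)

print(f"true support: {true_support:.3f}")
print(f"poll support: {poll_support:.3f}")
```

Because abstention is correlated with the direction of opinion, the poll understates support (here by roughly six or seven percentage points): the "silent voices" lean one way, and their silence pulls the measured marginal toward the views of those best equipped to speak.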
Outline of Chapters
In the next two chapters, I explore how exclusion bias might arise in measures of public opinion, and how we can measure the extent of such biases. I consider how to conceptualize the opinions on issues of concern to government expressed through public opinion polls. Do these opinions reflect the voice of some but not others? Who are these silent voices? Under what circumstances might polls fail to represent the opinions of segments of the mass public?
I start down this path in chapter 1 by examining how it is that citizens approach the survey interview. If we wish to draw information about public opinion from individual-level survey data, we need to begin by understanding the behavior of those individuals. My analyses therefore begin with a discussion of how individuals come first to form and then to express opinions on political matters. I draw upon theories of attitude formation from recent literature at the intersection of cognitive and social psychology and sociolinguistic models of interpersonal relations. Together, this literature allows me to explore the effects of the cognitive processes of the respondent and social interactions of the survey setting on the opinions individuals express (and do not express) to the interviewer.
I then explore the implications of these theories of the survey process for the construction of public opinion. Political scientists and professional pollsters have typically viewed don't know responses as a nuisance--an expression of "nonattitudes." But don't know responses are important for the consideration of mass input into democratic politics. By volunteering a don't know response, segments of the population are effectively removing themselves from measures of public opinion. These withdrawals are inconsequential for our understanding of the voice of the people to the extent that such decisions are random. But, as I argue in chapter 1, don't know responses typically arise because respondents must pay costs--albeit small--to form and express their views in a survey. If the decision to abstain from survey questions follows predictable and systematic patterns either in the process of opinion formation or in the process of opinion expression, then polls may misrepresent the political sentiments of segments of the public.
In chapter 2, I move to an explicitly political framework and identify those conditions where public opinion polls might foster opinion distortions. Certainly, the gap between survey results and the public's underlying wants, needs, and desires is greater for some issues than it is for others. To identify those areas, I move from an examination of individual political cognition to a discussion of the social and political context in which the survey interview takes place. I examine the effects of the larger context in determining the types of issues where distortions are possible, if not likely. In those cases where the costs of opinion formation and expression are higher for some groups than for others, exclusion bias may develop. To identify such cases, I outline a two-dimensional typology of issue difficulty.
Carmines and Stimson (1980) have argued that particular issues may be "hard" if they require careful consideration of technically difficult choices. "Easy" issues, on the other hand, are those familiar to large portions of the mass public and may be structured by gut responses. These issues are simple and straightforward for even those citizens who pay little attention to the political world. This cognitive aspect of an issue--the cognitive complexity of an issue--defines the first dimension of my typology. The second dimension captures the ease with which respondents report their beliefs to the survey interviewer on particular political topics. The measurement of public sentiment concerning sensitive topics may be hindered by some respondents' attempts to avoid violating the social norms that govern everyday conversations. Put simply, issues may be easy or hard not only in the cognitive sense described by Carmines and Stimson, but also in a social sense. This two-dimensional conception of issue difficulty leads to specific predictions concerning the presence of exclusion bias in measures of public opinion. As issues become more difficult--in both a cognitive and a social sense--for certain groups of individuals, the potential for the exclusion of particular interests increases because those individuals who find it more difficult to answer survey questions will be more likely to abstain from those questions and, as a result, will be lost from collective measures of public opinion.
In the second portion of chapter 2, I lay out the analytic strategy I will use in the rest of the book to measure the nature and direction of exclusion bias in public opinion. The key to this empirical enterprise is to ascribe opinions to people who do not provide answers to survey questions. These opinions can then be compared to the picture of the public will gleaned from surveys, and the differences between the two groups can be assessed.
To begin this process, we need to take a close look at the determinants of individual opinion and see how the factors that influence the direction of response are related to the factors that influence whether the respondents will offer an opinion on a given question. Insofar as these two sets of factors are closely related, the potential for the creation of misrepresentative public opinion is great. For example, as I discuss later, I find that the politically relevant resources--such as education and income--that facilitate the formation of coherent and consistent opinions on social welfare policies also predispose citizens to oppose the maintenance and expansion of welfare state programs. Thus, opinion polls on social welfare policy controversies give disproportionate weight to respondents opposed to expanding the government's role in the economy.
We can then use what we know about the opinions of the question-answerers to characterize the opinions of those individuals who declined to answer the question. In effect, we can estimate what the non-answerers would have said if they were willing and able to give voice to their underlying politically relevant interests and values. This done, we can compare the unexpressed political voice of these citizens to that of those individuals who answer survey questions. We can then determine what opinion polls reveal, and, more importantly, what they conceal. However, this strategy is not in itself complete. Given that I assess the difference between these two groups, in part, by ascribing interests to individuals who opted out of answering survey questions, a healthy degree of skepticism is understandable. Where possible, then, I look for external confirmation of my results.
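The strategy just described can be sketched in miniature. In the toy example below, the covariate ("education"), the group-level support rates, and the response propensities are all invented for illustration; the book's actual analyses rely on far richer statistical models. The sketch estimates the relationship between the covariate and expressed opinion among those who answered, ascribes the predicted opinion to each abstainer, and compares the raw poll marginal to the estimate that restores the silent voices.

```python
import random
from collections import defaultdict

random.seed(0)

# Simulated respondents: (education level in {0, 1, 2}, supports_policy, answered).
# Less educated respondents support the policy more but answer less often.
# All numbers are hypothetical.
respondents = []
for _ in range(50_000):
    edu = random.choice([0, 1, 2])
    supports = random.random() < (0.7 - 0.2 * edu)
    answered = random.random() < (0.4 + 0.25 * edu)
    respondents.append((edu, supports, answered))

# Step 1: estimate support by education group among those who answered.
by_edu = defaultdict(lambda: [0, 0])
for edu, s, a in respondents:
    if a:
        by_edu[edu][0] += s
        by_edu[edu][1] += 1
rate = {e: yes / n for e, (yes, n) in by_edu.items()}

# Step 2: ascribe the group rate to each abstainer as a predicted opinion,
# then compare the raw poll marginal with the abstainer-inclusive estimate.
n_answered = sum(a for *_, a in respondents)
raw_poll = sum(s for _, s, a in respondents if a) / n_answered
imputed = [rate[edu] for edu, _, a in respondents if not a]
full_estimate = (sum(s for _, s, a in respondents if a) + sum(imputed)) / len(respondents)

print(f"raw poll marginal:   {raw_poll:.3f}")
print(f"with imputed voices: {full_estimate:.3f}")
```

In this construction the abstainer-inclusive estimate sits several points above the raw marginal, mirroring the substantive claim of the chapters that follow: when the determinants of opinion direction overlap with the determinants of opinion expression, the raw poll tilts toward the answerers. The external-validation step described above then asks whether such corrected estimates line up with outcomes, such as election returns, observed outside the survey.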
The next three chapters of the book examine the development of bias in opinion polls within specific policy areas. I examine public opinion across three diverse issue areas--race, social welfare policy, and war. By considering the measurement of public opinion across a range of important political topics, I identify the ways in which those who abstain from survey questions differ from those who answer such questions and trace out the political implications of these differences.
In chapter 3, I explicitly take up the effects of the social dynamics of the survey interview on public opinion by examining the consequences of social desirability concerns. I present analyses of NES data from the early 1990s showing that public opinion polls overstate support for government efforts to integrate schools. My analyses demonstrate that some individuals who harbor anti-integrationist sentiments are likely to mask their socially unacceptable opinions by abstaining from questions concerning those issues. Such actions ensure that a racially conservative base of opinion remains unheard through opinion polls. I also find similar effects in opinion concerning support for government intervention to guarantee jobs. I then validate these findings by demonstrating that the same methods that predict that opinion polls understate opposition to government involvement in race policy also allow me to predict the final results of biracial elections--specifically the 1989 New York City mayoral election--more accurately than the marginals of the pre-election polls taken in the weeks leading to the election. Finally, I use data from the early 1970s to show that the strong social desirability effects found in the 1990s do not characterize opinion in the earlier era. All told, these results suggest that surveys concerning government intervention to ensure racial equality--and more generally questions on racial attitudes--may provide an inaccurate picture of underlying public sentiment, depending on the larger social context in which the survey interview takes place.
In chapter 4, I consider equality of political voice in the realm of social welfare policy. This analysis indicates that expressed public opinion on matters of the overall distribution of economic resources is not entirely representative of the views of the public writ large. Using data over the last thirty years concerning opinions on economic redistribution, I find that public opinion on these matters holds a conservative bias. For individuals who tend to the liberal side of the social welfare policy spectrum, questions of redistribution are cognitively hard. I demonstrate that inequalities in politically relevant resources and the larger political culture surrounding social welfare policy issues disadvantage those groups who would be natural supporters of the welfare state. These groups--the economically disadvantaged and those who support principles of political equality--are less easily able to form coherent and consistent opinions on such policies than those well endowed with politically relevant resources. Those naturally predisposed to champion the maintenance and expansion of welfare state programs are, as a result, less likely to articulate opinions on surveys. In short, the voice of those who abstain from the social welfare policy questions is different from the voice of those who respond to such items. This result has serious consequences for the measure of public opinion because the bias mirrors the patterns of inequality found in traditional forms of political participation.
Finally, in chapter 5, I turn to the question of public sentiment concerning war in the context of the Vietnam War. I demonstrate that the imbalance in political rhetoric surrounding Vietnam in the early years of the war disadvantaged those groups who were the inherent opponents of the war. When pro-intervention rhetoric dominated elite discussion of the war in the period from 1964 to 1967, the process of collecting opinion on Vietnam excluded a dovish segment of the population from collective opinion. However, as anti-intervention messages became more common in the public sphere and the Vietnam issue became less cognitively complex for those citizens with anti-intervention views, this bias receded. So while there may indeed have been untapped sentiment concerning Vietnam, contrary to President Nixon's claim of a "silent majority," those silent citizens were, on balance, less supportive of the war effort.
All told, this book sounds a note of warning to those who seek to assess the voice of the people through surveys. The weaknesses of opinion polls are well known. As the critics of polls note, surveys do not do a very good job of conveying detailed information concerning the underlying intensity of responses. But opinion polls are typically hailed--by supporters and critics alike--as the most egalitarian means of collecting the public's views on the issues of the day. In this book, I argue that while opinion surveys may indeed be more egalitarian than other forms of participation, polls too may suffer from systematic inequalities of voice.
Opinion polls, then, like any measure of public preference, are imperfect. This book should not, however, be viewed as an indictment of the survey enterprise, or of attempts to collect individuals' opinions on the issues of the day. Instead, it is a call to understand and account for biases that arise in the collection of public opinion. To measure the public's will more accurately, we should pay closer attention to just what it is that we are measuring--the interaction between individuals' underlying sentiment and the social and political context of an issue. By integrating our knowledge of the political and social forces at work in the larger world into an understanding of the individual-level question-answering process, opinion polls may better serve the egalitarian purpose envisioned by scholars, pollsters, and political elites.
Princeton University Press