
Credit and Blame
Charles Tilly


CHAPTER 1

CREDIT, BLAME, AND SOCIAL LIFE

In Dostoevsky’s chilling novel Crime and Punishment, poverty-stricken and ailing ex-student Rodion Romanovich Raskolnikov figures first as antihero, then finally as hero. At the book’s very start, Raskolnikov descends the stairs from his shabby room to the St. Petersburg street. As he reflects on the crime he is contemplating, he mutters to himself:

Hm . . . yes . . . a man holds the fate of the world in his two hands, and yet, simply because he is afraid, he just lets things drift—that is a truism . . . I wonder what men are most afraid of . . . Any new departure, and especially a new word—that is what they fear most of all . . . But I am talking too much. That’s why I don’t act, because I am always talking. Or perhaps I talk so much just because I can’t act.1

Raskolnikov soon summons up the courage—or the frenzy—to commit a viciously violent act. With a stolen axe, he murders the aged pawnbroker Alyona Ivanovna, cuts a greasy purse from around the old woman’s neck, fills his pockets with pawned objects from a chest underneath her bed, misses thousands of rubles in a nearby chest of drawers, and slaughters the old woman’s long-suffering sister Lizaveta Ivanovna when Lizaveta arrives unexpectedly.

Raskolnikov then flees in panic down the stairs, almost gets caught on the way out, rushes to his miserable room, lies down feverish and exhausted, gets up to go out with his loot, hides it under a big stone in a faraway courtyard, and never retrieves his ill-gotten gains from their hiding place. Most of the novel revolves around changes in relations between Raskolnikov and other people as the imperial police close their net around him. Before the book’s sentimental finale, Raskolnikov remains incapable of returning the love and admiration friends and family lavish on him despite his surly treatment of them.

With his brutal violence, Raskolnikov hopes confusedly to rise above credit and blame. Yet at his trial witnesses testify to a series of extraordinary charitable and even heroic acts Raskolnikov performed while at the university: supporting the old, ailing father of a dead classmate, rescuing children from a burning room, and more. Those deeds, his voluntary confession, and his debilitating illness win him a short prison sentence of eight years. But Raskolnikov takes no credit for charity and heroism. He identifies himself with heroes like Napoleon. They—he thinks—took their good deeds for granted. They did not hesitate to destroy for the larger good of humanity.

Later, in a Siberian prison for his crime, Raskolnikov reflects again:

My conscience is easy. Of course, an illegal action has been committed; of course, the letter of the law has been broken and blood has been spilt; well, take my head to satisfy the letter of the law . . . and let that be all! Of course, if that were the case, many benefactors of mankind who did not inherit power but seized it for themselves, should have been punished at their very first steps. But the first steps of those men were successfully carried out, and therefore they were right, while mine failed, which means I had no right to permit myself that step.2

Although he is paying the penalty for his crime—hard labor in Siberia—Raskolnikov still refuses to accept the blame.

In his book’s closing scenes, however, Dostoevsky breaks the somber spell. The love of Sonya, the former prostitute who has accompanied Raskolnikov to Siberia, redeems the antihero and starts him toward a new life. At the very end, Dostoevsky paints in the parallel with Christ’s raising Lazarus from the dead. Life, for Raskolnikov, finally entails earning credit and taking blame. Perhaps the world’s Napoleons can escape the binding of human relations, Dostoevsky tells us. The rest of us, Dostoevsky implies, have no choice but to take responsibility for our actions, good or bad.

The lesson cuts both ways: social life involves taking or giving credit and blame, but assignment of credit and blame also involves relations to other people. Nihilists, saints, and utilitarians may imagine worlds in which relations to specific other humans don’t matter so long as accounts come out right with the cosmos, with the gods, or with humanity at large. They are rejecting their own humanity. Raskolnikov’s very effort to escape credit and blame for his actions made the point. In so doing, he was denying his obligations to specific other people, including his mother, his sister, his companion Sonya, and his faithful friend Dmitri Prokofych Razumikhin. For the rest of us ordinary mortals, however, getting relations with specific other people right matters fundamentally.

Following that principle, this book examines how people assign credit and blame for things that go right or wrong. It shows that crediting and blaming are fundamentally social acts. They are doubly social. First, people living with others do not settle for Raskolnikov’s indifference to responsibility. Instead, they insist that when things go right or wrong someone caused them, and should take responsibility for the consequences. They don’t settle for attributing the consequences to luck or fate.

Second, people spend great effort in assigning that responsibility to themselves and others. They complain noisily when other people deny due credit or blame. How people give credit and blame to others (or, for that matter, demand credit for themselves) depends at first on any previously existing relations between the creditor and the credited, the blamer and the blamed. But the very acts of crediting and blaming then define or redefine relations between the parties. This book shows how.

Think of your own daily life. Simply listen to other people’s conversations at lunch, during coffee breaks, or on the bus. We all discuss repeatedly who deserves credit and who is to blame, especially when we don’t think someone (including ourselves) has received just deserts. Even when the people involved think justice has been served, they put serious effort into allocating credit and blame: they write award citations, praise children who do well, pronounce sentences on convicted criminals, cluck their tongues over the latest scandal.

Stories about credit and blame don’t simply spark the passing interest of stories about newly discovered dinosaurs, the latest movie star romance, or antique automobiles seen on the street. They call up empathy. They resonate because they raise issues in our own lives, whether or not we have any direct connection with the people involved. As we will see, in war, peace, politics, economics, and everyday social life, people care greatly about the proper assignment of credit and blame. This book asks how people actually assign credit and blame.

THE SOCIAL LIVES OF CREDIT AND BLAME

The origins of the words “credit” and “blame” clearly communicate their social basis. Credit comes from the Latin credere, to trust or believe. The verb’s past participle creditum meant a thing entrusted to someone else, including a loan. No credit could exist without a relation between the persons giving and receiving credit. According to the Oxford English Dictionary (OED), still current meanings of credit include:

  1. belief, credence, faith, trust
  2. the attribute of being generally believed or credited
  3. favorable estimation, good name, honor, reputation, repute
  4. personal influence based on the confidence of others
  5. honor or commendation bestowed on account of a particular action or personal quality

All except the first (which could consist simply of an individual’s confidence in the earth’s existence) strongly imply relations between givers and receivers of credit.

Blame comes from the Latin blasphemare, to revile or blaspheme. Blame only makes sense when some relation exists between the blamer and the blamed. (People do, of course, sometimes blame fate, their bad luck, evil spirits, the gods, or even themselves for their ill fortune. But even in these extreme cases they are talking about relationships between themselves and the originators of their misfortune.) Again the OED brings out the word’s social basis: “the action of censuring; expression of disapprobation; imputation of demerit on account of a fault or blemish; reproof; censure; reprehension.” A blames B, whether B deserves it or not.

Every act of crediting or blaming, however implicitly, invokes some standard of justice: she got (or failed to get) what she deserved. That standard applies to the object of credit or blame. If you or I assign credit or blame to someone else, furthermore, we necessarily refer to one justification or another.3 Here we detect a difference between credit and blame: credit calls up a justification that associates giver and receiver in the same moral milieu, while blame separates two moral settings from each other. As I engage in “the action of censuring,” I justify my own distinction from the culprit’s world.

Persons who give or receive credit and blame care greatly about justice and its miscarriages. We observers, however, need not worry so much about whether they have acted correctly. This book does not seek general principles of right and wrong action. Here, we ask instead how people assign credit and blame, however appropriately they do so by our personal standards.

We could think of that as primarily a cognitive and emotional question: What mental and visceral stirrings lead an individual to conclude that she or someone else deserves credit or blame for something that has happened? That is how Charles Darwin set up the problem.

Darwin’s third great book, The Descent of Man, focused on cognitive and emotional bases of morality. Darwin laid out four likely causes for the human moral sense: (1) instinctive sympathy of all higher animals for members of their own social groups, (2) memories of past actions and motives that reinforced the satisfaction from making enduring social instincts prevail over short-term desires, (3) reinforcement of the first two by language and communication with other group members, and (4) habit including “obedience to the wishes and judgment of the community.”4

Although Darwin did not single out credit and blame directly, he did conclude that

If any desire or instinct, leading to an action opposed to the good of others, still appears to a man, when recalled to mind, as strong as, or stronger than, his social instinct, he will feel no keen regret at having followed it; but he will be conscious that if his conduct were known to his fellows, it would meet with their disapprobation; and few are so destitute of sympathy as not to feel discomfort when this is realised.5

Psychologists and neuroscientists do not often use the word “instinct” these days. Now that they can simultaneously run experiments and watch the brain at work, however, they are confirming Darwin’s general argument: Sociable moral principles evolved in the higher animals, and depend at least partly on relations to other group members, and on punishment proportionate to offenses.6 To some extent, furthermore, almost all human beings prefer to behave in ways that get approval from their fellows.7 Most of us reject Raskolnikov as our model.

In thinking about credit and blame, we therefore face an interesting choice. We could concentrate on the deep individual psychological processes, inborn or learned, that go on as people assign credit and blame. Or we could focus on how people deal with each other as they assign credit or blame. This book takes the second tack. While giving due respect to built-in moral propensities, it emphasizes a fascinating trio of related questions: What social processes produce the singling out of this individual or that as worthy of credit or blame? Having singled out someone as worthy of credit or blame, what do people do about it? How does the assignment of credit and blame affect the lives of the people involved?

All of us have enough personal experience with credit and blame to check general explanations against our own observations. My only firing from a job, for example, took place in a Chicago suburb, Elmhurst, during World War II. As a young teenager, I earned precious pocket money in a neighborhood grocery store after school and on weekends by unloading incoming shipments, stocking shelves, sweeping floors, and helping with home deliveries.

One day a bigger, stronger stock boy and I were unpacking cartons of breakfast cereal and stacking them. We (self-serving memory says “he”) invented the labor-saving method of pulling cereal boxes from the carton and throwing them to each other across about six feet of distance, shouting and laughing as we tossed boxes of Wheaties and corn flakes. The store’s co-owner walked into the middle of our jamboree. He fired me, but not my partner, who got off with a warning. Although the boss probably had other reasons for getting rid of me, I felt an acute sense of injustice, not to mention the blame I faced when I reported the news to my parents, who were barely scraping by on my father’s uncertain income. I haven’t written this book to avenge that wrong, forgotten for more than sixty years. But it illustrates the personal impact of blame.

Credit and blame operate on a much larger and weightier scale than a teenager’s work history. During the 1980s, Latin American regimes that had thrown off dictatorships began establishing truth commissions that inquired into the abduction and killing of those dictatorships’ enemies as well as the seizure and adoption of the enemies’ children. The practice of truth commissions then generalized, most famously in the South African Truth and Reconciliation Commission presided over by Archbishop Desmond Tutu. During the twenty years beginning in 1982, more than twenty major truth commissions formed throughout the world. In 2001, a Ford Foundation–backed International Center for Transitional Justice started to provide guidance for the setting up of truth commissions.8 Box 1.1 lists the commissions established from 1982 to 2002. In all these cases, either a change of regime, a peace settlement to a civil war, or both allowed current national leaders to look back at the harm done by previous holders of power. They attempted reconciliation through confession. In the case of East Timor,

A regulation issued on July 13, 2001 by the UN Transitional Administration in East Timor established a Commission for Reception, Truth and Reconciliation with a three part mandate: (1) to investigate human rights violations committed there between April 1974 and October 1999, resulting in the death of an estimated 200,000 East Timorese; (2) to facilitate reconciliation and integration of minor criminal offenders who submit confessions, through local “Community Reconciliation Processes”; and (3) to recommend further measures to prevent future abuses and address the needs of victims. After a months-long public nomination and selection process, seven national commissioners were sworn in on January 21, 2002 in Dilli.9

BOX 1.1
Truth Commissions, 1982–2002 (dates of establishment in parentheses)

Bolivia (1982): National Commission of Inquiry into Disappearances

Argentina (1983): National Commission on the Disappeared

Uruguay (1985): Commission for the Investigation of the Situation of the Disappeared and Related Events, plus three other commissions, 1985–2000

Zimbabwe (1985): Commission of Inquiry, results still unpublished

Philippines (1986): Presidential Committee on Human Rights

Chad (1990): Crimes and Misappropriations Committed by Ex-President Habré, His Accomplices and/or Accessories

Chile (1991): National Commission for Truth and Reconciliation

Nepal (1991): Commission of Inquiry to Find the Disappeared Persons

El Salvador (1992): Commission on the Truth for El Salvador

Germany (1992): Study Commission for the Assessment of History and Consequences of the SED Dictatorship in Germany

Guatemala (1994): Historical Clarification Commission

Haiti (1994): National Truth and Justice Commission

Sri Lanka (1994): Commissions of Inquiry into the Involuntary Removal or Disappearance of Persons

Uganda (1994): Commission of Inquiry into Violations of Human Rights

South Africa (1995): Commission of Truth and Reconciliation

Ecuador (1996): Truth and Justice Commission

Nigeria (1999): Commission to Investigate Human Rights Abuses

Sierra Leone (1999): Truth and Reconciliation Commission

Peru (2000): Truth and Reconciliation Commission

South Korea (2000): Presidential Truth Commission on Suspicious Deaths

East Timor (2001): Commission for Reception, Truth and Reconciliation

Ghana (2001): National Reconciliation Commission

Panama (2001): Truth Commission to Investigate Human Rights Violations

Serbia and Montenegro (2002): Truth and Reconciliation Commission

Source: USIP 2005


Such truth commissions usually devoted less effort to establishing the truth—what really happened—than to organizing confession and reconciliation. But they certainly worked with both blame and credit. They provided an opportunity for oppressors to confess their wrongs, something like Raskolnikov’s final acceptance of his past under Sonya’s influence. They also allowed new national leaders to take and give credit for earlier suffering and present magnanimity.

Not all national leaders took that path. Mozambique’s President Joaquim Chissano, for example, rejected it.10 Still, Bolivia, Argentina, Uruguay, and twenty other countries found that they could work their way toward peace by welding together credit and blame in truth commissions. If the point was to make a definitive transition to democracy, most of the commissions failed. Only a minority of the regimes listed in Box 1.1 (notably Argentina, Uruguay, Chile, Germany, South Africa, and South Korea) have so far moved securely into democratic territory. But in all cases, public airing of the dark past assigned blame to the perpetrators while giving due credit to the victims, survivors, and successors. It drew a line between worthy and unworthy citizens. It then gave repentant perpetrators a chance to cross the line into rehabilitation.

JUDGING CREDIT AND BLAME

In firing an unsatisfactory worker, setting up a truth commission, and a thousand other assignments of credit or blame, people are making surprisingly similar judgments. They are making judgments of outcome, agency, competence, and responsibility. Truth commissions and other judges identify bad things that happened, look for their agents, decide whether the agents had the competence to produce the bad outcomes, and ask further whether the agents bear the responsibility for those outcomes because they acted with knowledge of the likely consequences.

Assigning credit or blame to someone, then, means identifying that someone as the agent who caused some outcome, whether meritorious or deplorable. It means making someone an effective agent. The more serious the outcome of the agent’s action, the greater the potential credit or blame. But assigning credit or blame also imputes responsibility to the agent: she didn’t do it accidentally, unwittingly, or out of pure impulse. Instead she performed more or less deliberately with knowledge of the likely consequences. What’s more, the agent must be competent, capable of deliberated action. We may scream at the toddler or dog that pulls a food-laden tablecloth from the table or thank our lucky stars that the toddler or dog set up a howl when a menacing stranger came through the door. But neither one gets blame or credit for a fully responsible act.11

Outcomes obviously vary in gravity. In the cosmic balance, a teenager’s firing in the 1940s pales by comparison with the wrongs addressed by truth commissions. Think of it in terms of an act’s impact on value. If an action has only a trivial impact on the value of assets and capabilities held by the people it affects, we estimate that value as close to 0. If, in contrast, whole lives are at stake, we estimate that value as high: close to 1, on a scale from 0 to 1. How much change in value the action produces measures its weight.

We must then distinguish between positive and negative changes in value: positive if an action enhances assets and capabilities, negative if the action diminishes assets and capabilities. Saving a dozen lives produces high positive value. Killing a dozen people—unless they happen to be enemy soldiers—produces high negative value. Combined with agency, competence, and responsibility, an outcome’s value (positive or negative) guides the assignment of credit and blame.
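
Tilly states this scheme verbally rather than as a formula, but its moving parts—a signed change in value, scaled by judgments of responsibility and competence—can be restated as a toy calculation. The short Python sketch below is purely illustrative: the function name, the 0-to-1 scales, and especially the multiplicative way of combining the ingredients are assumptions made for the example, not Tilly’s own formalism.

# Illustrative sketch only: the book presents this scheme verbally, not as a
# formula. The multiplicative combination below is an assumption made for
# the example.

def assess(value_change, responsibility, competence):
    """Assign credit or blame for an outcome.

    value_change:   change in value produced by the act, on a -1..+1 scale
                    (positive enhances assets and capabilities, negative
                    diminishes them).
    responsibility: 0..1, how deliberately the agent acted with knowledge
                    of the likely consequences.
    competence:     0..1, how capable the agent was of deliberated action
                    (a toddler or a dog scores near 0).
    Returns a verdict ("credit", "blame", or "neither") and its magnitude.
    """
    weight = abs(value_change) * responsibility * competence
    if weight == 0:
        # No change in value, or no competent, responsible agent:
        # neither credit nor blame attaches.
        return ("neither", 0.0)
    return ("credit" if value_change > 0 else "blame", weight)

# Saving a dozen lives deliberately: high positive value, full responsibility.
print(assess(0.9, 1.0, 1.0))    # ('credit', 0.9)
# Saving a life accidentally: same outcome, little deliberation, less credit.
print(assess(0.9, 0.1, 1.0))    # ('credit', ~0.09)
# A toddler pulling down the tablecloth: harm done, but no competence.
print(assess(-0.3, 1.0, 0.0))   # ('neither', 0.0)

On this reading, the examples in the surrounding paragraphs fall out directly: the deliberate rescuer outscores the accidental one, and the toddler earns neither credit nor blame however great the damage.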

Another important qualification: responsibility does not necessarily equal cause. Your judgment, my judgment, and a medical specialist’s judgment as to what actually caused a given hospital patient to die often turn out to be irrelevant for the assignment of blame. Cause-effect connections usually play only a secondary and contingent part in determination of responsibility. That determination typically emphasizes judgments of intent and competence. Even legal proceedings for adjudication of responsibility normally center not on exactly what caused a given outcome, but on what the average competent person (whether doctor, lawyer, engineer, or ordinary citizen) is supposed to know and do.

A time-honored legal doctrine defines the “reasonable person” as a standard for such judgments. Here is the definition from Black’s Law Dictionary (7th edition):

A hypothetical person used as a legal standard, esp. to determine whether someone acted with negligence. The reasonable person acts sensibly, does things without serious delay, and takes proper but not excessive precautions—Also termed reasonable man; prudent person; ordinarily prudent person; reasonably prudent person.

The legal dictionary goes on to say that the reasonable person is not simply the average person but the prudent one.

It also defines reasonable care: “As a test of liability for negligence, the degree of care that a prudent and competent person engaged in the same line of business or endeavor would exercise under similar circumstances.” In the case of medical malpractice, for example, testifying physicians speak mainly about the prevailing standards of practice in their field for the treatment of a given condition, not about exactly what caused the disability or death in question.12 Judge and jury must decide whether the medical personnel accused of malpractice followed widely accepted procedures.

If an act produces no change in the status quo—no change in value—no one receives credit or blame. The more it increases value, the greater the credit, but only to the extent that the agent exercises competent responsibility. Saving a life accidentally garners less credit than saving a life deliberately, especially if the lifesaver faces serious risks by doing so.

Advice books by famous successful corporate executives typically take credit in exactly that way. Through my own deliberate efforts, they say, I added to my corporation’s value. Real estate mogul and TV star Donald Trump illustrates the genre. His brash, boastful book How to Get Rich tells you how to emulate him. If you do so, maybe you, too, will make five billion dollars:

More and more, I see that running a business is like being a general. Calling the shots carries a great deal of responsibility, not only for yourself, but for your troops. Your employees’ lives, to a large extent, are dependent on you and your decisions. Bad strategy can end up affecting a lot of people. This is where being a leader takes on a new dimension. Every decision you make is an important one, whether there are twenty thousand people working for you or just one.13

To make big money, be decisive, daring, clear, and focused. Your competence will add value to the activity. Of course, it’s a lot easier to take that sort of credit if you’re a powerful boss, and everyone who knows otherwise remains under your thumb.

Similarly, former General Electric CEO Jack Welch boasts that his “candor” made the difference:

From the day I joined GE to the day I was named CEO, twenty years later, my bosses cautioned me about my candor. I was labeled abrasive and consistently warned that my candor would soon get in the way of my career.
   Now my GE career is over, and I’m telling you that it was candor that helped make it work. So many more people got into the game, so many voices, so much energy. We gave it to one another straight, and each of us was better for it.14

Welch’s honest appraisal of performances, he tells us, made the company hum. It added value. Jack Welch’s responsibility and competence produced the positive outcome. No wonder that Barry Diller, himself one of America’s most influential and best paid corporate executives, exclaims that “Jack is a life force.”15

Blame operates in the opposite direction. The more an act decreases value, the greater the blame, but again only to the extent that the agent exercises competent responsibility for the act. In cases of corporate corruption, stockholders, courts, and the general public put plenty of energy into figuring out who knew enough to cheat, and did it. The combination of competence and deliberate promotion of what turns out to be a corporate disaster brings serious blame. Something similar happens regularly in politics. A member of the apartheid-era South African Defense Forces (SADF) whose gun went off unexpectedly but lethally during a demonstration receives less blame than one who chased down an activist and shot him point-blank.

Still, we might not want to let the SADF professional off scot-free. Given the cruelty of apartheid, we might well decide that the very act of joining the regime’s repressive forces deserved full blame. After all, it directly engaged the agent’s responsibility and connected him or her with the immense harm those forces did to the country’s African population. We would still be chaining together judgments of agency, outcome, responsibility, and competence. On the whole, victims of visible damage do not settle for “Things happen. It was the breaks.” They look for someone or something to blame.

BLAME AND WITCHCRAFT

South African witchcraft provides another very different version of blame from the blame for racial segregation and oppression. By 1990, the apartheid regime that had run the country since 1948 was collapsing. After 27 years in captivity, African National Congress (ANC) leader Nelson Mandela became a free man that year. Although the National Party’s F. W. de Klerk still formally held power, he was governing in close collaboration with Mandela and the ANC. Mandela would win the next presidential election, in 1994.

In 1990, Australian-American political analyst Adam Ashforth published an important book on apartheid’s legal and political history. On a visit to South Africa, Ashforth then almost accidentally began a new career as a political ethnographer. From 1990 onward, he lived repeatedly with a family in Soweto (South West Township), the huge African settlement situated a few miles southwest of Johannesburg. Ashforth became part of Soweto’s social life. Despite standing out as a tall Caucasian, he quickly settled into local routines. He hung out with youths on Soweto streets, drank Castle beer with friends in shebeens, and played the violin in Zulu bands.

During a postdoctoral year in the 1980s, Ashforth had worked on his apartheid book in the research center I ran at the New School for Social Research, in New York City. We became good friends. When lectures and conferences brought me to South Africa in 1990, Ashforth invited me to stay with him in his new Soweto home. Among a number of other friends, he introduced me to a young man named Madumo. He later wrote an extraordinary book about Madumo’s encounter with witchcraft.

Ambitious Madumo tried hard to get himself a university education, and persuaded Ashforth to help support his tuition and fees. (After meeting me, Madumo also tried to hit me up for school money, but somehow I evaded him.) When Ashforth returned to Soweto from New York one time in the later 1990s, however, Madumo had dropped out of sight. He no longer lived with his family, and had lost contact with almost all of his friends. Ashforth tracked him down, learning that Madumo had become a victim of witchcraft. More exactly, his younger brother and sister had accused him of witchcraft, and thrown him out of the family house. When their mother had died, other family members suspected that the uppity Madumo had fed her lethal magic herbs. Madumo knew he had not done the deed, but he came to think that someone or something had cursed him.

By the time Ashforth found him, Madumo had become convinced that he could only remove the curse by appeasing his angry ancestors. As he pulled on a cigarette, he complained to Ashforth:

“Have you ever known me like this before? No. I’m telling you, something is seriously wrong. Seriously wrong. And I’ve tried so hard. But look what happened. Look at me! I’m an outcast. Even my family have turned their backs on me. Even my friends. Why?”
   He inhaled again. “There must be a reason. There must. So that is why I’m questioning about these ancestors.” He paused and looked up from where he had been studying a scorch mark in the blanket. I met his eyes but had nothing to say. “You know yourself, Adam. I never used to be thinking too much about these things of witchcraft and ancestors, even if my mother was spiritually inclined. But now, I have to face it. Something is wrong. Seriously wrong. I can’t deny it.”
   “So you blame your ancestors?”
   “It’s not blame, exactly,” he explained. “It’s like they are forgetting me. Forgetting me because they think I’ve forgotten them.”16

Madumo went on to say that the ancestors were refusing to protect him, no doubt because he had ignored them. When his mother was alive his brothers had gone to honor the ancestors’ graves, but Madumo had dismissed those practices as expensive superstitions. Now he regretted his neglect.

Reluctantly, Ashforth concluded that he would have to help his friend conciliate the ancestors. The first phase consisted of Madumo’s purification under the direction of an African healer, aided by members of an evangelical church—weeks of forced vomiting and anxious conversations with prophets. Then came three more phases: time to slaughter a chicken and hold a small feast in honor of Madumo’s late mother, a wrenching, futile attempt to reconcile with the brother and sister who had accused him of witchcraft, and a period of estrangement between Madumo and Ashforth.

Finally, on Ashforth’s money, Madumo returned to the land of his ancestors, near the Botswana border, for his rite of expiation. That meant traveling to his ancestors’ graves, organizing the sacrifice of a ram, sponsoring the brewing of beer, and staging a large ceremonial party for all who cared to attend, while Madumo himself abstained from the meat and beer. Ashforth’s book chronicles the ordeal, and Madumo’s recovery from the curse.

As a staunch western rationalist I long resisted Ashforth’s claim that witchcraft was a compelling reality. But Ashforth eventually convinced me that in South Africa and elsewhere belief in its efficacy strongly shapes social life. In a later book, Ashforth stood back to reflect more generally on South Africa’s problem with witchcraft. He saw it as threatening the country’s hard-won democracy for two reasons. First, as some Africans move ahead while the great mass lose ground, the envy that feeds suspicions of witchcraft becomes more prevalent and pernicious. Second, the official denial of witchcraft means that from top to bottom the government fails to confront a problem felt acutely by ordinary South Africans in their daily lives: if the government can’t check witches, how can it possibly improve life for most of the population?

Ashforth ponders the situation:

No one can understand life in Africa without understanding witchcraft and the related aspects of spiritual insecurity. For those of us who derive our understanding of the world from the heritage of the European Enlightenment, however, witchcraft in the everyday life of Africa is enormously difficult to fathom. Many Africans insist that we should not even try, arguing that the outsider’s interest in African witchcraft is merely a voyeuristic trifling with the exotic, a distraction from the more important issues of poverty, violence, and disease pressing upon the continent. They remind us that throughout the history of colonialism, not only were European attitudes to African spirituality derogatory, but the colonial fascination with African witchcraft served to perpetuate stereotypes of African irrationality and grounded colonial claims that Africans were incapable of governing themselves without white overlords. I might be inclined to agree with them were it not for the fact that I have seen too much of the damage that the fear of witchcraft can cause.17

Although Ashforth got some credit for helping rescue Madumo from witches, witchcraft in general does not much concern credit. South Africans don’t boast about becoming witches. But witchcraft centers on blame. It poisons social life by infusing personal relations with the suspicion that a bad outcome occurred because someone else exercised enough agency, responsibility, and competence to cause grievous harm.

The case of witchcraft clarifies a crucial feature of credit and blame. Far beyond the assignment of credit and blame, people across the world typically package their social experiences in stories: explanatory narratives incorporating limited numbers of actors, just a few actions, and simplified cause-effect accounts in which the actors’ actions produce all the significant outcomes.18 Stories that Madumo’s family and friends told about him almost ruined his life. Stories simplify. Witchcraft stories require little more than a witch, an act of witchcraft, an object of witchery, and an evil outcome. Their very simplicity increases their power.

Stories matter greatly for social life in general because of three distinctive characteristics:

  1. Stories belong to the relationships at hand, and therefore vary from one relationship to another; a mother gets a different story of a broken love affair than does a casual friend.
  2. They rework and simplify social processes so that the processes become available for the telling; “X did Y to Z” conveys a memorable image of what happened.
  3. They include strong imputations of responsibility, and thus lend themselves to moral evaluations. This third feature makes stories enormously valuable for evaluation after the fact. It also helps account for people’s changing stories of events in which they behaved less than heroically.

As compared with scientific accounts of the same events or outcomes, everyday stories radically simplify cause-effect connections. They trot out a few actors whose dispositions and actions cause everything that happens within a limited time and place. The actors sometimes include supernatural beings and mysterious forces—for example, in witchcraft, as an explanation of misfortune—but the actors’ dispositions and actions explain what happened. Madumo’s family packaged their accusations of witchcraft into stories about what he had done to his mother. Madumo then adopted the story of having offended his ancestors.

As a result, stories inevitably minimize or ignore the intricate webs of cause and effect that actually produce human social life.19 Adam Ashforth lost his struggle to substitute a western rationalist story of Madumo’s troubles for Madumo’s family’s stories and Madumo’s own stories about those troubles. But stories lend themselves beautifully to judgment of the actors and to assignment of responsibility. They provide marvelous vehicles for credit and blame.

THE POLITICS OF CREDIT AND BLAME

To be sure, all stories don’t simplify equally. Dostoevsky’s Crime and Punishment, after all, overflows with stories, many of which we only start to understand as other stories about Raskolnikov and his loved ones fall into place. But the great bulk of stories we hear and tell in everyday life convey their agents, causes, and effects in radically simplified ways: someone did something to someone else, and that caused some outcome.

Although deals and compromises fill the back streets of politics, its great plazas teem with stories of credit and blame. A great deal of public politics in the United States and elsewhere consists of taking or denying credit, assigning or resisting blame. The country’s very founding document, the 1776 Declaration of Independence, adroitly combined credit and blame. Speaking for the “Representatives of the United States of America in General Congress Assembled,” a final statement of complaints declared:

Nor have we been wanting in attentions to our British brethren, we have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us, we have reminded them of the circumstances of our emigration and settlement here, we have appealed to their native justice & magnanimity, and we have conjured them by the tyes of our common kindred, to disavow these usurpations, which would inevitably interrupt our connections & correspondence, they too have been deaf to the voice of justice and of consanguinity; we must therefore acquiesce in the necessity which denounces our separation and hold them, as we hold the rest of mankind, enemies in war, in peace friends.20

Here’s the story: They—“our British brethren”—had the agency, responsibility, and competence to prevent the sad outcome, and therefore shared the blame with king and Parliament.

A Committee of Five drafted the Declaration: John Adams, Benjamin Franklin, Thomas Jefferson, Robert Livingston, and Roger Sherman. The five knew from the start that they had to levy strong enough charges against the British king and Parliament to justify the drastic step of repudiating British rule.21 They took credit for American forbearance, and assigned blame primarily to the king. But they also blamed a parliament that failed to resist royal tyranny.

George Washington was away mustering the Continental Army in New York while his comrades were writing the Declaration. But as the successful rebels’ first president (1789–1797), he worked both the back streets and the great plazas skillfully. As he approached the end of his second term on 18 September 1796, Washington delivered a farewell address we still read today as a model for public credit and blame. During his term, Washington had overseen the consolidation of the federal government and the securing of U.S. borders. But he had also faced the formation of political parties, the outbreak of a great European war, and a major Pennsylvania insurrection—the Whiskey Rebellion—against the government’s fiscal authority. Echoes of all those events appear in the text of Washington’s address to his countrymen.

Washington set a modest tone for his taking of credit:

In the discharge of this trust, I will only say, that I have, with good intentions, contributed towards the organization and administration of the government the best exertions of which a very fallible judgment was capable. Not unconscious, in the outset, of the inferiority of my qualifications, experience in my own eyes, perhaps still more in the eyes of others, has strengthened the motives to diffidence of myself; and every day the increasing weight of years admonishes me more and more, that the shade of retirement is as necessary to me as it will be welcome. Satisfied that, if any circumstances have given peculiar value to my services, they were temporary, I have the consolation to believe, that, while choice and prudence invite me to quit the political scene, patriotism does not forbid it.22

Thus he rebuffed any attempt to make him king or president for life. John Adams stood by, ready to take over from him. Later in the address, Washington blamed without naming names. He warned against sectionalism, against advocates of involvement in foreign wars, against “faction”:

All obstructions to the execution of the Laws, all combinations and associations, under whatever plausible character, with the real design to direct, control, counteract, or awe the regular deliberation and action of the constituted authorities, are destructive of this fundamental principle [the duty of every individual to obey the established Government], and of fatal tendency. They serve to organize faction, to give it an artificial and extraordinary force; to put, in the place of the delegated will of the nation, the will of a party, often a small but artful and enterprising minority of the community; and, according to the alternative triumphs of different parties, to make the public administration the mirror of the ill-concerted and incongruous projects of faction, rather than the organ of consistent and wholesome plans digested by common counsels, and modified by mutual interests.23

Despite advocating a small government with a modest military establishment, Washington called for obedience to that government’s decisions, and blamed Americans who plotted against obedience.

More than two hundred years later, American politics still pivots on credit and blame. The al-Qaeda–coordinated attacks in New York and Washington, DC on September 11th, 2001 started an epidemic of credit and blame. As a New Yorker, I was not immune. At 6:50 am the next day I sent out a message to my electronic mailing list on contentious politics. Nothing profound: my message called for students of the subject to avoid hysteria and to look systematically at causes and remedies of the sorts of terror we had just witnessed. The message closed:

Those of us who study contentious politics should resist the temptation to concentrate on ideas of repression and retaliation, which demagogues will surely broadcast. We may be able to make a small contribution to explaining how such high levels of coordination emerge among damage-doers, and therefore how to reduce threats of violence to civilians in the United States and, especially, elsewhere.

A bit of blaming appeared in the reference to “demagogues.” But the message only assigned credit to my fellow New Yorkers, who had generally shown sangfroid and solidarity.

Three days later, I followed up the message with another. This one offered predictions concerning what we would eventually learn about the New York and Washington attacks. It included unconditional predictions, for example that all plotters would eventually turn out to have ties, direct or indirect, to Osama bin Laden, but not all to be directly connected, or even known, to each other. It then went on to contingent if-then predictions, which ran as follows:

  • Bombing the presumed headquarters of terrorist leaders will (a) shift the balance of power within networks of activists and (b) increase incentives of unbombed activists to prove their mettle.
  • If the United States, NATO, or the great powers insist that all countries choose sides (thus reconstituting a new sort of Cold War), backing that insistence with military and financial threats will increase incentives of excluded powers to align themselves with dissidents inside countries that have joined the U.S. side, and incentives of dissidents to accept aid from the excluded powers.
  • Most such alliances will form further alliances with merchants handling illegally traded drugs, arms, diamonds, lumber, oil, sexual services, and rubber.
  • In Russia, Uzbekistan, Lebanon, Turkey, Sudan, Nigeria, Serbia, Algeria, and a number of other religiously divided countries, outside support for dissident Muslim forces will increase, with increasing connection among Islamic oppositions across countries.
  • Bombing the presumed originator(s) of Tuesday’s attacks and forcing other countries to choose sides will therefore aggravate the very conditions American leaders will declare they are preventing.
  • If so, democracy (defined as relatively broad and equal citizenship, binding consultation of citizens, and protection from arbitrary actions by governmental agents) will decline across the world.

Although evidence on the connections with contraband trade and Muslim dissidents remains uncertain, none of these if-then predictions turned out flatly wrong. Of course, they missed some important points. In September 2001, for example, it never occurred to me that the 9/11 attacks would help justify an American invasion of Iraq. Considering that I made them in the shadow of 9/11, nevertheless, the predictions held up surprisingly well over the following years.

Most of the electronic responses to my posting that flooded in expressed support or offered friendly amendments to my predictions. A few, however, called me a paranoid subversive. About a year later, the White House issued a declaration that made my predictions look less paranoid than they might have seemed in the immediate aftermath of 9/11.

A document called the National Security Strategy (NSS), issued by President George W. Bush on 17 September 2002, claimed broad rights for the sole remaining superpower. It took credit for the victory of freedom and equality over “destructive totalitarian visions.” It blamed the “embittered few” for current threats to “our Nation, allies, and friends.”24 It described Afghanistan as “liberated,” Iraq and North Korea as “rogue states” in the process of acquiring weapons of mass destruction.25 Although President Bush had bracketed Iran with Iraq and North Korea in the “axis of evil” identified by his speech of 29 January 2002, the NSS blamed Iraq and North Korea especially for the world’s terrorist threats.

The NSS said that the era of state-to-state war had passed; terrorists had changed the terms of international relations. As disillusioned neoconservative Francis Fukuyama summed up:

What was revolutionary about the NSS was its expansion of traditional notions of preemption to include what amounted to preventive war. Preemption is usually understood to be an effort to break up an imminent military attack; preventive war is a military operation designed to head off a threat that is months or years away from materializing. The Bush administration argued that in an age of nuclear-armed terrorists, the very distinction between preemption and prevention was outmoded; the restrictive definition of the former needed to be broadened. The United States would periodically find it necessary to reach inside states and create political conditions that would prevent terrorism. It thereby rejected Westphalian notions of the need to respect state sovereignty and work with existing governments, tacitly accepting both the neoconservatives’ premise about the importance of regimes and the justifications for the humanitarian interventions undertaken during the 1990s.26

Six months before the U.S. attack on Iraq, the United States was declaring its right to prevent terrorism by outright military intervention. It blamed rogue states for their threat to peace, and claimed credit for the United States as the guarantor of world order. Like other American political centers, the White House was actively deploying credit and blame.

Fukuyama’s dissent from U.S. military policy identifies an important feature of credit and blame I’ve only hinted at so far. When a sharp us-them boundary separates blamer and blamed, the very actions for which A blames B are often actions for which B’s supporters give B credit. That occurs most obviously in the case of war, where killing that looks barbarous to one side looks heroic to the other. In the world of nationalist struggles, critics often point out that one person’s terrorist is another person’s freedom fighter. In the world of city administration, what one side calls urban renewal opponents often call real estate profiteering.

Us-them boundaries cut across much of politics. As a result, disputes over whether a given action deserves credit or blame figure regularly in political debate. In the case of 9/11, almost all Americans (including me) deplore the suicide bombers’ taking of innocent lives. But for Osama bin Laden’s supporters, it still counts as a telling blow against American imperialism. For them, it deserves credit, not blame.

CREDIT AND BLAME REVISITED

All these cases of political crediting and blaming identified relations between those who passed judgment and those who received judgment. Even my timid appeal to fellow students of contentious politics claimed a right to judge both western politicians and the enemies they were condemning. In every case, furthermore, the judges were crediting or blaming some specific agent (sometimes themselves) for a particular good or bad outcome, which meant assigning them both competence and responsibility for that outcome. They were fulfilling the relational conditions for credit and blame.

We therefore have our work here cut out for us. The work: clarifying the social processes by which people arrive at assignments of credit and blame. Let me repeat that this book concentrates on social processes, in which people interact with each other. Neuroscientists have been making great advances in describing, and even explaining, how individual nervous systems process cognition.27 I try hard to avoid descriptions and explanations that contradict what scientists are learning about how the human nervous system generates recognition of right and wrong behavior, but that is not the book’s subject. Instead, chapters to come concentrate on interpersonal transactions and relations, including the telling of stories about credit and blame.

The next chapter, then, goes farther than this introduction. It identifies the connections between justice, credit, and blame. It shows how people single out individuals and groups for approval or disapproval, and how they match appropriate rewards or punishments with the degree and character of their credit and blame. Chapters 3 and 4 take closer looks at credit, then at blame, considered separately. Chapter 5 closes the book by analyzing what happens when authorities (from prize committees to governments) start organizing memorials to victory, loss, and blame.
