Stanley Milgram

Rethinking the Classic ‘Obedience’ Studies

Pacific Standard
Tom Jacobs

Stanley Milgram’s 1961 obedience experiments and the 1971 Stanford Prison Experiment are legendary. But new research adds new wrinkles to our understanding of allegiance and evil.

November 25, 2012

They are among the most famous of all psychological studies, and together they paint a dark portrait of human nature. Widely disseminated in the media, they spread the belief that people are prone to blindly follow authority figures—and will quickly become cruel and abusive when placed in positions of power.

It’s hard to overstate the impact of Stanley Milgram’s obedience experiments of 1961, or the Stanford Prison Experiment of 1971. Yet in recent years, the conclusions derived from those studies have been, if not debunked, radically reinterpreted.

A new perspective—one that views human nature in a more nuanced light—is offered by psychologists Alex Haslam of the University of Queensland, Australia, and Stephen Reicher of the University of St. Andrews in Scotland.

In an essay published in the open-access journal PLoS Biology, they argue that people will indeed comply with the questionable demands of authority figures—but only if they strongly identify with those figures, and buy into the rightness of their cause.

In other words, we’re not unthinking automatons. Nor are we monsters waiting for permission for our dark sides to be unleashed. However, we are more susceptible to psychological manipulation than we may realize.

In Milgram’s study, members of the general public were placed in the role of “teacher” and told that a “learner” was in a nearby room. Each time the “learner” failed to correctly recall a word as part of a memory experiment, the “teacher” was told to administer an electrical shock.

As the “learner” kept making mistakes, the “teacher” was ordered to give him stronger and stronger jolts of electricity. If a participant hesitated, the experimenter—an authority figure wearing a white coat—instructed him to continue.

Somewhat amazingly, most people did so: 65 percent of participants continued to give stronger and stronger shocks until the experiment ended with the “learner” apparently unconscious. (The torture was entirely fictional; no actual shocks were administered.)

To a world still reeling from the question of why so many Germans obeyed orders and carried out Nazi atrocities, here was a clear answer: We are predisposed to obey authority figures.

The Stanford Prison Experiment, conducted a decade later, was equally unnerving. Students were randomly assigned to assume the role of either prisoner or guard in a “prison” set up in the university’s psychology department. As Haslam and Reicher note, “such was the abuse meted out to the prisoners by the guards that the study had to be terminated after just six days.”

Lead author Philip Zimbardo, who assumed the role of “prison superintendent” with a level of zeal he later found frightening, concluded that brutality was “a natural consequence of being in the uniform of a guard and asserting the power inherent in that role.”

So is all this proof of the “banality of evil,” to use political theorist Hannah Arendt’s memorable phrase? Not really, argue Haslam and Reicher. They point to their own work on the BBC Prison Study, which mimicked the seminal Stanford study.

They found that participants “did not conform automatically to their assigned role” as prisoner or guard. Rather, there was a period of resistance, which ultimately gave way to a “draconian” new hierarchy. Before becoming brutal, the participants needed time to assume their new identities, and internalize their role in the system.

Once they did so, “the hallmark of the tyrannical regime was not conformity, but creative leadership and engaged followership within a group of true believers,” they write. “This analysis mirrors recent conclusions about the Nazi tyranny.”

It also sounds familiar to anyone who has studied the rise of semi-autonomous terror cells in recent decades. Suicide bombers don’t give up their lives out of unthinking obedience to some religious or political figure; rather, they have gradually melded their identities with that of the group they’re in, and the cause it represents.

Similarly, the researchers argue, a close look at Milgram’s study suggests it really isn’t about blind obedience at all. Transcripts of the sessions show that participants were often torn by the instruction to administer stronger shocks. Direct orders to do so were far less effective than entreaties that they needed to continue for the sake of the study.

These reluctant sadists kept “torturing” in response to appeals that they were doing important scientific work—work that would ultimately benefit mankind. Looked at in this way, it wasn’t some inherent evil or conformism that drove them forward, but rather a misplaced sense of idealism.

This interpretation is still quite unsettling, of course. If a person has fully bought into a certain world view and believes he or she is acting on the side of right, this conviction “makes them work energetically and creatively to ensure its success,” Haslam and Reicher write.

So in the researchers’ view, the lesson of these two still-important studies isn’t about conformity or even cruelty per se. Rather, they reveal a dangerous two-step process, in which authority figures “advocate oppression of others,” and underlings, due in part to their own psychological makeup and personal histories, “identify with those authorities … who promote vicious acts as virtuous.”

So we may not be inherently evil, but it appears many of us can be enticed into believing that a heinous act is, in fact, good and necessary. Perhaps the real lesson of these startling experiments is the importance of learning how to think critically.

The most effective antidote to evil may be rigorous skepticism.

Evil, part 4: the social dimension

Clare Carlisle

Does contemporary society give rise to conditions more conducive to evil than in the past?

November 5, 2012

So far in this series I’ve considered evil as if it were an individual matter – a question of personal virtue, or the lack of it. In emphasising the relationship between sin and freedom, Christian philosophers such as Augustine seem to assume that if we look hard enough at the human condition we will gain insight into evil. This attitude implies that evil has nothing to do with history or culture – as if the fall is the only historical event that matters, at least as far as evil is concerned.

In the 20th century, a series of scientific experiments on the psychology of evil told a very different story. Among the most infamous of these are the experiments conducted by Stanley Milgram at Yale in the early 1960s and by Philip Zimbardo at Stanford in 1971. Both Milgram and Zimbardo found that, under certain conditions, well-educated and apparently ordinary people were capable of immense cruelty. Under the instructions of an authority figure, Milgram’s participants were prepared to administer painful electric shocks as a penalty for poor memory: two-thirds of them increased the voltage to apparently lethal levels as their “subjects” cried in agony. These results demonstrated how dangerous and immoral obedience can be. In his experiment, Zimbardo created a prison environment in the psychology department at Stanford, assigning roles of guard and prisoner to his group of undergraduates. Within a few days guards were treating prisoners with such cruelty and contempt that the experiment had to be terminated early.

Reflecting on his Stanford prison experiment in 2004, Zimbardo wrote eloquently about the conditions that make good people do evil things. The prison, he suggested, is an institution set apart from normal society in which brutality can be legitimised. Wearing uniforms and sunglasses, identifying prisoners by numbers and guards by official titles and removing clocks and blocking natural light all helped to dehumanise and deindividualise the participants. In this “totally authoritarian situation”, says Zimbardo, most of the guards became sadistic, while many of the prisoners “showed signs of emotional breakdown”. Perhaps most interestingly, Zimbardo found that he himself, in the role of prison superintendent, rapidly underwent a transformation: “I began to talk, walk and act like a rigid institutional authority figure more concerned about the security of ‘my prison’ than the needs of the young men entrusted to my care as a psychological researcher.”

Although Zimbardo insists that “there were no lasting negative consequences of this powerful experience”, his conclusions raise ethical questions about scientific experimentation itself. Does the laboratory, like the prison, provide a special kind of environment in which pain can be inflicted with approval? Do the white coats and the impersonal manner of recording results dehumanise both scientists and their subjects?

These questions point to a larger philosophical issue. Does contemporary society give rise to conditions more conducive to evil than in the past? Do science and technology, in particular, dehumanise us? Modern technology has certainly created forms of communication that allow people to remain more safely anonymous. Take the internet, for example. In recent years the malevolent online behaviour of internet trolls and vitriolic commentators, hiding behind their pseudonyms, has become a much-discussed cultural phenomenon. Maybe it’s quite natural that we have a delicious taste of freedom and power when given the opportunity to go undercover – like Stevenson’s Jekyll-turned-Hyde as he runs gleefully through the night to the wrong side of town, stamping on children as he goes. But in such circumstances are we really in control? Milgram’s electrocutors thought they were in control, and so did Philip Zimbardo. It turned out, of course, that they too were part of the experiment.

As usual, Plato has something to contribute to this debate. In the Republic Socrates’ pupil Glaucon recounts the story of a shepherd, Gyges, who fell into the earth during an earthquake and found a ring that made him invisible. “Having made this discovery,” says Glaucon, “he managed to get himself included in the party that was to report to the king, and when he arrived he seduced the queen and with her help attacked and murdered the king and seized the throne.”

Plato uses this story to depict the prevailing immorality within his own Athenian society – a society which had, after all, sentenced to death its wisest and most virtuous citizen. Plato suggests that his contemporaries regard hypocrisy and deceit as the surest route to happiness, since they seek all the benefits of a reputation for virtue, or “justice”, while promoting their own interest by vice, or “injustice”, wherever possible. In the Republic he argues, through the voice of Socrates, that this view is not only morally wrong but misguided, since true happiness and freedom can only come from living virtuously.

The story of Gyges’s ring seems to suggest that evil is simply a fact of human nature. When anonymity releases us from responsibility for our actions, we will gladly abandon morality and harm anyone who obstructs our pursuit of what we think will make us happy. In this way, we might point to Gyges in arguing that there is nothing particularly modern about evil. On the other hand, though, Plato had to resort to a myth, and a magic ring, to illustrate the conditions under which our tendency to evil manifests itself. In our own time, technology has worked its magic, and the fantasy of invisibility has become an everyday reality.