“Man is evil, by nature man is a beast. People have to be educated from childhood, from kindergarten, that there should be no hatred.” - Marek Edelman.
The Purge movie franchise portrays a world where citizens of the United States get to rid themselves of all evil by surrendering to their most carnal desires within a 12-hour window during which all criminal activity is legalized. While it admittedly sounds counterintuitive, the idea is that if society allowed people to do whatever they wanted, crime included, for a short window of time, they would exhaust the evil within them and, as the movies portray, remain happy and peaceful for the rest of the year.
The low crime rate, vanishing poverty and a stronger sense of community are all things the movies portray as positives. This, however, implies an interesting underlying assumption about us humans: that, deep down, we are all evil, and that all we want to do is kill, murder and steal. Left to our own devices, that is exactly what we would do. Now, without taking the script of a dystopian movie too seriously, the question is still worth posing. Are humans inherently evil? On the face of it, you and I would certainly like to believe we are moral creatures. But if we do believe that, we must also ask how this morality came to be in the first place. Why is it in us?
Psychology and neuroscience both suggest that morality developed out of an evolutionary need. People who distinguish good from bad, and do so predictably, are much better at banding together and forming social bonds than those who do not. A partner who might sacrifice you at any moment for personal gain is not much of a partner. There is thus a selective advantage in, one, being moral, and two, being able to recognize morality in others. It is no surprise, then, that our brain has not just one but numerous regions that work together to produce our moral existence. Some areas, such as the medial prefrontal cortex, are heavily involved in understanding one's own emotions. Others, such as the posterior superior temporal sulcus, are key to understanding the feelings of others.
This idea that our brains evolved to be moral and to recognize morality is further reinforced by research on babies. Studies conducted at the Yale University Infant Cognition Center, also known as the Baby Lab, involved children under 24 months. They were shown a grey cat struggling to open a plastic box. Researchers showed the cat in two scenarios: one in which a bunny in a green t-shirt comes forward to help the cat open the box, and another in which a bunny in an orange t-shirt not only refuses to help but makes things worse by slamming the box shut. The babies were then shown both bunnies side by side and their reactions were monitored. Scientists observed whether the babies reached out to or stared longer than normal at one bunny, the inference being that they preferred it over the other. It is important to note here that the scientists assumed such responses to be positive.
Over 80 percent of the babies showed a preference for the good bunny, the one that helped, which in this case was the one in the green t-shirt. With a much younger group of 3-month-olds, this number surprisingly went even higher, to 87 percent. Now, I know what you may be thinking. What if babies are simply drawn to one color more than the other? Well, when researchers at the Baby Lab switched the colors, the results were still similar. While the confidence with which claims are made about babies' ideas of morality varies, the overall conclusion is that they generally seem to at least prefer nicer people, objects and animals.
Universal moral grammar, or UMG, is another emerging field of research, one that seeks to rigorously define moral knowledge. UMG wants to answer questions like how moral knowledge is acquired, how it is actualized in the brain, and so forth. One of the interesting aspects of this research is its focus on language. More specifically, its focus on the naturally evolving set of terms that make moral distinctions. For example, in English, we have words to describe something as permissible, obligatory or forbidden. These words did not simply come to be. They are manifestations of our need to express the subtleties of our morality. Pretty much all other languages display a similar phenomenon. The fact that these subtleties are felt and expressed across cultures is evidence of the underlying morality that we all possess. Or do we?
If you’re watching this video right now, then you may have heard of the 1971 Stanford prison experiment. If you haven’t, it’s the infamous two-week prison simulation study that had to be shut down after six days. It was designed to observe how paid participants placed in positions of power — the guards — behave when interacting with people under their jurisdiction — the prisoners. In a conversation with one of the guards, a prisoner said the study had harmed him because of what it revealed. “Just to think about how people can be like that. It let me in on some knowledge that I’ve never experienced first-hand,” he said. What he was talking about was how, after a first day of relatively courteous interactions, the atmosphere within the makeshift prison turned abusive. The prisoners, known not by their names but by numbers, felt dehumanized in the already dehumanizing atmosphere of the jail cells.
The guards, meanwhile, were equipped with aviators that removed any eye contact, which only further distanced them from any sort of human connection to the prisoners. Although the study has come under scrutiny in recent years, it is still remarkable to see how supposedly ordinary people turned tyrannical in less than 24 hours. One of the guards in question was Dave Eshelman, who in an interview nearly five decades after the experiment said that his actions were a performance of what he assumed a guard was expected to do, rather than something he would have done organically on his own. This is known as demand characteristics, often cited as a source of bias in psychology research. It is the idea that research participants can feel the need to be a “good participant” once they know, or assume, the hypothesis being tested. In this instance, the guards knew they had to be guards and were therefore likely to be harsher: not because they were bad people, but because they thought it was expected of them.
Another of the biggest criticisms of the study is its selection bias. The ad published to recruit participants said up front that the study was about prison life. That attracts people who may be more interested in dominating social situations, even if purely out of curiosity. Eshelman admitted as much, saying he was an abusive guard because each day he was curious to try something that would up the ante. Hidden behind the anonymity of their aviators, however, Eshelman, like many others in the study, questioned whether there was a point when they stopped acting and started living the role.
Perhaps deep down this was who they were, and society’s rules and moral codes were the only things keeping them in check. And it wasn’t the only analysis of human behavior that took this approach. Stanford researchers may have been influenced by the Milgram experiments of the 1960s. In this case, psychologists wanted to analyze whether humans could be compelled to do things that are clearly against their conscience out of pure obedience.
This followed the revelations of the Nuremberg trials and the war crimes committed during the Holocaust. At his 1961 trial, Adolf Eichmann, one of the chief organizers of the Holocaust, maintained that he was only carrying out the orders of his superiors. The research sought to discover whether such defendants’ actions were purely a product of obedience, or whether, deep down, the men were truly evil.
Participants in the study were told they were being given either the role of teacher or learner. The learner went to a separate room, and the teacher began asking them questions. If any of the answers were incorrect, the teacher was told to administer an electric shock to the other participant through electrodes attached to their arms. As the experiment progressed, the teacher was ordered to raise the voltage for every failed question. What the teachers didn’t know was that the learner was always the same confederate, trained by Milgram to react with screams of pain to shocks that were never actually delivered.
The purpose of this study was to see just how far someone would go in the face of authority to shock another person while listening to them suffering and pounding on the wall, pleading for the experiment to be stopped. Sixty-five percent of the participants went all the way to the machine’s maximum of 450 volts, while every single one of them continued to at least the intermediate level of 300 volts. Milgram summed it up by saying, “The extreme willingness of adults to go to almost any lengths on the command of an authority constitutes the chief finding of the study and the fact most urgently demanding explanation.”
Recreations of the Milgram study, albeit with certain modifications, suggest that today’s population is no less obedient than that of the 1960s. If anything, contrary to what you might think, we are more obedient. Apart from the reenactments of the Milgram study, other research, such as the Asch line conformity experiments and the Hofling hospital study, has essentially come to the same conclusion. In the face of authority, we push away our conscience with alarming ease. What this tells us about humans being inherently good or bad is that perhaps we are neither. Instead, it is either coercion or the expectation of behavior that compels us to act in a certain way. And while that might be a rather hopeless takeaway in the grand scheme of things, it still leaves room for some optimism: we are not hardwired for evil.
It shows us that our morality, however innate it may be, is only as good as our surroundings. We are all hardwired with the ability to distinguish between good and bad, but what we end up doing falls along a hitherto undefined line between individual agency and circumstance. Footage from the Milgram study shows one participant who, while eventually agreeing to raise the voltage, displayed visible concern for the person on the other end. He was doing something he did not want to do. That concern did not stop him from going through with the act, but as a glimmer of hope, we can at least take comfort in the fact that he did not want to do it. Ninety-five percent of the nurses in the Hofling hospital study, who were instructed to administer an unsafe dose of a drug, complied with the order. But a control group that was simply asked to discuss the scenario, instead of being flat-out instructed, rejected the idea 94 percent of the time.
It is not that the nurses were incapable of seeing what was right or wrong. The environment in which they worked, coupled with the power imbalance of a doctor giving the orders, simply rendered their conscience irrelevant in that moment. Every participant across all the experiments mentioned who did something wrong did so because they had either convinced themselves, or been convinced by others, that for some reason what they were doing was for the greater good. It is, therefore, not a question of whether humans are good or evil. We likely evolved to be good. But that is almost beside the point. What is far more important is that we recognize the circumstances that lead to evil.
Fritz Haber thought that he was “shortening the war” and “saving lives” when he created the chemical weapons whose successors would eventually be used to commit the Holocaust. War criminals from Unit 731 abducted people to run inhumane experiments on them, reasoning that their victims would be killed in the war anyway, so they might as well be used in the name of science. And history is filled with examples of people justifying heinous acts using three very famous words: “I was ordered.” In the end, whether humans are inherently good or inherently evil, we may never know. But what we do know is that somehow, we all think of ourselves as heroes in our own stories and make up excuses for when we’re clearly not.