Algorithms are Destroying Society

In 2013, Eric Loomis was pulled over by the police for driving a car that had been used in a shooting. A shooting, mind you, that he wasn’t involved in at all. After getting arrested and taken to court, he pleaded guilty to attempting to flee an officer and no contest to operating a vehicle without the owner’s permission. His crimes did not mandate prison time. Yet, he was given an 11-year sentence, with six of those years to be served behind bars and the remaining five under extended supervision.

 

Not because of the decision of a judge or jury of his peers, but because an algorithm said so. The judge in charge of Mr. Loomis’ case determined that he had a high risk of recidivism through the use of the Correctional Offender Management Profiling for Alternative Sanctions risk-assessment algorithm, or COMPAS. Without questioning the algorithm’s decision, the judge denied Loomis probation and incarcerated him for a crime that would usually not carry any time at all.


What has society become if we can leave a person’s fate in the hands of an algorithm? When we take the recommendation of a machine as truth even when it seems so unreasonable and inhumane? Even more disturbing is the fact that the general public doesn't even know how COMPAS works. The company behind it has refused to disclose how it makes recommendations and is not obliged to by any existing law. Yet, we’re all supposed to blindly trust and adhere to everything it says.


Reading this story, you can’t help but ask a few important questions: how much do algorithms control our lives, and ultimately, can we trust them? It’s been roughly 10 years since Eric Loomis’s sentencing, and algorithms have since penetrated far deeper into our daily lives. From the time you wake up to the time you go to bed, you’re constantly interacting with tens, maybe even hundreds, of algorithms.


Let’s say you wake up, tap open your screen and do a quick search for a place near you to eat breakfast. In this one act, you’re triggering Google’s complex algorithm that matches your keywords to websites and blog posts to show you answers that are most relevant to you. When you click on a website, an algorithm is used to serve you ads on the side of the page. Those ads might be products you’ve searched for before, stores near your location or eerily enough, something you’ve only spoken to someone about. 


You then try to message a friend to join you for your meal. When you open any social media app today, your feed no longer simply displays the most recent posts from the people you follow. Instead, what you see is best exemplified by TikTok’s For You page: complex mathematical models behind the scenes decide which posts are most relevant to you based on your viewing history on the platform.
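
To make that concrete, here’s a minimal sketch of what an engagement-ranked feed might look like under the hood. Everything here is invented for illustration; real platforms use large machine-learned models rather than hand-tuned weights, but the shape of the logic is the same: score every post by predicted engagement, then sort.

```python
# Toy sketch of engagement-based feed ranking (illustrative only; the
# feature names and weights are invented, not any platform's real code).

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_watch_seconds: float  # model's guess, based on your view history
    predicted_like_prob: float      # model's guess that you'll tap "like"

def relevance_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Combine per-user topic affinity with predicted engagement."""
    affinity = topic_affinity.get(post.topic, 0.1)  # default for unseen topics
    return affinity * (post.predicted_watch_seconds + 30 * post.predicted_like_prob)

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    # Highest predicted engagement first: recency is no longer the sort key.
    return sorted(posts, key=lambda p: relevance_score(p, topic_affinity), reverse=True)

# Example: a user whose view history skews heavily toward one topic.
affinity = {"fitness": 0.9, "news": 0.2}
feed = rank_feed(
    [Post("a", "news", 40, 0.05), Post("b", "fitness", 25, 0.30)],
    affinity,
)
print([p.post_id for p in feed])  # ['b', 'a'] -- the fitness post wins anyway
```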


YouTube, Twitter, Facebook and, most notoriously, TikTok all use these recommendation systems to get you to interact with content their machines think is right for you. And it’s not just social media. Netflix emails you recommendations of movies to watch based on what you’ve already seen, and Amazon suggests products based on what you’ve previously bought. And probably the most sinister of all, Tinder recommends the person you’re supposed to spend the rest of your life with, or at least the night.


These might seem like trivial matters, but it’s more than that. Algorithms are also used to determine who needs more healthcare, and, as we saw with Eric Loomis, when you have your day in court, a computer program can decide whether you spend the next decade of your life behind bars for a crime that usually doesn’t carry any time. One of the most dangerous things about algorithms is the data used to power them, because the more data you feed into an algorithm, the better its results. And where do companies get this data? From users like you and me.

 

I’m not saying that all algorithms are bad and we should get rid of them. Heck, an algorithm is probably the reason you’re watching this video in the first place. I’m saying we as a society need to make some changes to the way we currently interact with and use these systems. One of the scariest things about algorithms is that they’re built and altered in a black box with little oversight. The engineers behind them determine what we see and don’t see. They classify, sort, order and rank. And we don’t get to know how or why. Even the government doesn’t get to know how and why, and if it did, would it understand?

 

The engineers themselves often don’t know why an algorithm behaves the way it does. They use AI and machine learning, which can make the outcomes hard to predict. The systems become a mystery even to their makers. When companies like Google or Facebook are challenged about their platforms after something terrible happens, they hide behind the mythos of the algorithm. They’re cold, unbiased systems, they suggest. They’re rational. To err is human, not machine, they claim.

 

This notion of algorithms is what makes them potentially dangerous. We think of them as pillars of objectivity, incapable of the kind of biases that corrupt human society. But are they genuinely unbiased? Are they pure instruments of rationality? As much as big tech companies would like you to believe they are, the sad truth is they are not. When engineers choose to classify and sort, they’re using pre-existing classifications that are already filled with bias. And their methods of sorting reinforce biases that can have real negative consequences.

 

In 2019, an algorithm used on more than 200 million patients in U.S. hospitals to determine who would need more care was found to discriminate against Black patients, even though race was not included in its criteria. The machine determined they required less care than white patients. How did this happen if race wasn’t even an input, you might ask?

 

Well, while race wasn’t directly in the equation, previous healthcare expenses were a determining factor in deciding whether someone would need more care. And because less has historically been spent on Black patients’ healthcare, the algorithm concluded that they required less care, an incorrect blanket conclusion for situations that should have received case-by-case evaluations.
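
To see how a proxy like that goes wrong, here’s a toy illustration (my own sketch, not the actual hospital model): when past spending stands in for health need, two equally sick patients get scored very differently.

```python
# Toy illustration of proxy bias. The model is trained to predict *cost*,
# and cost is then treated as if it measured *need*. All numbers invented.

# Two hypothetical patients with the same health burden (same number of
# chronic conditions) but different historical spending, reflecting
# unequal access to care rather than unequal sickness.
patients = [
    {"name": "patient_a", "chronic_conditions": 4, "past_spending": 12000},
    {"name": "patient_b", "chronic_conditions": 4, "past_spending": 7000},
]

def predicted_need(patient) -> float:
    # The flawed shortcut: "need" is estimated from dollars spent, so
    # lower historical spending reads as lower need. Race never appears,
    # yet the bias rides in on the proxy.
    return patient["past_spending"] / 1000

for p in patients:
    print(p["name"], "estimated need:", predicted_need(p))
# patient_a -> 12.0, patient_b -> 7.0: equal sickness, unequal care.
```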

 

Although the racial bias was unintended, it still occurred as a result of the engineers’ design choices. It’s because of issues like these that we cannot hide behind the myth of the infallible machine. Biases like these will exist in machines as long as humans are the ones building them. And there is one bias that exists in almost every algorithm we use today, with far wider-reaching consequences.

 

Meta, Twitter, Google, Amazon, Netflix, Tinder. Most tech companies and the platforms they offer to you and me as services design their algorithms to maximize one thing, and one thing alone: profit. These platforms generate revenue primarily by selling ads. And to generate more ad revenue, they try to keep you on their platforms longer, because the longer you’re there, the more ads you will see and the more money they make.

 

Take YouTube, for example. There are three main metrics that make any video successful on the platform: click-through rate, watch time and session time. So all YouTube cares about is: can you get people to start watching your video, and can you keep them watching for as long as possible so it can serve them more ads? For the most part, this works as it’s supposed to, and people get served content they enjoy but would never have found on their own. As with everything in life, though, there are downsides.
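
YouTube’s actual ranking system is proprietary, but even a toy score built from just those three metrics shows the incentive at work. The weights below are invented for illustration:

```python
# Toy ranking score built only from the three metrics named above.
# YouTube's real system is proprietary; these weights are made up.

def video_score(click_through_rate: float,
                avg_watch_minutes: float,
                avg_session_minutes: float) -> float:
    """Reward videos that get clicked, hold attention, and keep the
    viewer on the platform afterward (more time means more ad slots)."""
    return (100 * click_through_rate        # does the thumbnail get the click?
            + 2.0 * avg_watch_minutes       # does the video hold attention?
            + 1.0 * avg_session_minutes)    # does the viewer keep watching?

# The same underlying video, packaged honestly vs. as clickbait:
print(video_score(0.04, 6.0, 12.0))  # honest packaging    -> 28.0
print(video_score(0.09, 5.0, 14.0))  # clickbait packaging -> 33.0
```

Notice that the clickbait version loses some watch time and still wins, which is exactly the loophole the next part is about.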

 

People have learned to game the system by using clickbait to lure viewers in, and then pushing conspiracy theories that keep people glued to their screens, whether the information is factual or not. YouTube’s algorithm has also been accused of having a radicalizing effect on its viewers. Moderate content can lead to recommendations of more extreme content, which leads people down the notorious “rabbit hole”. You can start by watching videos about jogging, and YouTube will keep recommending videos that push you slightly further, until you wake up one day and you’re watching videos about running an ultra-marathon.

 

Facebook’s algorithm shows you more content from friends whose posts you have liked or read in the past. This process slowly funnels you into a bubble where you’re mostly reading the same opinions you already hold, reinforcing them in your mind. The goal of this approach is, of course, to keep you on the platform longer with views you agree with. The consequence, though, is that many harmful beliefs are cemented into the heads of users instead of being challenged. The more you think about the algorithms of social media, the more they start to seem like programs for creating social problems for the sake of profit.

 

So if that’s the case, are all algorithms just evil piles of code that are determined to doom us all? Maybe, but maybe not. They do have extraordinary benefits to offer when used correctly. A dataset of 678 nuns from the Nun Study, a research project started in 1986 on the development of dementia and Alzheimer’s disease, showed something very peculiar. Researchers tried to find out whether they could spot any patterns in the data suggesting a relationship between something in a person’s early life and the onset of these diseases later on, but to no avail.

 

The team also had access to the letters the nuns had written decades prior, when they were entering the sisterhood at around ages 19 and 20. From these letters, an algorithm was able to detect with incredible accuracy which nuns would go on to develop dementia in their elderly years. This is what algorithms are great at: comparing datasets and picking out tiny patterns that humans are likely to miss. They’re sensitive to small variations in data and can find patterns that lead to reliable predictions of possible outcomes.
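
As a rough sketch of the general technique (not the study’s actual method, which measured richer linguistic properties, such as how densely ideas are packed into each sentence), you could reduce each letter to a handful of numeric features and let a simple classifier hunt for the pattern:

```python
# Sketch of the general approach: convert each letter into numeric
# features, then fit a classifier. The letters, features and outcomes
# below are invented for illustration.

from sklearn.linear_model import LogisticRegression

def letter_features(text: str) -> list[float]:
    words = text.split()
    sentences = max(text.count("."), 1)
    words_per_sentence = len(words) / sentences           # longer sentences...
    vocab_variety = len(set(words)) / max(len(words), 1)  # ...and richer wording
    return [words_per_sentence, vocab_variety]

# Invented training data: early-life letters paired with outcomes known
# only decades later.
letters = [
    "The trees were full and green. We sang and we prayed and we laughed together.",
    "I was born. I went to school. I liked school.",
]
developed_dementia = [0, 1]

model = LogisticRegression()
model.fit([letter_features(t) for t in letters], developed_dementia)

# Score a new letter the same way the training letters were scored.
risk = model.predict_proba([letter_features("We walked by the river at dawn.")])[0][1]
print(f"estimated risk: {risk:.2f}")
```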

 

Today, algorithms are used to detect the likelihood of getting breast cancer and to build better models for tackling climate change. But the machine is not great on its own. Every potential positive here only works with a human behind it. Algorithms can act as the first layer for screening breast cancer, but a human has to act as the necessary second layer that verifies the results. Using an algorithm to determine an appropriate jail sentence might one day make sense, but only if there’s a human deciding whether the generated output is sensible or not.

 

One of the main problems with the Eric Loomis case is that the judge didn’t question the algorithm's recommendation. He simply accepted the supposed objectivity of the machine and sent a man to prison for a crime that didn’t warrant it. As it stands now, we just seem to be part of an enormous social experiment being run by tech gurus. And every year or so, another experiment is added to the mix with its own unique set of social consequences. More recently, we’re discovering what a rapid stream of bite-size videos does to teenagers, or what a completely user-generated game does to tweens.

 

So far, this video has been pretty hard on the big tech companies. But I think it’s also important to acknowledge that they are trying to address some of these issues with their algorithms. YouTube, for example, has changed its algorithm to include quality and authority as measures in determining whether a video is recommended. Facebook has limited its targeting options to try to avoid another Cambridge Analytica scandal, where user data was harvested and used without consent for political purposes.

 

Are these adjustments helping? Yes, but not as much as necessary. More telling is the fact that these efforts point to two things. One, human intervention in algorithms is not only necessary but needs a much stronger presence. Two, tinkering with the algorithm is probably not going to resolve the consequences of its most significant bias: profit-seeking. Keeping people on a platform is usually easiest with content that sparks the most outrage. That’s not the whole story, of course. There is great content on YouTube, and there are earnest viewers, like you watching this video right now.

 

But for every creator seeking to share legitimate information, there seem to be several others blatantly exploiting the algorithm for a quick buck. How can we take these platforms back from them? The sad truth is, we can’t. The algorithms themselves need to change. They need to put human welfare above profits. We need to stop designing machines that take advantage of our psychological weaknesses. To make that world possible, we need to be more critical of the algorithm. We need to dismantle the notion that the algorithm is all-knowing, objective and rational. The black boxes need to open up, and our blind trust in these systems needs to be challenged at every turn.

 

To paraphrase Tristan Harris, co-founder of the Center for Humane Technology: we’re all looking out for the moment when technology will overpower human strength and intelligence, but there’s a much earlier moment, when technology overwhelms human weaknesses. That point is being crossed right now, and it is reducing our attention spans, ruining our relationships and destroying our communities. It is downgrading humans.