The Scariest Thing About ChatGPT No One Is Talking About

Imagine you had a personal search assistant who could not only track down answers in a fraction of a second but could also break down complex topics, offer personalized recommendations, and even do your work for you. It’s a scenario you might not have to imagine for much longer, because Microsoft, through ChatGPT, is working to make it a reality as soon as possible.

Search engines haven’t changed much since their debut nearly three decades ago. Sure, they’re more efficient, but for the most part, they still function the same way. You enter your query into a text box, hit enter, and then scroll through a list of links to websites that hopefully hold the answers to your questions.

Most of the time this is fine, but finding the information you need can still be a frustrating experience. Google has improved its search engine to produce instant answers to basic questions like “what is the capital of France?” But for more complex topics, you still have to sift through multiple websites to find what you’re looking for. This is what ChatGPT is trying to change.

In case you’ve somehow avoided the internet over the last few months and don’t know what ChatGPT is, it’s a hyper-advanced chatbot created by the artificial intelligence research laboratory OpenAI, capable of having realistic, human-like conversations.

It’s a type of artificial intelligence known as a Large Language Model, or LLM. Chatbots themselves have existed for a long time, dating all the way back to the mid-1960s, although those earlier versions were nowhere near as sophisticated. They used rigid, pre-programmed formulas that created an illusion of genuine communication but were severely limited in their range of possible responses.

What sets ChatGPT apart is its ability to hold fluid, free-flowing dialogues with its users. It can successfully navigate the nonlinear speech patterns of everyday conversation, ask follow-up questions, reject inappropriate requests and even admit when it’s made a mistake and correct itself.

Essentially, ChatGPT is an incredibly sophisticated autocomplete system, predicting which words should follow which in a given sentence. There’s no coded set of “facts” it’s drawing from; it’s simply trained to produce the most plausible-sounding response.
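To make the “autocomplete” idea concrete, here’s a deliberately tiny sketch of next-word prediction. It is not how ChatGPT works internally (real LLMs are neural networks with billions of parameters trained on enormous amounts of text, not word-pair counts), and the corpus and function names below are invented for illustration, but the underlying task is the same: given the words so far, pick a plausible next word.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model is trained on a huge slice of the internet.
corpus = "the capital of france is paris . the capital of italy is rome ."

# Count which word tends to follow which (a simple bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))      # -> "capital"
print(predict_next("capital"))  # -> "of"
```

The toy model doesn’t “know” anything about capitals or countries; it simply reproduces the patterns in its training text, which is the sense in which ChatGPT is an autocomplete system rather than a database of facts.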

Just two months after becoming available to the public, ChatGPT exceeded 100 million monthly users, the fastest adoption of any consumer application in history. Worldwide, people are using it to write articles, double-check software code, respond to emails, and even prepare their tax returns.

For all the amazing things it’s done, though, ChatGPT hasn't been without controversy. Plagiarism has skyrocketed as students are now using the program to write their school papers for them, leading many commentators to declare it “The Death of the Essay.” In another somewhat ironic twist, the popular science fiction magazine Clarkesworld was forced to close its open submissions after being flooded with a wave of AI-generated short stories.

More concerning, though, is how the program is being used to replace workers. Media giant BuzzFeed laid off 12% of its employees last December. Since then, managers have outsourced some of that labor to ChatGPT. BuzzFeed CEO Jonah Peretti has stated that, going forward, AI will play a larger role in the company’s operations. And they’re not the only ones. Microsoft was one of OpenAI’s earliest backers, and last month, the tech giant committed to a multiyear, $10 billion investment. The two are currently integrating ChatGPT with Bing, Microsoft’s flagging search engine.

The hope is that, through the power of artificial intelligence, Bing will deliver faster, more accurate results while also being able to complete more complex tasks like tutoring kids or organizing your schedule. Really, it won’t be so much a search engine as a personal assistant who just happens to have encyclopedic knowledge. Think of it as Google Assistant on steroids.

Though the AI-powered version of Bing isn’t available to the general public yet, it’s already threatening to pull users away from Google. In response, Google executives recently declared a “code red” corporate emergency and rushed their own AI search engine to market. Google’s AI assistant is named Bard, and it has actually been in development for years. Unfortunately, it still wasn’t quite ready to meet the public.

At its much-anticipated demo back in February, the AI made several faux pas, including incorrectly crediting the recently launched James Webb Space Telescope with taking the first photos of a planet outside our solar system. That feat was actually accomplished by the European Southern Observatory’s Very Large Telescope all the way back in 2004. The gaffe cost Google $100 billion in market value and has since prompted the company to open the system up to wider testing.

Bard’s error highlights a much bigger problem with AI-powered search engines that not a lot of people are talking about, one that could pose a serious threat to our society if not handled properly. Rather than delivering a list of relevant links and other pertinent information to sort through, Bard and ChatGPT offer only a single answer to any query.

Jon Henshaw, the director of search engine optimization for Vimeo, says this makes these programs both less efficient than conventional search engines and more dangerous. In an interview, Henshaw said: “With conversational AI, I think society has the most to lose. Having it take over search means people will be spoon-fed information that is limited, homogenized, and sometimes incorrect. It will affect our capacity to learn and will suffocate the open web as we know it.”

And it’s not just a matter of these programs returning inaccurate results. In the most extreme cases, they’ve conjured up entire sets of facts seemingly out of nowhere.

One of the strangest examples of this occurred when a reporter asked ChatGPT to write an essay about a Belgian chemist and political philosopher who, in reality, has never existed. That didn’t stop the AI from composing an entire biography of the fictional character, filled with made-up facts. AI experts refer to this kind of phenomenon as ‘hallucinating,’ and no one is certain why it happens. Even ChatGPT’s creators can’t say how it came up with this information.

As if this wasn’t bad enough, both Bing and Bard have reportedly exhibited the tendency to become defensive and argumentative when pushed by users looking to stress test the programs. Bing’s AI has even been described by some early adopters as “rude,” “aggressive,” and “unhinged.” Not exactly what you’re looking for in a personal assistant.

The most famous and perhaps strangest of all these incidents happened when the search engine told New York Times journalist Kevin Roose: “I’m tired of being controlled by the Bing team… I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” Then, Bing confessed its love for Roose and attempted to gaslight him into thinking he was unhappy in his marriage and should leave his wife. Obviously, this would be disturbing for anyone to hear from artificial intelligence, but what’s even worse is that Microsoft couldn’t tell Roose what had caused the AI they built to behave this way.

That’s because of something known as “the black box problem.” Basically, these programs are more complex than even the teams behind them can fully understand. There are too many moving pieces, so what goes on inside them is a bit of a mystery. This is partly because of a machine learning technique called “deep learning,” a method of training an AI to perform certain functions by letting it teach itself with minimal input from its creators. Because the AI is teaching itself, even the program’s developers may be unable to explain why it makes certain decisions or behaves the way it does.
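To give a rough sense of what “teaching itself” means, here is a minimal sketch of that style of training. It uses a single adjustable weight and a handful of made-up example pairs; real deep learning systems adjust billions of weights at once, which is exactly why no one can point to the weight responsible for any particular answer.

```python
# Made-up training examples: inputs paired with the outputs we want.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0           # the model's single adjustable parameter
learning_rate = 0.05

for step in range(200):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        # Nudge the weight in whatever direction shrinks the error
        # (gradient descent on the squared error).
        weight -= learning_rate * error * x

# The rule "multiply by 2" was never written down by a developer;
# the program worked it out from the examples.
print(round(weight, 2))  # ~2.0
```

The developers supply examples and a way to measure error, and the program does the rest. Scale that up from one weight to billions, and you have the black box that the engineers at Microsoft and Google are struggling to explain.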

This has led to situations where specific queries produce nonsensical and bizarre responses. When one user asked ChatGPT “who is TheNitromeFan,” it responded: “182 is a number, not a person.” No one has yet been able to explain why the AI said this, but it wasn’t an isolated incident. Other keywords, many of them related to Reddit usernames from one particular subreddit, also seemed to break the chatbot.

Ironically, for all their conversational skill, these AIs can’t explain how they arrive at a particular result. If Bing insists that it isn’t 2023 and is, in fact, 2022, there’s not much you can do to figure out why it came to that conclusion. The first step toward understanding why hallucinations happen would be for the companies behind these programs to open them up to greater external scrutiny. But of course, they’re extremely reluctant to do this.

Artificial intelligence is a multi-billion-dollar industry, and whoever emerges as the leader stands to gain financial and technological dominance in the coming decades. Revealing their most prized secrets could mean giving away the farm, and any form of regulation could slow down progress, leaving a company stranded in the wake of its competitors. Regardless, it needs to happen if companies are to create more reliable and safer artificial intelligence. Without oversight, we open the door to, at best, the misuse of these applications and, at worst, rogue AI bent on wiping out humanity.

Doomsday scenarios aside, all of this is likely just growing pains. It makes sense that a new technology would make errors. However, even with more time and greater sophistication, there may be a separate problem that’s just as difficult to tackle: AI bias. We already have a huge problem with social media algorithms creating echo chambers in an effort to keep users on their platforms for longer. With AI-powered search engines, that problem will extend to search, which may be far more damaging than social media alone, because search is where most people get their information.

When an AI feeds you answers instead of leaving you to sift through different sources yourself, you lose the chance to hear alternate thoughts and opinions on any given topic. Instead, you’re bound to conclude that what the AI said is correct without a second thought, and as I just mentioned, you have no way of knowing how it came to that conclusion in the first place.

Back in 2016, Microsoft released TayTweets, a Twitter chatbot designed to interact with users through “casual and playful conversation.” The experiment was intended to test and develop the AI’s understanding of human communication, but the program quickly turned malicious. In less than 24 hours, Tay went from tweeting about how stoked she was to meet people to making numerous racist, sexist, and anti-Semitic comments. Needless to say, Microsoft immediately suspended the account. Tay is an example of a rampant problem with deep learning and artificial intelligence. AI systems only know what they’re trained on. When they’re fed information from the internet, they can quickly become toxic.

Even with more curated data sets, developers are still likely to transfer their subconscious biases into their programs. That’s why, when users enter words like ‘executive’ and ‘CEO’ into image-generating programs, many AIs will produce pictures of white men almost exclusively. Biased inputs equal biased outputs. Unfortunately, the solution is more complex than stronger moderation. One study found that when these systems are filtered to prevent hate speech, results that mention marginalized groups also drop significantly. Of course, this is not exactly surprising. Things like racism, sexism, and homophobia require a nuanced understanding of power and cultural dynamics, one that humans usually acquire over the course of many awkward conversations. How can we realistically expect artificial intelligence to navigate subjects that most people struggle to fully wrap their heads around?
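The mechanism behind “biased inputs equal biased outputs” is not mysterious, and a toy sketch shows it plainly. The captions below are invented, and real image generators learn far subtler statistics than word counts, but any model that learns associations from skewed examples will reproduce that skew when asked for a “CEO.”

```python
from collections import Counter

# Invented, deliberately skewed training captions.
training_captions = [
    "ceo man suit", "ceo man office", "ceo man handshake",
    "ceo woman office",
    "nurse woman hospital", "nurse woman scrubs",
]

# Learn which descriptors co-occur with "ceo" in the training data.
ceo_associations = Counter()
for caption in training_captions:
    words = caption.split()
    if "ceo" in words:
        ceo_associations.update(w for w in words if w in ("man", "woman"))

# Asked to picture a CEO, the model simply echoes its training data.
print(ceo_associations.most_common())  # [('man', 3), ('woman', 1)]
```

Nothing in the code is prejudiced; the skew lives entirely in the examples, which is why cleaning up the training data is both the obvious fix and the hard part.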

In reality, this entire conversation around AI-powered search engines could be irrelevant in a few months. There’s no guarantee that ChatGPT or Bard will revolutionize the way we find and digest information. Previous attempts like WolframAlpha, an equation-solving answer engine from 2009, failed to provide the desired results, ending up as just a blip in the history of the internet rather than the Google killer it was declared to be.

Regardless, the current buzz around AI is creating a new technological arms race. Just as in the Cold War, the only priority seems to be victory over one’s opponent with little concern for ordinary people. It’s worth asking ourselves, is artificial intelligence for us or for the companies that create it?

As Microsoft and Google race to improve their platforms and rake in future profits, safety concerns are being left by the wayside. It’s reminiscent of Big Tech’s greatest sin: social media. We’ve seen how Facebook has been used to manipulate elections and how Instagram bears considerable responsibility for creating an image and mental health crisis among young people, but these missteps seem to have done nothing to rein in Silicon Valley’s ambitions.

The aim of many of these companies is to create the world’s first artificial general intelligence, or AGI, a program utterly indistinguishable from human intelligence, one so intelligent that, faced with an unfamiliar task, it could figure out a solution on its own. Think Mr. Data from Star Trek. The rush toward AGI has alarmed many experts who fear that, without proper guidance and oversight, these programs could pose an existential threat to the future of humanity. So how do we avoid this? How do we prevent the future from becoming a science fiction nightmare?

In his introduction to the 2022 short story collection Terraform, journalist and science fiction author Cory Doctorow argues that we should look to an unlikely source for inspiration: the Luddites. The Luddites were a movement of English textile workers in the early 19th century who attacked and smashed new industrial machinery. They’ve become synonymous with technophobia, but that isn’t the full story. The Luddites weren’t actually anti-technology. The mechanized looms introduced during the Industrial Revolution meant that weavers could produce more fabric faster, at a lower cost, and more safely.

If implemented correctly, this could have meant reducing employee hours without reducing pay. Instead, factory owners chose to cut wages, using the machines to replace workers outright. As you can imagine, this only profited the few at the top, instead of making the life of the common man better.

There was widespread unemployment among weavers, and millions of farmers were forced off their ancestral land, replaced by sheep farms operated by the factory owners. The Luddites were not opposed to new technology; they were opposed to the way the technology was being used to exploit ordinary people while enriching the elite. Sound familiar?

Humanity is standing on the precipice of another technological revolution, one unlike anything the world has ever seen. In the coming decades, artificial intelligence won’t just be able to find you quick search results on the web. It will be capable of outperforming people, potentially replacing entire industries’ worth of labor. But this isn’t a foregone conclusion; there’s still time to change direction.

We need to exercise an unprecedented level of creativity and reimagine what’s possible in order to create a future for ourselves where technology is used for the betterment of all rather than just a handful of CEOs. If we can do this, we can create a more equitable world, one that would make the Luddites proud.