Vanessa Piper, Principal Explorer
13 May, 2024
I recall the first time I was told about Dead Internet Theory. Sadly it wasn’t a passing comment whispered in a conspiratorial tone by a stranger in a tinfoil hat; it was a discussion between two relatively sane friends who worked in IT, and the year would have been around 2012.
I didn’t think much of the conversation, and shuffled it to one side to prevent it from taking up any actually required brain space. My reasoning for the dismissal was that we were nowhere near Artificial General Intelligence, and those using large-scale bots and what counted as “AI” back then were terribly obvious about it.
The years since have given me some alternative perspective, based on hard-won experience. I will write the world's shortest book about it some day, but in essence my grand life experience could be boiled down to the following two pieces of wisdom:
any decision you make that follows the thought “ah f*ck it”, will be a bad one.
any dystopian concept dismissed off-hand because “people will never fall for it”, will likely come to pass.
Dead Internet Theory was a talking point for some, and then disappeared back into the internet wastelands waiting to be dug up on rare occasions to victoriously underline a point - such as when the expansive use of bots was gradually unveiled across various social media platforms. “Look, edging ever closer to Dead Internet” we’d claim, safe in our assumptions that even knowing about the theory somehow made us smarter than the average bear and therefore impervious to bot influence.
According to the theory, by 2016 or so, the entire internet (or a vast majority of it) would consist of AI bots talking to AI bots, in all of our streams and feeds, manipulating and distracting us, slowing our capacity for rational thought and critical decision-making, and rendering us widely susceptible to influence, in whatever form that influence took.
“Uh-huh”, said we, not convinced.
2016 rolled around, bringing with it not quite the expected internet filled with bots talking to bots and asking us to drink more Coca-Cola, but a few revelations that inspired a spike in Dead Internet popularity from the original believers and some new ones.
This came in the form of the Imperva Bot Traffic Report 2016, which claimed that 51.8% of all traffic on the internet came from bots, rather than humans. Adorably, it broke these down further into “good bots” (e.g., bots for monitoring website health and availability, bots collecting information for search engines, bots that feed website content to mobile and web apps, and “authorised” commercial spiders extracting data for digital marketing tools) and “bad bots”: bots assuming false identities to bypass security and partake in team-bonding DDoS activities, illicit scrapers, spammer bots and hacker tools.
Naturally we found these statistics a bit shocking, and hard to swallow. Hard on the heels of this came the revelations in the wake of the 2016 US Election results, confirming wide-scale Russian interference: troll farms with fake users spreading misinformation reached millions of social media users between 2013 and 2017, all with the aim of sabotaging Hillary Clinton’s campaign while significantly boosting Donald Trump's.
At this, I will admit, I did sit back a little and start reevaluating my prior stance of “the people will never fall for it”. Briefly, my sense of optimism settled on “aha, but the people will never stand for it, nor let it happen again!”.
This again was short-lived when it became apparent that, even with the abundance of proof, the divisions that had been created were already too deeply entrenched to be moved. The same citizens who would have been chanting “Better dead than Red” in the 1950s were now seeing Russian hacking and interference as just more proof of Trump’s cleverness, his ability to “think outside the box” and a sure sign that they were backing a winner.
We hadn’t hit “Dead Internet”, but it was clear we were in the age of Algorithmic Radicalisation.
Having grown accustomed to social media as the new way we felt “connected” to our tribe, many of us lost sight of the fact that we were the product, our freely given information monetised and fed back to us.
Those who probed the social media paradigms swiftly learned that we weren’t just being categorised and advertised to; our opinions were being targeted and influenced. Recommender algorithms on social media sites drove users progressively towards more extreme content over time, leading users to develop radicalised political and social views. In efforts to up the ante and keep users engaged and entertained, social media creates echo chambers where users are shown increasingly polarising media that confirms their bias as a broad, accurate and popular viewpoint, and reinforces radical thinking through feelings of self-confirmation and tribalism.
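The feedback loop described above is depressingly easy to demonstrate. Here's a toy sketch (my own invention, not any platform's actual algorithm) where a recommender simply picks whatever it predicts will be most engaging, and engagement happens to peak at content slightly more extreme than the user's current position. Every function and constant here is a hypothetical assumption for illustration:

```python
import random

def engagement(user_bias: float, content_extremity: float) -> float:
    """Toy assumption: content slightly more extreme than the user's
    current position (by 0.1) is the most 'engaging'."""
    return 1.0 - abs(content_extremity - (user_bias + 0.1))

def recommend(user_bias: float, catalogue: list) -> float:
    # Pick the item predicted to maximise engagement - nothing more sinister.
    return max(catalogue, key=lambda c: engagement(user_bias, c))

random.seed(42)
catalogue = [random.random() for _ in range(500)]  # 0.0 = neutral, 1.0 = extreme
user_bias = 0.1  # start near neutral

for step in range(20):
    shown = recommend(user_bias, catalogue)
    # Exposure nudges the user's position halfway toward what they were shown.
    user_bias += 0.5 * (shown - user_bias)

print(round(user_bias, 2))  # after 20 recommendations, near the extreme end
```

Nobody in this sketch set out to radicalise anyone; the drift falls out of optimising engagement alone, which is rather the point.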
As the internet was increasingly saturated with generated content designed to keep us clicking the next thumbnail, with bots designed to steer our opinions and make us feel we’re not alone in our increasingly polarised viewpoints, we were primed with disinformation, deep fakes and bias-filled echo-chambers.
We were ready, as a society, for our Dead Internet AI Overlords.
I’ll do myself a favour and skip past the deepening of the divisions and the normalisations of online hatred caused by the digital reaction to COVID, but we all saw it get bad.
And we haven’t seen it get better since.
Fast forward to 2022, and the marvel that is ChatGPT.
It could answer our questions with speed and an understanding of nuance, without the screeds of “sponsored” responses we’d come to expect from Google.
We can ask questions like, “How do we tell a customer that we won’t be looking further into their case?” when we’re literally at the end of our tether, and get a response back that allows us to politely reply with a “Thank you for sharing your feedback with us. I'm sorry to hear that your experience did not meet your expectations. Please note that this particular case has been resolved, and we will not be opening a new ticket. We wish you all the best. If you have any other concerns in the future, please don’t hesitate to reach out."
It’s genius. It’s time saving, toil reducing, genius.
We now speak fluent middle-management, we can throw adjective after flowery adjective at our self-performance assessments and make ourselves shine like glorious clumps of pyrite in the heartless glow of “urgently required for the AOP” flood lighting.
We can ask to understand cache invalidation, in the style of a Dr Seuss poem.
But oh, what to do when the data's astray?
When the cache doesn't match what the sources say?
This trouble in paradise, we must address,
It’s called cache invalidation, a process to assess.
Imagine you’ve a cookie, so fresh and so fine,
Stored in your jar, for you to dine.
But if the baker shouts, “Wait! That recipe was wrong!”
You’d toss it out, wouldn’t you? To eat would be wrong.
That’s what we do in the lands of the cache,
When old data lies there in a mismatched stash.
We clear it out, we make it void,
Ensuring new data is enjoyed.
“Out you go, old data, your time is done!”
We say with a cheer, for a new fetch has begun.
And in its place, fresh data we write,
So everything’s accurate, and everything’s right.
You’re welcome.
ChatGPT, and all the LLMs that followed, changed the game. There is no denying that.
The only element more profound was the millions of ears worldwide that suddenly pricked up simultaneously, bags packed in haste for the new Wild West, ready to stake their claims in the new AI gold rush.
Everyone wanted a piece.
Everyone had a great idea about how AI could be utilised in a brand new previously unconsidered way, to stamp our digital legacy on the world. And hopefully, to be able to pitch to a suitably perceptive venture capitalist and proceed to take over the world with the next Unicorn.
In a very short period of time, chatbots were everywhere. Facebook Messenger encourages me to “Ask Meta AI anything”, and LinkedIn has moved from basic suggestions to writing sentences in Messaging, recommending that I “Rewrite with AI” every time I post.
We’re there.
Dead Internet. It’s real, and it’s now.
Meanwhile, at the modern frontier of human connection: several days ago, Whitney Wolfe Herd, the founder and executive chair of Bumble (yes, the dating app), was speaking at the Bloomberg Tech Summit. She said Bumble is considering how to incorporate AI into its app, proposing a scenario where singles could “use AI dating concierges as stand-ins for themselves when reaching out to prospective partners online”.
Oh fantastic. That said, I concluded that despite the evidence, there is really only one way to know for sure if we’re officially at Dead Internet.
I had to ask ChatGPT.
Hey, GPT - you look pretty today. Anyway, I thought I’d take a quick break from asking you to explain concepts to me in the style of Dr Seuss, and ask: If we go through the key points defining “Dead Internet Theory”, in your opinion - are we there today?
Certainly, Ness! Here’s a rundown of how I see the current state of the internet in relation to the key points of the Dead Internet Theory:
Automated Content Creation: There's a significant amount of content created by algorithms and bots. From news articles generated by AI to social media posts and interactions being automated, this element is increasingly prevalent and overwhelming on some platforms.
Decrease in Human Interaction: While genuine human interactions still occur, there’s a notable increase in bot-generated comments and interactions, especially in customer service and on large platforms.
Corporate Control: A small number of large tech companies have substantial control over what content is seen and shared on the internet. Their algorithms determine the visibility and reach of content, shaping the information landscape.
Erosion of Privacy: This is widely observed. Many companies collect vast amounts of personal data for targeted advertising and content delivery, often with minimal transparency or user consent.
Lack of Transparency: The algorithms that control content and interactions are proprietary and not open to public scrutiny, leading to a significant lack of transparency.
Echo Chambers: Algorithms on social platforms tend to show users content that aligns with their previous interactions, which can reinforce existing beliefs and create echo chambers.
It seems that many aspects of the Dead Internet Theory are already manifesting to various extents. This trend raises concerns about the authenticity and health of online environments.
Absolute lack of excrement, Sherlock.