Choice of content

Chris Reads
Feb 29, 2024

There has been a lot of discussion about digital content algorithms with regard to echo chambers and radicalization: they show viewers the content they choose to engage with, driving them further and further down the digital rabbit hole until they have trouble differentiating fantasy from reality. This is an unfortunate but inevitable consequence of capitalism exploiting the internet as a blue ocean of economic opportunity. But there has been little examination of what this does to regular people, of the innocuous algorithms that control their content, and of the disappearance of the Internet 1.0 serendipity that fostered breadth of thought and interaction with people outside one’s sphere. Instead, content algorithms now dictate worldview and actually limit the emergence of internet-wide phenomena, with consequences that are still not fully known.

The primary discussion around content algorithms concerns the creation of cultural and political silos whose members are exposed to one-sided content. Sensationalism and outrage draw views, so nuanced perspectives are eschewed in favour of impassioned monologues and easily deconstructed strawmen. Enough regular exposure to this content shifts viewers toward an end state of maximum consumption, filled with conspiracy theories and nonsensical economics. Some have declared this a real and widespread way in which rudimentary AIs have harmed humanity. But even for casual content consumers, the internet of today is unrecognizable from the one once explored freely through a web browser. The internet is now accessed through portals wholly owned by companies that do their best to keep users within their platforms. No longer is anyone exploring the internet, surfing the web. Instead they are playing within walled gardens of user-generated content.

This gives these companies free rein to control the content their users see. It’s no longer as simple as browsing a forum and sorting threads by new, or opening a news website and reading headlines sized by typeface. Users are tracked and logged, and the content they are shown is some mix of whatever is most likely to keep them on the platform, make them interact with the content, share it outside the platform, or generate ad revenue. Meta’s algorithms show not what your friends published most recently, but what is most likely to keep you engaged. Even Reddit, which masquerades as an internet newsboard of sorts, sorts by “Hot” by default: presumably a mix of time since publication and interaction, but certainly also whatever will keep that individual user most beholden to the website. I doubt the default sort on any e-commerce site is different. YouTube recommends the videos most likely to drive engagement and watch time, while TikTok doesn’t even bother recommending videos on a homepage. There is no homepage; the user opens the application and is immediately shown content, and the watch time and interaction it earns motor the algorithm further along.
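To make that trade-off concrete, here is a toy sketch of an engagement-weighted feed ranker. To be clear, this is no platform’s actual algorithm; the signal names and weights are invented for illustration. It only shows the structural difference between “sort by new” and the feeds described above: a recency term diluted by interaction counts and, crucially, a per-user engagement prediction.

```typescript
// Toy feed ranker. Every field, weight, and signal here is hypothetical;
// real platforms use far more signals and learned models, but the shape
// of the trade-off is the same.

interface Post {
  id: string;
  ageHours: number;              // time since publication
  upvotes: number;               // public interaction counts
  comments: number;
  predictedWatchSeconds: number; // per-user engagement prediction
}

function score(post: Post): number {
  const recency = 1 / (1 + post.ageHours);                              // "new"
  const interaction = Math.log10(1 + post.upvotes + 2 * post.comments); // "hot"
  const personal = post.predictedWatchSeconds / 60;                     // "for you"
  // The weights decide what kind of feed this is. A chronological feed
  // is the degenerate case where only the recency weight is non-zero.
  return 0.2 * recency + 0.3 * interaction + 0.5 * personal;
}

function rankFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => score(b) - score(a));
}
```

The more weight shifts onto the personalized term, the less any two users’ feeds resemble each other, which is exactly the siloing effect discussed below.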

One effect I’ve noticed is the disappearance, or at least reduction, of internet-wide phenomena. Though some content and memes inevitably cross into multiple silos, the days of universally popular Vine videos and image-macro jokes are over. It is increasingly hard for anything to captivate the consciousness of the entire internet. Instead, each corner of the internet has its own memes, incomprehensible to anyone else after a few days of mutation. Sure, there are topical news items and Pookies that capture everyone’s attention for a moment, but nothing sustained. Jokes about the latest aircraft incident or celebrity faux pas last only until the next political gaffe. Memes like “sigma male” would not be understood by those who make “babygirl” memes, who in turn would see no humour in “loss”. But Hide the Pain Harold and Advice Animals are gone forever.

More sinister still, those inside the echo chambers seem unaware of the extent to which they reside within them. Of course, that is how QAnon swallows the lives of its adherents, but the same dynamic afflicts people with far less sinister belief systems. Everyone gets their own community in this brave new world wide web, connecting hobbyists and fans everywhere.

When a video with a fringe view, or a joke requiring specific knowledge, is shown to those communities, it can quickly garner thousands if not millions of views, clicks, comments, and shares, all logged under whichever genre of content it belongs to. The viewer thus sees a high level of interaction with the posts in front of them and begins to think it is representative of the entire internet, when it is really just a tiny slice of the pie. The experience distils into someone viewing material online and then declaring “everyone is saying X” or “people believe Y”. The distinction between a few pieces of content that thousands of people have interacted with and thousands of people wholeheartedly believing the same thing is lost.

Is the end state of these algorithms bad? Frankly, yes. In the more extreme cases, it gives bad actors amplified platforms and gives rise to the cultists who follow them. In corners of the internet like BookTok, for example, it causes no explicit harm to users so long as they remain cognisant that what they’re seeing doesn’t represent the majority of the internet, much less the majority of society or the world. Still, it creates an environment where content consumers are trapped by increasingly addictive content. And on an internet where content is created by users and bots alike, this becomes a feedback loop wherein content begets more content: comments, response videos, and independent posts.

Unfortunately, I think there is nothing to be done about this: even if it were possible to somehow legislate a “freedom of choice” for content, I don’t think I could stand to see one more tech executive appear before octogenarian politicians. Personally, I try to access unbiased content: I use YouTube, Reddit, and other platforms from browsers without search history, cookies, or trackers enabled. I browse minimal content on platforms that require a login, and I even run all my web searches through DuckDuckGo. Of course, I know this is largely futile in terms of my digital profile and advertiser tracking: did you know websites can track you across devices and logins through browser characteristics, system settings, and even battery level?
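On that last point, here is a minimal sketch of the kinds of signals a fingerprinting script can read from a stock browser, with no cookies or login involved. These are real Web APIs; what identifies a user is the combination of values rather than any single one. (The Battery Status API is genuinely among them; Firefox removed it in part because of this use.)

```typescript
// Signals readable by any page's JavaScript. Individually mundane,
// collectively close to unique.

async function collectFingerprint(): Promise<Record<string, unknown>> {
  const signals: Record<string, unknown> = {
    userAgent: navigator.userAgent, // browser + OS build string
    language: navigator.language,
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    cpuCores: navigator.hardwareConcurrency,
    touchPoints: navigator.maxTouchPoints,
  };

  // Battery Status API: not available in every browser (TypeScript's
  // DOM types omit it), hence the cast and the existence check.
  const nav = navigator as Navigator & {
    getBattery?: () => Promise<{ level: number; charging: boolean }>;
  };
  if (nav.getBattery) {
    const battery = await nav.getBattery();
    signals.battery = `${Math.round(battery.level * 100)}%, charging: ${battery.charging}`;
  }

  return signals;
}
```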

The important thing is to remain aware that, for the most part, one doesn’t interact with people online; one interacts with the algorithm. The algorithm determines what is viewed and even what is desirable: show a piece of content immediately after something extremely interesting in the same category, and the brain can be tricked into disliking the later piece. A user’s likes and comments are perhaps seen by people, but more importantly they are read by lines of code that then decide what to show next. Communication is challenging since end users don’t speak the same language as the algorithm, but as with cross-species communication in the movies, it surely happens: one creature shares its view times, swipes, and reactions, while the other silently pushes more content and waits. If you don’t welcome your new algorithmic overlords, it’s good to at least be aware of their presence.
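To strain the metaphor into code: a minimal sketch of that silent conversation, with every name invented. One side emits view times and reactions; the other updates its guess and serves more of whatever worked last.

```typescript
// Hypothetical feedback loop between a user and a recommender.

interface Signal {
  topic: string;
  watchSeconds: number;
  liked: boolean;
}

class Algorithm {
  private affinity = new Map<string, number>(); // estimated preference per topic

  // The user "speaks": implicit signals only, never words.
  observe({ topic, watchSeconds, liked }: Signal): void {
    const prior = this.affinity.get(topic) ?? 0;
    const reward = watchSeconds / 30 + (liked ? 1 : 0);
    // Exponential moving average: recent behaviour dominates old behaviour.
    this.affinity.set(topic, 0.8 * prior + 0.2 * reward);
  }

  // The algorithm "replies": more of whatever scored highest so far.
  nextTopic(): string | undefined {
    let best: string | undefined;
    let bestScore = -Infinity;
    for (const [topic, score] of this.affinity) {
      if (score > bestScore) {
        bestScore = score;
        best = topic;
      }
    }
    return best;
  }
}
```

Note there is no exploration step here; left unchecked, a loop like this converges on a single topic, which is the rabbit hole in miniature.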
