You probably think this algorithm is about you. It’s not.

From “brainrot” memes and incel slang to the trend of adding “-core” to different influencer aesthetics, the internet has ushered in an unprecedented linguistic upheaval. We’re entering an entirely new era of etymology, heralded by the invisible forces driving social media algorithms.

“I believe that there’s absolutely something equalizing about the fact that everything is run through the algorithm” – Adam Aleksic, Algospeak: How Social Media Is Transforming the Future of Language, 212.


I. You probably think the algorithm is about you.

Adam Aleksic demystifies “the algorithm” in his well-written and spirited book, Algospeak.

Talk about the algorithm (call it algologos) typically takes two forms. The first is that the algorithm controls you. The second is that the algorithm is you.

Aleksic offers a much more complex view of the digital world’s algorithmic rhythms. Reading his algorithmic prose, I realized: the algorithm doesn’t care about you.

II. The algorithm is the audience.

“Influencers” don’t care about you either. The algorithm is their audience.

Entering TikTok, for example, is like stepping into your imam’s or therapist’s office. If you want to be heard, you need to use the relevant spiritual terminology or share the details of your recent dream.

Some members of the digital religions or therapies (TikTok, Instagram, X) become “influencers” because they speak the language of the algorithm exceptionally well. That is how they get what they want: not you, but virality.

III. Influencer. Wants a secret lover.

According to Aleksic, Reddit once published its algorithm. It looked like this: 
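(What follows is a rough reconstruction from Reddit’s open-sourced ranking code, not Aleksic’s exact reproduction; the logarithm and the 45,000-second constant are assumptions on my part, rearranged in terms of the post’s age.)

s = sign(u) · log10(max(|u|, 1)) − a / 45,000

Here u is net upvotes and a is the post’s age in seconds: every 45,000 seconds, about twelve and a half hours, of age costs a post roughly as much as a tenfold jump in net votes earns it.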

Translation: a Reddit post’s popularity alone does not determine its chances of going viral. Aleksic notes that “the only variables were u, the number of upvotes minus the number of downvotes, and a, the age of the post. Whatever the output was (here represented by s) determined how high a post would rank relative to other posts” (59).

Nowadays, the relevant algorithm is unpublished; it is kept secret in the vault of “proprietary information.” Nonetheless, the algorithm remains the enforcer of any one platform’s “creative direction.” 

The only way to discover its contours is to give it what it wants. Influencers, as lovers of the algorithm, are our best sources for understanding the law(s) of the algorithm.

IV. Influencers obey (the algorithm).

Aleksic shares what influencers like him have learned about algospeak, or speech “driven by the invisible forces behind social media and its algorithms” (7). What follows are what I take from Aleksic’s book to be the “laws” of algospeak:

Law 1. The line between offline and online is very blurry. Online communities are, of course, formed by people who exist offline. Over time, these communities develop their own in-group language. Some words, like “unalive,” emerge in unique offline settings. The algorithm then popularizes words like “unalive” and “gyat,” taking them up and disseminating them to the farthest reaches of the online world. Finally, these words return to the offline world, but now as common language.

Law 2. The boundaries between social media platforms are also very porous. Viral TikTok videos, for example, often appear on Instagram, YouTube, and other platforms. 

Law 3. Algorithmic power is productive. The algorithm normalizes its grammar by establishing a zone of exclusion. The word “unalive” is a perfect example of productive algorithmic power.  

The word unalive originated offline. It became TikTok algospeak because the platform banned certain “sensitive” words, including, it seems, words about killing, death, and suicide.

Unalive was used by TikTok users to bypass the censoring algorithm, allowing them to discuss political violence or mental illness. The word has become very popular among middle school students in the U.S. Its offline use is the subject of ongoing controversy.

Using evasive language (e.g., referring to Trump as “cheeto”) is called “Voldemorting.” The use of evasive language happens across languages (see chapter 8 of Algospeak). 

Bowdlerization is another technique used to bypass censorship. “The practice of respelling offensive [words] is a centuries-old tradition known as bowdlerization,” writes Aleksic, “named for the Englishman Thomas Bowdler, who is mainly remembered for publishing some egregiously family-safe edits of William Shakespeare’s plays” (17-18).

See words like “seggs” (sex), fuk, fucc, f*ck, fk (fuck), a@@, ahh, gyat (ass, butt), and f*aggot. 

See also evasive art/emojis like 💅🏻 (for “zesty” or gay), 🥷🏾 (for the n-word), 🍉 (for Palestine), 🍆 (for dick), 🍑 (for ass and pussy), and, just for a trending moment, 🪑—but more commonly, 💀 (for laughing [to death]).

Law 4. The algorithm favors what is most likely to boost user engagement. To go viral, you must demonstrate your post’s value: its ability to capture users’ attention, earn their likes, stimulate comments, and keep them on the platform as long as possible.

There are several ways to prove your post’s worth and get past the algorithmic gatekeeper to achieve viral fame. 

Your post is likely to be recommended if it (a) “complies” with current language trends (including using English with a proper accent, like a British accent), (b) is neither too short nor too long, (c) uses trending keywords (gyat, rizzler, sigma), often words created to bypass language restrictions (unalive and seggs), (d) piggybacks on other trending posts (e.g., by making fun of a viral trend or including a viral musical track in your video), even if your post is completely unrelated to the trend, (e) evokes strong emotions (passion, anger, sadness) or curiosity, (f) is extreme(ly weird), and (g) features fast-paced talk or noise.

One example of criterion (c) above is too good, too perplexing, and too funny not to mention: the “Rizzler song.” Aleksic notes that the “Rizzler song” is a “TikTok audio that went massively viral in late 2023 for its slang-heavy lyrics: Sticking out your gyat for the rizzler / You’re so skibidi / You’re so fanum tax / I just wanna be your sigma / Freaking come here / Give me your ohio” (44).

The “Rizzler song” is one example of what is called “brain rot.” Another is the jumble of “keywords” that creators cram into their posts.

“Social media platforms reward using keywords,” Aleksic writes, “because they want the information: Metadata can be turned into index terms that are easier for the algorithm to categorize, and thus know what to recommend to viewers.”

Aleksic admits that “[c]reators want their content to be discoverable, so they mold it around what the algorithm wants. Keywords are a win-win” (46, emphasis added). 

Law 5. Morality is not a variable of the algorithm. As Aleksic points out, evasive language is often trending. In their specific contexts, evasive words, memes, and videos are created to help users communicate their experiences of social oppression. When taken out of their original contexts, the same language can be used to spread social oppression. 

Aleksic argues that Poe’s law explains the spread of dangerous incel ideology online. Poe’s law holds that

 [a]ny sarcastic expression of extreme views can be mistaken for a sincere expression of those views, and vice versa. Poe’s law explains how dangerous ideas spread as memes. . . . [It] has created a dangerous game of hopscotch. We’re jumping between irony and reality, but we’re not always sure where those lines are. Interpreting words comedically helps the algorithm spread them as memes and trends, but then interpreting them seriously manifests their negative effects (138-139).

Memes spread because they blur the line between serious and unserious. They also show which group(s) are more often socially labeled as unserious. 

Consider the now popular word, gyat. “The word ‘gyat,’” Aleksic writes, “reached social media as a funny word for ‘butt,’ but it actually comes from an exaggerated [African American English] pronunciation of ‘goddamn. . .’” (153). The word’s origins and purpose got lost or erased as it was pushed by the algorithm.

According to Aleksic, 

Studies have shown that non-Black people are disproportionately likely to use reaction GIFs and images containing Black people, because they find those memes funnier. If you’ve ever been sent the “Crying Michael Jordan” or “Michael Jackson Eating Popcorn,” those subtly play into racial stereotypes by using Black reactions as an exaggerated response. This phenomenon is called digital blackface, and it’s very present in the social media age (155, emphasis original).

See law 1 above: the offline is always-already online (and vice versa). Popularity is the same ole normal.

V. The algorithm is designed to kill you.

One lesson you can learn from the above “laws” is that the algorithm is designed to kill you.

Aleksic notes that “social media algorithms are best at recommending personalized videos when we give them information. Since that translates to social media success, we create metadata simply because the algorithm wants metadata in order to push our content to others” (164).

The algorithm polices content. Influencers supply compliant content. You watch it, and like it, and like it, and like it–and sometimes comment on it. The algorithm pushes and pushes and pushes more like content for you to like. The platform profits.

Aleksic explains it this way: “Algorithms are the culprits, influencers are the accomplices, language is the weapon, and you, dear reader, are the victim” (78).

As if under the spell of the White Witch (remember The Chronicles of Narnia?), you, dear reader, consume the abundant images provided by the algorithm until you collapse under the weight of your own skin.

You “unalive” yourself with social media satisfaction.

VI. The algorithm is designed to kill you with happiness.  

Happiness is deadly. The AI of The Matrix understood, according to Agent Smith, that humans require frustration to advance, to stay alive.

In Civilization and Its Discontents, the founder of psychoanalysis, Sigmund Freud, makes a similar point. “The price we pay for our advance in civilization,” Freud states, “is a loss of happiness” (Standard Edition, 21:134). Misery or frustration is an inherent component of our advancement.

The online world, like the offline homes we confine ourselves in to avoid “stranger danger,” eliminates frustration. It does so by always giving us more of what (we tell it) we want.

The price we pay for online satisfaction is a glut of happiness. The algorithm’s acidic power achieves this.

The algorithm burns away all flesh, anything that may disturb you and dislodge you from, say, TikTok. The following offline example perfectly mirrors the corrosive power of algorithmic laws, amply illustrating the cost of your virtual happiness.

In R.A.V. v. St. Paul (1992), a case Judith Butler discusses in Excitable Speech (1997), the Supreme Court struck down as unconstitutional a St. Paul ordinance banning cross-burning and other “bias-motivated” symbols. The case involved a white teenager who burned a cross in a Black family’s front yard.

Butler notes that the Court justified its decision by ignoring the fact that the cross was burned on a Black family’s property. “The stripping of blackness and family from the figure of the complainant . . . refuses the dimension of social power,” Butler writes, “that constructs the so-called speaker [i.e., cross burner] and the addressee of the speech act in question.” Moreover, Butler continues, “it refuses as well the racist history of the convention of cross-burning by the Ku Klux Klan” (55).

Similarly, words, memes, and videos go viral online by stripping away all specificity (even the accent of the creator). The algorithm does not reward context. Specificity or difference is too unpopular or frustrating.

VII. The algorithm “liberates” the ego from social life.

You are frustrating, according to Freud. Here is how I understand the compelling story Freud tells about your psyche:

Your psyche is made up of three “agencies”: the Id, the ego, and the superego.

The superego is that part of the psyche akin to a parent or the pope. It judges the ego for failing to live up to its ideals.

“The ego,” Freud argues, “is first and foremost a bodily ego; it is not merely a surface entity, but is itself a projection of a surface” (The Ego and the Id, Standard Edition, 19:26).

Your ego is like your skin. It is on the frontline of your satisfaction. Like the algorithm, it seeks to exclude from recognition what is disturbing, frustrating, or different.

The ego and the superego are “dipped” in the Id, the sphere of exiled uncivilized/unconscious desire. Uncivilized desire resists exile.

One way naughty desire resists its exclusion is in and through the dreamwork. In dreams, uncivilized desire is reinterpreted, reimagined, and recontextualized in ways designed to escape the watchful ego.

We wake up from our dreams when the process of redescription fails. We get woke when the censor “detranslates,” recognizing the disturbing thought, the wolf in sheep’s clothing.

The ego’s project of satisfaction is frustrated by the Id and the superego. On the one hand, it is beset by disturbing desire (the Id). On the other hand, the ego is bullied by the judging superego.

The psyche is you, and your internal life mirrors your social life. You are caught between antisocial desires and obedience to the (moral) law. If you are lucky, you learn to live with your frustration.

The algorithm, much like the homes our parents began fearing us into back in the 1980s (see Jonathan Haidt’s book, The Anxious Generation, for more on “safetyism”), is akin to OxyContin. On the algorithm, the ego is liberated from the pressures of social life, or life in the presence of others.

VIII. The algorithm wants to unalive you, but misery loves company.

Recently, I attended a lecture by Adam Phillips at the Chicago Psychoanalytic Institute. At one point in his lecture, Phillips made a simple observation. He noted that when you walk into your analyst’s office, you walk into a language.

I was reading Algospeak, a book I heard about on the podcast Offline with Jon Favreau, just before attending the lecture. That, I think, is why Phillips’s obviously correct comment struck and stuck with me.

I heard him say: When you walk into your analyst’s office, you are walking into an algorithm.

After telling him what I had heard, I asked him a question. If Adam Phillips were on TikTok, what would he say to grab our attention about our wanting or desiring?

Phillips’s response to my query was something like this: People do lose attention. That’s inevitable. When I talk to myself, that’s when people want to listen.

I understood Phillips to mean that when we don’t comply with the algorithm, other people want to listen to us. In other words, you are what other people find most interesting.

To be sure, you won’t go viral by talking to yourself online. The grace of unpopularity is the feeling of an enlivening misery.

