
The Memetic Hate Machine



by Sarah Schlothauer

One of the greatest things about social media is its ability to quickly spread an idea.

Simultaneously, this is also one of the worst aspects of social media.

What makes a meme dangerous? Years ago, this question might have had a different answer: an infectious song, or an annoying image macro. However, as we see more and more, the concept of memetic influence is not confined to funny pictures of cats or goofy videos.

Not all alt-right memes are as obviously hateful as the antisemitic Jewish merchant caricature, which is based on long-held stereotypes and propaganda art. Many memes use comedy and are, at first glance, fairly innocuous. Jokes such as the NPC meme (which caricatures a left-leaning, anti-Trump activist) may seem more political than racist, but they are used to parrot racist ideologies, hidden behind the packaging of edgy Internet humor.

White supremacy depends on its memetic influence in order to spread. It is not merely a metaphorical virus; it is the virus, using the shareability of social media to propagate itself.

This isn’t news to anyone who has been paying attention. YouTube has attempted to crack down on the large volume of conspiracy-theory videos; meanwhile, its controversial recommendation algorithms continue to surface hateful videos to users, even those with no prior engagement with similar material. Reddit continues to fight the emergence of racist message boards by banning and quarantining them (measures often met with controversy, criticism, and accusations of censorship).

After a terrorist attack, one of the most dangerous things the media can do is dwell on the attacker. Giving the attacker a platform gives them notoriety. Do we know the names of the victims as well as we know the name of their killer?

With the March 2019 attack on mosques in Christchurch, New Zealand, this conversation is in full swing. The attacker’s manifesto spread through online spaces despite attempts to limit its influence and shareability.

An argument against limiting the exposure of this material is that such limitation is censorship, and that people should be able to read whatever material they please and judge its merits themselves. A counterargument is that the manifesto was written to incite further violence, and allowing it to spread heightens the chance of copycat attacks and raises Islamophobia another notch. Hate speech, intentional incitement to violence, and threats are not protected as free speech, and thus free speech laws do not cover the manifesto.

A major discussion point about the killer, who remains nameless here as he ought to, is that he used common, easily recognizable white supremacist memes in his manifesto. He planned for it to cause confusion and controversy, keeping the discussion open and allowing cracks to form in narratives so that others might be indoctrinated into white supremacy.

When we discuss opinion silos on social media, we need to discuss racist silos as well. There is a multitude of forums devoted solely to racist ideas, white supremacy, homophobic and transphobic rhetoric, and sexist beliefs. They are not difficult to find and join. One of the largest such communities, Voat, was formed under the guise of free speech but quickly devolved into a haven for the alt-right. Some of the most popular communities on Voat are devoted to sharing anti-Jewish memes and discussing white supremacy. Even seemingly unproblematic communities there, such as boards for movies or video games, are full of racism, usernames invoking Nazi imagery, and hate speech.

The Twitter alternative Gab maintains a similar model. Alt-right and racist users who have been banned from Twitter congregate on Gab to share material that would be deemed hate speech, and thus a violation of Twitter’s terms and conditions. Gab made headlines after the 2018 attack on a Pittsburgh synagogue, when it was revealed that the attacker had shared antisemitic memes on the platform.

These communities have their own memes and easily spreadable beliefs, used not only to indoctrinate outsiders but also as dog whistles to fellow believers. Once these dog whistles become known to the public, articles and mainstream media coverage about them tend to follow. There’s a reason we all know what a “Chad” and an “incel” are, or have seen Pepe the Frog used to parrot alt-right ideas: mainstream media has picked up these memes and defined them for the broader public. The dog whistle is now within our spectrum of hearing, rendered down to a joke. However, it still spreads the notion the meme was originally intended to create.

How do we stop the memetic spread of alt-right ideals? Can we stop this?

One approach is to limit their spreadability and contain the disease. We should be critical of what we share with others online, be better about sourcing news before sharing it, and not give hate a platform to stand on. The old adage “don’t feed the trolls” comes to mind: do not interact with racist memes online. Instead, block the person and move on.

Reclamation of memes has also been happening in fits and starts. For instance, Pepe the Frog’s creator Matt Furie killed off the cartoon frog in a 2017 comic to show that he was not okay with his creation being used for vitriolic hate speech. Furie has also pursued a court case against Infowars and Alex Jones to assert his copyright and regain control over his cartoon.

By spreading an idea, even in jest, we give it strength and shout it to the outside world; we spread it virally. “Trolling” (the act of purposefully inciting arguments online for amusement) should not extend to racist content. It is not a harmless ignition of Internet flame wars. Spreading these memes affects real people. Every time someone shares a racist, antisemitic, homophobic, or transphobic meme, they are telling those groups that they are not safe.

In an age where information spreads at the speed of light, it is the duty of everyone on social media to keep hate speech from gaining a foothold and thereby counteract potential future hate crimes. There is no such thing as “ironically” sharing a racist meme; we must become more aware of the proliferation of racist content online and sequester the virus to die in an oxygen-free vacuum.

Sarah Schlothauer

Sarah Schlothauer is an editor for Conditio Humana. She received her Bachelor’s degree from Monmouth University and is currently enrolled at Goethe University in Frankfurt, Germany, where she is working on her Master’s.
