Despite the role it may have played in the horrific events in Buffalo, the platform and its owner have made no statement. Links to copies of the graphic shooting video and praise for the shooter continue to appear on the platform. This inaction reveals a complicated truth about the internet landscape: an online platform that rejects outside criticism from users and advertisers can harbor racist hate speech and facilitate user radicalization with little consequence.
In a 180-page document allegedly written by the suspect, he says he began visiting the online forum site 4chan in 2020, drawing inspiration from its racist and hateful discussions and its forums about weapons. He also appears to have hinted at his plans on 4chan, according to an online diary that has been attributed to him.
4chan did not respond to repeated requests for comment from CNN Business. A direct inquiry sent to current 4chan owner Hiroyuki Nishimura also went unanswered.
The site – a rudimentary forum reminiscent of the early days of the internet, where users post anonymously – hosts a variety of communities in which hate speech is tolerated or even celebrated. While major platforms like Facebook and Twitter have multi-faceted terms of service that spell out prohibited behaviors such as hate speech, harassment and racist abuse, 4chan has bucked the trend among social platforms toward increasingly robust content moderation policies.
Instead, it exists outside the norms of traditional social media. It’s a place where some users discuss anime and video game news daily, but it’s also a forum where harmful content that wouldn’t be allowed on more mainstream social media platforms has flourished. It’s where nude photos of female celebrities were once leaked and circulated, where racism and anti-Semitism are celebrated, and where QAnon, the conspiracy cult, was born.
The site lists a series of rules and warns users that “if we reasonably believe that you have not followed these rules, we may (in our sole discretion) terminate your access to the site.” But it’s unclear whether or how the rules – which prohibit, for example, posting personal information or sharing content that violates US law – are enforced. In some cases, they seem to be ignored; the rules state, for instance, that racist posts are allowed only on a certain board, yet rampant racism is easily found throughout the site.
Immediately after Saturday’s shooting, some of those same forums on 4chan were used to help spread the video of the attack – which otherwise might have been seen only by the roughly 20 people who watched the live stream before it was removed by game streaming site Twitch – as well as writings attributed to the shooter. Days later, they remain online and, in some cases, continue to praise the shooter or promote the conspiracy theories that appear to have motivated him. Links to copies of the graphic video, in which the shooter guns down innocent shoppers, and to his alleged writings continued to appear on the site. Similar sites such as Gab and Kiwi Farms were also used in the aftermath of the attack to distribute the video and the alleged shooter’s writings, according to online extremism researcher
Ben Decker. In an unsigned email to Kiwi Farms in response to CNN’s request for comment, the site said it considers the video “safe to host” after its initial broadcast on Twitch. (Twitch says it removed the video from its site within two minutes of the attack beginning.) Gab did not respond to a request for comment.
In the wake of the Buffalo shooting, many of the major social media platforms “went to great lengths” to quickly remove content related to the attack, “but there is a real problem, and that is that there are platforms that are kind of recalcitrant that mess it up for everyone,” said Tim Squirrell, communications manager at the Institute for Strategic Dialogue think tank. “The consequence of that is that you can never finish the game of Whack-a-Mole. There will always be somewhere [this content] gets around,” he said.
Squirrell added that these platforms’ resistance to removing or moderating content is why footage of the 2019 racist mass shooting in Christchurch, New Zealand, “is still available even now, three years later, because you can never stop them all.” In the document attributed to the alleged Buffalo shooter, he describes being radicalized by the live broadcast of that 2019 shooting.
The limits of the law
4chan was started in 2003 – a year before Facebook launched – by a 15-year-old as an online message board where users could post anonymously; it was later sold to Nishimura. Like more mainstream platforms, 4chan is populated with user-generated content. In the United States, platforms that rely on such content are shielded from liability for the vast majority of what their users post by a law known as Section 230, which largely protects companies from responsibility for content published on their platforms.
Despite this legal protection, many Big Tech platforms have stepped up efforts in recent years to moderate and remove certain harmful content – including hate speech and conspiracy theories – in response to pressure from advertisers, in a bid to maintain a large user base, and in an attempt to stay in the good graces of lawmakers.
While Big Tech platforms remain far from perfect, these pressures have led to progress. In 2020, for example, Facebook faced a major pressure campaign from dozens of advertisers, called #StopHateForProfit, over its decision not to take action against then-President Donald Trump’s inflammatory posts. Within days, Facebook CEO Mark Zuckerberg made new pledges to ban hateful ads and label controversial posts by politicians. Many major social media platforms also evolved their misinformation policies in response to calls from lawmakers and public health officials at the start of the Covid-19 pandemic.
But for sites like 4chan, which do not rely on mainstream advertisers and seek to host content banned from other platforms rather than to attract a broad user base, there is little incentive to remove harmful or dangerous content. In an email to CNN in 2016, 4chan owner Nishimura said he “personally [doesn’t] like sexists and racists… [but] If I like[d] censorship, I would have already [done] that.”
An extreme intervention with historical precedent would be action by the internet infrastructure companies that allow sites like 4chan to exist. A similar site called 8chan, which spun off from 4chan several years ago, has struggled to stay online since internet infrastructure company Cloudflare stopped supporting it in 2019, after the alleged shooter in the El Paso Walmart attack was believed to have used it to publish white nationalist writings.
4chan is “intentionally sort of this censorship-free platform, but they have cloud providers and other [internet service providers] they rely on to exist,” said Decker, who is also CEO of digital investigative consultancy Memtica. In theory, these ISPs could say, “We won’t allow this content anywhere, on any entity that uses our technology,” which could force 4chan and similar sites to implement stricter moderation practices.
Yet even this is not a surefire way to rein in such platforms. As the ranks of online platforms dedicated to supporting “free speech” at all costs have grown, internet service providers espousing similar views have emerged alongside them.
A recent example: Parler, the alternative social media platform popular with conservatives, briefly disappeared from the internet in early 2021 after being booted from Amazon’s cloud service because it was heavily used by supporters of then-President Donald Trump, some of whom participated in the January 6 Capitol riot. But weeks later, Parler resurfaced online with the help of a small web hosting company called SkySilk, whose chief executive told The New York Times it wanted to help support free speech.