
A lawsuit against Google seeks to hold tech giants and online media platforms liable for their algorithms’ recommendations of third-party content in the name of combating terrorism. A victory against Google wouldn’t make us safer, but it could drastically undermine the very functioning of the internet itself.

The Supreme Court case is Gonzalez v. Google. The plaintiffs are relatives of Nohemi Gonzalez, an American student killed in the November 2015 ISIS terror attacks in Paris. They are suing Google, YouTube’s parent company, for not doing enough to block ISIS from hosting recruitment videos on the platform and for recommending that content to users via automated algorithms. They rely on antiterrorism laws that allow damages to be claimed from “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”

If this seems like a stretch, that’s because it is. It’s unclear whether videos hosted on YouTube directly led to any terror attack, or whether other influences were primarily responsible for radicalizing the perpetrators. Google already prohibits terrorist content and employs a moderation team to identify and remove it, although the process isn’t always immediate. And automated recommendations typically work by suggesting content similar to what a user has already viewed, because on a site hosting millions of videos that is what is most likely to be interesting and relevant to them.

Platforms are also shielded from liability for what their users post and are even permitted to engage in good-faith moderation, curation and filtration of third-party content without being branded publishers of it. This is thanks to Section 230, the law that has allowed for the rapid expansion of a free and open internet in which millions of people can express themselves and interact in real time without tech giants having to monitor and vet everything they say. A victory against Google would narrow the scope of Section 230, hobble recommendation algorithms and force platforms to censor or police far more content.

Section 230 ensures that Google won’t be held liable for merely hosting user-submitted terrorist propaganda before it is identified and taken down. The proposition that these protections also extend to the algorithms that recommend such content remains untested in court, but there is no reason why they shouldn’t apply there too. The sheer volume of content hosted on platforms like YouTube means that automated algorithms for sorting, ranking and highlighting content in ways helpful to users are essential to the platforms’ functionality. They’re as important to the user experience as hosting the content itself.

If platforms are held liable for their algorithms’ recommendations, they’d effectively be liable for third-party content all the time and may need to stop using algorithmic recommendations altogether to avoid litigation. This would mean an inferior consumer experience that makes it harder for us to find information and content relevant to us as individuals.

It would also mean more “shadow-banning” and censorship of controversial content, especially from human rights activists in countries with abusive governments, peaceful albeit fiery preachers of all faiths, or makers of violent films whose work has nothing to do with terrorism. Since it’s impossible to vet every submitted video for terrorism links even with a large moderation staff, platforms may have no choice but to tune their algorithms to block anything that could conceivably be terrorist propaganda.

Conservative free speech advocates who oppose big-tech censorship should be worried. When YouTube cracked down on violent content in 2007, activists documenting human rights abuses by Middle Eastern governments found themselves de-platformed. Things will only get worse if platforms are pressured to go further.

Holding platforms liable like this is also unnecessary, even if taking down more extremist content would reduce radicalization. Laws like the Digital Millennium Copyright Act already provide a notice-and-takedown process for specific illegal content, such as copyright infringement. A similar approach would be limited to user-submitted content already identified as unlawful, without pressuring platforms to over-remove content in general.

Combating terrorism and holding big tech accountable for genuine wrongdoing shouldn’t involve precedents or radical laws that make the internet less free and useful for us all.

