
An Illustration of Why Section 230 Should Be Preserved, Not Scrapped

Removing Section 230 would stifle engagement and interaction in the online realm.

Section 230 of the 1996 Communications Decency Act is currently being called into question by lawmakers, and this raises red flags for both producers and consumers within the online realm.

Section 230 states “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

If this protection were removed, any online sharing site (ranging from food blogs to Facebook pages to search engines to simulation games) could be held accountable for the online activity of its users and affiliates.

For example, the recent contested image posted from Jamie Lee Curtis’ account, deemed offensive by some and artistic by others, would leave Instagram liable for the content of that post.

And while Big Tech may be able to bulk up capacity to counter claims in such cases, social media startups and casual content creators had better beware.

It is not a matter of whether online activity should be moderated, but rather who does the moderating. If Section 230 protections were to be removed, this would discourage the creation of new social networking sites and create a mandate for an online surveillance state.

So, since some political officials believe online service providers should be held liable for suggestions, search results, and social feeds, and given that hearings on the Hill have exposed an ineptitude for most things tech-related, here is an illustration of how Section 230 plays out in an offline scenario.

Read the full text here

Biden Administration’s abandonment of Section 230 undermines tech innovation and will harm and disadvantage consumers

Washington, D.C. – Yesterday, lawyers from the Biden Administration filed an amicus brief in a Supreme Court case that will undermine future American tech innovation and inevitably harm and disadvantage online consumers.

In Gonzalez v. Google, the Supreme Court is asked to decide whether YouTube can be held liable for content on its platform, and more specifically for its algorithms. The plaintiffs argue that the algorithm that recommends content based on user preference is not covered by Section 230 of the Communications Decency Act, or other legislation, and that Google (YouTube’s parent company) can be held liable.

Such a ruling would have a sweeping impact on Internet freedom of speech and tech innovation based here in the U.S.

Yaël Ossowski, deputy director of the consumer advocacy group Consumer Choice Center, responds:

“In a global race to defend freedom and innovation online, it’s beyond disappointing to see the Biden Administration take a position that undermines Section 230, American digital entrepreneurship, and freedom of speech online,” said Ossowski.

“China and the EU are promoting and subsidizing their tech companies and future start-ups massively while our own officials are trying to kneecap them, whether by antitrust litigation by the Federal Trade Commission, Senate bills to break up tech firms, or general hostility to the growth and innovation that Section 230 has afforded to the benefit of consumers,” he said.

“The Biden Administration’s abandonment of Section 230 is concerning and puts much at risk for consumers online.

“The ability of digital entrepreneurs to offer unique and tailored services to consumers who enjoy them would be severely constrained if a Supreme Court ruling upends our modern understanding of the legal system’s protection of platforms online. Added to that, it threatens free speech on the Internet if platforms have an undue obligation to perform content moderation so as to avoid any and all legal liabilities posed by user-generated content.

“For the sake of consumers and American innovation, we hope that an eventual ruling protects the core of our freedom of speech and association online, and protects citizens’ choices to use the services they want. Thus far, the Biden Administration’s views leave us concerned that this is in peril,” he concluded.

Learn more about the Consumer Choice Center’s campaigns for smart policies on tech innovation.

If Brendan Carr is reconfirmed to the FCC, how will consumers fare?

CCC Managing Director Fred Roeder (left), FCC’s Brendan Carr (middle), CCC Deputy Director Yaël Ossowski (right)

On Monday, President Joe Biden re-nominated Brendan Carr to the Federal Communications Commission. For consumer advocates like us at the Consumer Choice Center who work on many issues related to tech innovation and the protection of our rights online, that’s welcome news.

Now, the US Senate must confirm Carr’s nomination. It would be a welcome opportunity to continue efforts to both support and defend consumer choice.

Throughout his tenure at the chief telecom regulator, Carr has chiseled out his space as a principled voice and worthy fighter for many consumer issues.

His dedication to the expansion of rural broadband access, smart investment in telecom and Internet infrastructure, and common-sense rules to help facilitate American ingenuity and entrepreneurship stand out as some major achievements.

Whether it was the repeal of Title II classification for Internet Service Providers (net neutrality), the protection of free speech, or his desire to address the influence of the Chinese Communist Party through TikTok and other platforms, Carr has never missed an opportunity to apply the evidence-based approach vital to policymaking.

We hope to continue working with Commissioner Carr in his new tenure despite some disagreements on the nuances of specific policies because we believe he is earnest, sincere, and willing to hear arguments and policy cases from all sides of the aisle. There will be many opportunities to ensure policies are in the interest of consumers.

Issues such as online free speech, upholding Section 230, and how best to avoid government interference in content moderation will prove to be pivotal issues in the next term, and it will be of great benefit to a wide spectrum of American consumers to have someone like Brendan Carr at the helm.

If US Senators confirm Carr for another tenure, we look forward to working together for smart policies to benefit consumers around the country.

Here is a clip of our conversation with FCC Commissioner Carr on Consumer Choice Radio:

Online Security Concerns Shouldn’t Enable a Surveillance State

At the 2012 London Olympics, Sir Tim Berners-Lee, creator of the World Wide Web, crafted the message “This Is For Everyone.” And at that time digitized opportunities felt limitless. Now, a little more than a decade later, that message might read “This is for Everyone – Pending Oversight and Approval.”

Indeed, tech accountability proposals and high-profile hearings with Silicon Valley’s finest were plentiful last year, and this year shows no signs of slowing down. Government officials of both parties have proven to have a never-ending interest in meddling in online anonymity, as the recently proposed RESTRICT Act shows.

RESTRICT stands for Restricting the Emergence of Security Threats that Risk Information and Communication Technology – the name says it all. 

Essentially, this act grants the Department of Commerce the authority to interfere with any user’s data and prosecute any activity based on any possibility of a threat; any congressional disapproval of such interference can only be brought after the fact. If this sounds out of proportion, read it for yourself.

While other legislative efforts, such as proposed reforms to Section 230, have (wrongly) made service providers and social media networks the target for regulation, the RESTRICT Act applies to everyone.

Under the RESTRICT Act, all internet-based interactions and transactions would be subject to surveillance and scrutiny, which is why some have dubbed the RESTRICT Act ‘the Patriot Act 2.0.’ Such an assertion, however, is too kind, since the ‘sneak and peek’ approaches that were allowed under the Patriot Act pale in comparison to the constant oversight of online affairs that the RESTRICT Act would enable.

It is also worth noting that the Patriot Act was set to expire in 2005 but, like many government programs, it has been preserved and currently lives on under the USA Freedom Act of 2015. And although the USA Freedom Act had a planned expiration date set for 2020, it is also still hanging on.

It seems unlikely the RESTRICT Act will gain any real traction given its extreme nature, but proposals like these act as prototypes or concept tests for what might come next – and stranger things have happened.

It was just a little over a year ago, for example, when the Biden Administration launched the Disinformation Governance Board, aka the ‘Ministry of Truth.’ Nina Jankowicz, the appointed ‘disinformation czar,’ went viral on TikTok with a revamped (and ridiculed) rendition of ‘Supercalifragilisticexpialidocious,’ and backlash quickly ensued as the board was evidently too Orwellian for the American public to stomach. 

The states are getting in on the act too. Take, for example, the Arkansas legislature’s recent passage of an “online youth safety” bill, which mirrors a law Utah passed last month.

Arkansas’s Social Media Safety Act, signed by Gov. Sanders, requires all online users to prove whether they are age-appropriate for certain platforms and content, which thereby necessitates the collection of biometric and personal data for ID verification. 

Any online anonymity or semblance of data privacy has been revoked by the state in the name of safeguarding children. Yaël Ossowski, deputy director of the consumer advocacy group Consumer Choice Center, rightly asserts that the government is now poised to be “the final arbiter of whether young people access the Internet at all.” 

Parental ability (and responsibility) to play a part in the digital lives of their children is being delegated to government bureaucrats, and it won’t be long until other state legislatures follow suit. Connecticut looks to be next.

What is truly disturbing about these laws is that they enable government overreach in areas where the market has already been providing solutions for online child safety. Concerns over data management and data access have made cybersecurity one of the fastest-growing markets, with lucrative positions for those studying to be information analysts and data scientists.

As it so happens, none other than Sir Tim Berners-Lee has launched a decentralization project to tackle data rights management. His is one of many initiatives that should be incentivized by user interests and left unencumbered by political interference.

Historical and empirical evidence proves that a decentralized economy leads to progress and prosperity, so we should enable our digital economy with the same approach. 

Originally published here

Lawsuit Against Google’s Algorithms Could End the Internet As We Know It

A lawsuit against Google seeks to hold tech giants and online media platforms liable for their algorithms’ recommendations of third-party content in the name of combating terrorism. A victory against Google wouldn’t make us safer, but it could drastically undermine the very functioning of the internet itself.

The Supreme Court case is Gonzalez v. Google. The plaintiffs are the family of Nohemi Gonzalez, an American tragically killed in a terror attack by ISIS. They are suing Google, YouTube’s parent company, for not doing enough to block ISIS from using its website to host recruitment videos while recommending such content to users via automated algorithms. They rely on antiterrorism laws allowing damages to be claimed from “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”

If this seems like a stretch, that’s because it is. It’s unclear whether videos hosted on YouTube directly led to any terror attack or whether any other influences were primarily responsible for radicalizing the perpetrators. Google already has policies against terrorist content and employs a moderation team to identify and remove it, although the process isn’t always immediate. Automated recommendations typically work by suggesting content similar to what users have viewed since it’s most likely to be interesting and relevant to them on a website that hosts millions of videos. 
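The recommendation mechanics described above, suggesting items similar to what a user has already viewed, can be sketched in a few lines of code. This is a toy illustration only; the catalog, tags, and similarity measure here are invented for demonstration and bear no relation to YouTube’s actual system:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two sparse tag-count vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, catalog, top_n=2):
    # Build a profile from the tags of videos the user has watched,
    # then rank unwatched videos by similarity to that profile.
    profile = Counter()
    for video in history:
        profile.update(catalog[video])
    scored = [
        (cosine(profile, Counter(tags)), video)
        for video, tags in catalog.items()
        if video not in history
    ]
    return [video for _, video in sorted(scored, reverse=True)[:top_n]]

catalog = {
    "cooking-101": ["food", "howto"],
    "knife-skills": ["food", "howto", "kitchen"],
    "guitar-intro": ["music", "howto"],
    "cat-video": ["pets", "funny"],
}

# A viewer of a cooking video is shown the most similar unwatched videos.
print(recommend(["cooking-101"], catalog))  # → ['knife-skills', 'guitar-intro']
```

Real systems operate at vastly larger scale with learned models, but the core idea, ranking unwatched items by similarity to a viewing profile, is the same functionality that the plaintiffs’ theory would expose to liability.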

Platforms are also shielded from liability for what their users post and are even permitted to engage in good-faith moderation, curation and filtration of third-party content without being branded publishers of it. This is thanks to Section 230, the law that has allowed for the rapid expansion of a free and open internet where millions of people a second can express themselves and interact in real time without tech giants having to monitor and vet everything they say. A lawsuit victory against Google will narrow the scope of Section 230 and the functionality of algorithms while forcing platforms to censor or police more.

Section 230 ensures that Google won’t be held liable for merely hosting user-submitted terrorist propaganda before it was identified and taken down. However, the proposition that these protections extend to algorithms that recommend terrorist content remains untested in court. But there’s no reason why they shouldn’t. The sheer volume of content hosted on platforms like YouTube means that automated algorithms for sorting, ranking and highlighting content in ways helpful to users are essential to the platforms’ functionality. They’re as important to user experience as hosting the content itself. 

If platforms are held liable for their algorithms’ recommendations, they’d effectively be liable for third-party content all the time and may need to stop using algorithmic recommendations altogether to avoid litigation. This would mean an inferior consumer experience that makes it harder for us to find information and content relevant to us as individuals.

It would also mean more “shadow-banning” and censorship of controversial content, especially when it comes to human rights activists in countries with abusive governments, peaceful albeit fiery preachers of all faiths, or violent filmmakers whose videos have nothing to do with terrorism. Since it’s impossible to vet each submitted video for terrorism links even with a large moderation staff, tooling algorithms to block content that could merely be terrorist propaganda may become necessary. 

Conservative free speech advocates who oppose big-tech censorship should be worried. When YouTube cracked down on violent content in 2007, it led to activists exposing human rights abuse by Middle Eastern governments being de-platformed. Things will get even worse if platforms are pressured to take things further.

Holding platforms liable like this is unnecessary, even if taking down more extremist content would reduce radicalization. Laws like the Digital Millennium Copyright Act provide a notice-and-takedown process for specific illegal content, such as copyright infringement. This approach is limited to user-submitted content already identified as illegal and would reduce pressure on platforms to remove more content in general.

Combating terrorism and holding big tech accountable for genuine wrongdoing shouldn’t involve precedents or radical laws that make the internet less free and useful for us all.

Originally published here

The best answer to TikTok is a forced divestiture 

As consumer advocates, we pride ourselves on standing for policies fit for growth, lifestyle freedom, and tech innovation.

In usual regulatory circumstances, that means protecting consumers’ platform and tech choices from the zealous hands of regulators and government officials who would otherwise seek to shred basic Internet protections and freedom of speech, as well as break up innovative tech companies. Think Section 230, government jawboning, and consequences of deplatforming.

As such, the antitrust crusades by select politicians and agency heads in the United States and Europe are of primary concern for consumer choice. We have written extensively about this, and better ways forward. Many of these platforms make mistakes and severe errors on content moderation, often in response to regulatory concerns. But that does not invite trust-busting politicians and regulators to meddle with companies that consumers value.

In the background of each of these legislative battles and proposals, however, there is a special example found in the Chinese-owned firm TikTok, today one of the most popular social apps on the planet. 

The Special Case of TikTok

Now owned by ByteDance, TikTok offers a similar user experience to Instagram Reels, Snapchat, or Twitter, but is supercharged by an algorithm that serves up short videos, enticing users with constant content that autoloads and scrolls by. Many social phenomena, dances, and memes propagate via TikTok.

In terms of tech innovation and its proprietary algorithm, TikTok is in a league of its own. There is a reason it is one of the most downloaded apps on mobile devices in virtually every market and language.

Researchers have already revealed that China’s own domestic version of TikTok, Douyin, restricts content for younger users. Instead of dances and memes, Douyin features science experiments, educational material, and time limits for underage users. TikTok, on the other hand, seems to have a souped-up algorithm with an ability to better attract, and hook, younger children.

What makes it special for consumer concern beyond the content, however, is its ownership, privacy policies, and far-too-cozy relationship with the leadership of the Chinese Communist Party, the same party that oversees concentration camps for its Muslim minority and repeatedly quashes human rights across its territories.

It has already been revealed that European users of TikTok can have, and have had, their data accessed by company officials in Beijing. And the same goes for US users. Considering the ownership location and structure, there isn’t much that can be done about this.

Unlike tech companies in liberal democracies, Chinese firms require direct corporate oversight and governance by Chinese Communist Party officials – often military personnel. In the context of a construction company or domestic news publisher, this doesn’t seemingly put consumers in liberal democracies at risk. But a popular tech app downloaded on the phones of hundreds of millions of users? That is a different story.

How best to address TikTok in a way that upholds liberal democratic values

Among liberal democracies, there are a myriad of opinions about how to approach the TikTok beast.

US FCC Commissioner Brendan Carr wants a total ban, much in line with Sen. Josh Hawley’s proposed ban in the U.S. Senate and U.S. Rep. Ken Buck’s similar ban in the House. But there are other ways that would be more in line with liberal democratic values.

One solution we would propose, much in line with the last US administration’s stance, would be a forced divestiture to a U.S.-based entity on national security grounds. This would mean a sale of US assets (or assets in liberal democracies) to an entity based in those countries that would be completely independent of any CCP influence.

In 2019-2020, when President Donald Trump floated this idea, a proposed buyer of TikTok’s U.S. assets would have been Microsoft, and later Oracle. But the deal fell through.

But this solution is not unique.

We have already seen such actions play out with vital companies in the healthcare space, including PatientsLikeMe, which uses sensitive, real-time medical data to connect patients and let them share information about their conditions and proposed treatments.

When the firm was flooded with investments from Chinese partners, the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) ruled that a forced divestiture would have to take place. The same has been applied to a Chinese ownership stake in Holu Hou Energy, a U.S.-subsidiary energy storage company.

In vital matters of energy and popular consumer technology controlled by elements of the Chinese Communist Party, a forced divestiture to a company regulated and overseen by regulators in liberal democratic nations seems to be the most prudent measure.

This has not yet been attempted for a wholly-owned foreign entity active in the US, but we can see why the same concerns apply.

An outright ban or restriction of an app would not pass constitutional muster in the US, and would have chilling effects for future innovation that would reverberate beyond consumer technology.

This is a controversial topic, and one that will require nuanced solutions. Whatever the outcome, we hope consumers will be better off, and that liberal democracies can agree on a common solution that continues to uphold our liberties and choices as consumers.

Yaël Ossowski is the deputy director of the Consumer Choice Center.

Why Democratic Control of the FCC Won’t Bode Well for Internet Freedom

By Yaël Ossowski

Late Tuesday afternoon, President Joe Biden revealed his nominations to the Federal Communications Commission.

As one would expect, his two nominations — Jessica Rosenworcel and Gigi Sohn — come from Democratic circles and have upheld progressive priorities for telecom policies.

Rosenworcel has been a commissioner since 2012 and served as acting chair since Ajit Pai left at the beginning of Biden’s term. She would be the first female chair of the FCC.

Sohn has been active in left-leaning nonprofits, but also worked as a counselor to former FCC chair Tom Wheeler. She has made a career in advocacy, government, and academia championing “open, affordable, and democratic communications networks,” according to the White House release.

What both nominees represent, if confirmed by the Senate, would be a return to a Democratic-majority FCC intent on revitalizing 2015-era “net neutrality” proposals. Activists are already celebrating a return to progressive policymaking at the nation’s telecom regulator.

While Biden’s nominations are no surprise — every president generally nominates commissioners from their own party — consumer advocates should be worried about the policy goals they will seek to pass.

Net Neutrality

The most pressing would be a reform of Title II regulations through “net neutrality,” effectively classifying Internet Service Providers as public utilities and, in essence, protected monopolies.

As I wrote in the Washington Examiner in 2017, the basic premise of net neutrality reforms is to regulate ISPs like water suppliers or telephone companies, subjecting them to more active enforcement, standards, and regulations set by the FCC, so that all online traffic be considered “neutral and free from prioritization”.

What’s more, a Title II classification would treat ISPs as monopolies, which even by the most strained definition cannot be true. There are close to 3,000 ISPs in the United States, all serving different populations and regions, though some players have larger coverage than others.

Sweeping these companies into the regulatory lens of the FCC under the auspices of public utilities would mean more restrictions and regulations on content and delivery of content on the Internet — a far cry from Internet freedom.

As a general principle for an open net, net neutrality is an important one. When internet providers have been accused of unfairly blocking or throttling consumers, they have rightfully been challenged by lawsuits and enforcement actions from the Federal Trade Commission. And we should generally want a system that won’t discriminate against Internet users based on the content they host or provide (we can also thank Section 230 for liability protections for online platforms).

However, since these regulations were proposed in 2014 under the Obama administration, there has never been a clear rationale provided as to why Internet companies should be regulated under the FCC rather than the FTC, as is the status quo. And from what we can tell, that change would likely impact consumers more than anyone.

For one, a public utility classification would mean far more sweeping, centralized power over Internet regulation than exists currently, putting the innovative nature of the Internet at risk.

Providers would be tasked with significant regulatory compliance that would necessitate more administrative costs and fees. This would also threaten the expansion of start-ups and independent companies in the digital space, souring the efforts at creative entrepreneurship. All would be harmful to consumers.

With every successive administration in Washington, we can only imagine that enforcement of the rules and changing of the rules would be enough to create regulatory uncertainty for thousands of online businesses and the users who depend on them.

Second, as our experience from the history of public utilities demonstrates, there would likely be intense consolidation that would empower large companies with the means to comply with regulations and stunt innovative new start-ups. It would also disincentivize increased private investment in broadband services, as we have written about at the Consumer Choice Center, and exacerbate the effects of Biden’s infrastructure proposal on public broadband if it passes this fall.

While consolidation of ISPs is a grave concern to progressive Internet activists, this would only be made worse once a giant bureaucracy such as the FCC is given regulatory authority over them. As my colleague Elizabeth Hicks noted in the Detroit Times, it is often state and local regulations that impede greater competition among ISPs, not lax authority at the federal level.

Online Privacy

Both Rosenworcel and Sohn have also indicated that they would support a proposal for greater Internet privacy enforced by the FCC. While that would be great on principle, we would hope that a federal plan would punish bad actors and establish clear guidelines to ensure transparency and protect innovation, as we proposed in our data and consumer privacy policy note.

However, Sohn’s previous public statements, including from her time as a fellow at the Open Society Foundations, demonstrate she’d want a wholesale restriction on the sharing of data, even among willing consumers and providers. That would put many vital services at risk.

What’s more, such a proposal would likely aim to further empower government enforcement on data privacy rather than embrace market innovations that already do just that.

Prices

Another significant area where a Democratic-majority FCC could seek action would be on the pricing of Internet services. Sohn has been quite vocal about fixing ISP prices and regulating the bundling of various services. This would undermine the competitive environment of ISPs and likely lead to lower quality and rationed services for users, degrading everyone’s Internet experience.

Sohn’s history at various nonprofit groups that have targeted and lobbied the FCC for more enforcement was indeed impactful, and it is not difficult to see how much of the outrage about net neutrality was due to these efforts. Unfortunately, this also coincided with serious death threats and security concerns for commissioners opposed to these plans.

If both nominees to the FCC are confirmed, it is clear that the battle for the open Internet will once again be relitigated. And if the past proposal is any indication, it will face significant opposition.

At the time of the original net neutrality rules, even the Electronic Frontier Foundation, seen as one of the most powerful Internet freedom groups, was skeptical about how far-reaching the net neutrality provisions were.

We can only imagine that now, buoyed by progressive victories on Capitol Hill and louder voices for regulating content and platforms on the Internet, these proposals will prove harmful to the interests of online users and consumers.

Yaël Ossowski is the deputy director of the Consumer Choice Center.

Facebook failures may be real, but the case for increased censorship is weak

Once the so-called Facebook whistleblower revealed her identity and story, it was only a matter of time before the public imagination of one of the largest social networking sites would go off the rails.

What Frances Haugen released to the Wall Street Journal in her initial leaks, which it dubbed the “Facebook Files,” detailed how Facebook had made decisions on which accounts to censor, survey data on Instagram use among teens, and the status of the civic integrity team tasked with countering misinformation around political topics.

Many of the revelations are fascinating, and some damning, but they point to a company bombarded with external and internal demands to censor accounts and pages that spread “misinformation” and “hateful” content. Who determines what that content is, and what qualifies as such, is another matter.

In the days since, Haugen has become a hero to critics of the social media giant on both the Right and the Left, animating these arguments before a Senate subcommittee on consumer protection on Tuesday.

It created the perfect theater for Washington lawmakers and media outlets, elevating conjecture, hyperbole, and feverish contempt for an online platform used by billions of users.

Congressional Republicans and Democrats are united in confronting Facebook, though they are animated by different reasons. Generally, Democrats say the platform does not censor enough content and want it to do more, evoking the “interference” in President Donald Trump’s 2016 victory. Republicans, on the other hand, believe the censorship is pointed in the wrong direction, often targeting conservative content creators, and would like to see more even-handedness.

“Facebook has caused and aggravated a lot of pain and profited off the spreading of disinformation, misinformation, and sowing hate,” said committee chairman Sen. Richard Blumenthal, who days before received ridicule for asking Instagram to ban the “finsta” program. (Finstas are fake Instagram accounts created by teenagers to avoid the prying eyes of parents.)

Facebook’s mistakes, especially when it comes to content moderation, are vast. I have joined countless others in pointing out the troubling examples of censorship that are all too often politically motivated. Considering it is a Silicon Valley firm staffed with tens of thousands of employees who likely lean left, it is not surprising.

But the incentive to censor content exists because of the huffing and puffing in Congress, whistleblowers like Haugen, and media pressure to conform to a narrow version of online free speech that has no parallel elsewhere.

Whether it is through the lens of antitrust, to break apart Facebook’s various divisions such as Instagram and WhatsApp, or by reforming Section 230 to make firms liable for all speech on their platforms, it is clear that heavy-handed social media regulation will have the greatest impact on users and generally make Facebook unbearable.

As much as some might like to castigate the unicorn start-up with tens of thousands of employees and a hefty stock price, it derives its power and influence as a platform for billions of individuals looking for connections.

A number of the posts on Facebook may be atrocious or wrong, and they deserve to be called out by those who see them. But in free societies, we prefer to debate bad ideas rather than relegate them to the darkened reaches of society, where they will only fester and grow unabated.

Expecting or forcing Facebook to ramp up censorship will make the platform a de facto arm of our federal agencies rather than a free platform for connecting with friends and family.

While there are many positive reforms that could be enacted in the wake of the Facebook moment – a national privacy and data law, for example – we know it will be the users of these platforms who ultimately suffer from misguided regulation.

If we believe in free speech and an open internet, it is our responsibility to advocate sane, smart, and effective rules on innovative technologies, not laws or edicts that only seek to punish and restrict what people can say online. We as users and citizens deserve better.

Originally published here

Dowden’s latest task? Regulating the internet. Here’s what Australia can teach us about that challenge.

Culture secretary Oliver Dowden finds himself burdened with an almighty task: regulating the internet. His new ‘Digital Markets Unit’, set to form part of the existing Competition and Markets Authority, will be the quango in charge of regulating the social media giants. Dowden, like the rest of us, is now trying to discern what can be learned by rummaging through the rubble left behind by the regulatory punch-up between Facebook and the Australian government over a new law forcing online platforms to pay news companies in order to host links to their content.

Google acquiesced immediately, agreeing to government-mandated negotiations with news producers. But Facebook looked ready to put up a fight, following through on its threat to axe all news content from its Australian services. It wasn’t long, though, before Mark Zuckerberg backed down, unblocked the Facebook pages of Australian newspapers and, through gritted teeth, agreed to set up a direct debit to Rupert Murdoch.

The drama down under has been met with a mixed response around the world, but it is broadly consistent with the trend of governments shifting towards more and more harmful and intrusive interference in the technology sector, directly undermining consumers’ interests and lining Murdoch’s pockets. The EU, for one, is keen to get stuck in, disregarding the status quo and unveiling its ambitious plan to keep tabs on the tech giants.

In the US, the situation is rather different. Some conspiracy theorists – the type who continue to believe that Donald Trump is the rightful president of the United States – like to allege that the infamous Section 230, the item of US legislation which effectively regulates social media there, was crafted in cahoots with big tech lobbyists as a favour to bigwigs at Facebook, Google, Twitter, and so on. In reality, Section 230 was passed as part of the Communications Decency Act in 1996, long before any of those companies existed.

Wildly overhyped by many as a grand DC-Silicon Valley conspiracy to shut down the right’s online presence, Section 230 is actually very short and very simple. It is, in fact, just 26 words long: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Not only is this a good starting point from which to go about regulating the internet – it is the only workable starting point. If the opposite were true – if platforms were treated as publishers and held liable for the content posted by their users – competition would suffer immensely. Incumbent giants like Facebook would have no problem employing a small army of content moderators to insulate themselves, solidifying their position at the top of the food chain. Meanwhile, smaller companies – the Zuckerbergs of tomorrow – would be unable to keep up, resulting in a grinding halt to innovation and competition.

Another unintended consequence – a clear theme when it comes to undue government meddling in complex matters – would be that vibrant online spaces would quickly become unusable as companies scramble to moderate platforms to within an inch of their lives in order to inoculate themselves against legal peril.

Even with the protections currently in place, it is plain how awful platforms are at moderating content. There are thousands of examples of well-intentioned moderation gone wrong. In January, the Entrepreneurs Network’s Sam Dumitriu found himself plonked in Twitter jail for a tweet containing the words “vaccine” and “microchip” in an attempt to call out a NIMBY’s faulty logic. Abandoning the fundamental Section 230 provision would only make this problem much, much worse by forcing platforms to moderate much more aggressively than they already do.

Centralisation of policy in this area fails consistently whether it comes from governments or the private sector because it is necessarily arbitrary and prone to human error. When Facebook tried to block Australian news outlets, it also accidentally barred the UK-based output of Sky News and the Telegraph, both of which have Australian namesakes. State-sanctioned centralisation of policy, though, is all the more dangerous, especially now that governments seem content to tear up the rulebook and run riot over the norms of the industry almost at random, resulting in interventions which are both ineffectual and harmful.

The Australian intervention in the market is so arbitrary that it could easily have been the other way around: forcing News Corp to pay Facebook for the privilege of having its content shared freely by people all over the world. Perhaps the policy would even make more sense that way round. If someone was offering news outlets a promotional package with a reach comparable to Facebook’s usership, the value of that package on the ad market would be enormous.

Making people pay to have their links shared makes no sense at all. Never in the history of the internet has anybody had to pay to share a link. In fact, the way the internet works is precisely the opposite: individuals and companies regularly fork out large sums of money in order to put their links on more people’s screens.

If you’d said to a newspaper editor twenty years ago that they would soon have free access to virtual networks where worldwide promotion of their content would be powered by organic sharing, they would have leapt for joy. A regulator coming along and decreeing that the provider of that free service now owes money to the newspaper editor is patently ludicrous.

That is not to say, however, that there is no role for a regulator to play. But whether or not the Digital Markets Unit will manage to avoid the minefield of over-regulation remains to be seen. As things stand, there is a very real danger that we might slip down that road. Matt Hancock enthusiastically endorsed the Australian government’s approach, and Oliver Dowden has reportedly been chatting with his counterparts down under about this topic.

The drumbeat of discourse over this policy area was already growing, but the Australia-Facebook debacle has ignited it. The stars have aligned such that 2021 is the long-awaited point when the world’s governments finally attempt to reckon with the tech behemoths. From the US to Brussels, from Australia to the Baltics, the amount of attention being paid to this issue is booming.

As UK government policy begins to take shape, expect to see fronts forming between different factions within the Conservative Party on this issue. When it comes to material consequences in Britain, it is not yet clear what all this will mean. The Digital Markets Unit could yet be a hero or a villain.

Originally published here.

Latest round of online deplatforming shows why we need increased competition and decentralization

Another week brings another politically charged rampage of deplatforming, targeting social media profiles and entire social media networks.

Following the storming of the U.S. Capitol by some of his supporters, President Trump was promptly suspended from Twitter and Facebook and later dozens of Internet services including Shopify and Twitch.

Even the image-sharing site Pinterest, famous for recipes and DIY project presentations, has banned Trump and any mention of contesting the 2020 Election. He’ll have to go without sourdough recipes and needlework templates once he’s out of office.

Beyond Trump, entire social media networks have also been put in the crosshairs following the troubling incursion on Capitol Hill. The conservative platform Parler, a refuge for social media dissidents, has since had its app pulled from the Google and Apple app stores and its hosting suspended by Amazon Web Services (AWS).

This pattern of removing unsavory profiles or websites isn’t just a 2021 phenomenon. The whistleblower website Wikileaks – whose founder Julian Assange remains in prison without bail in the UK awaiting extradition to the United States – was similarly removed from Amazon’s servers in 2010, as well as blacklisted by Visa, Mastercard, PayPal, and its DNS provider. Documents reveal that both public and private pressure from then-U.S. Senator and Intelligence Committee Chairman Joe Lieberman was instrumental in choking Wikileaks off from these services.

Then it was politicians pressuring companies to silence a private organization. Now, it’s private organizations urging companies to silence politicians.

However the pendulum swings, it’s entirely reasonable for companies that provide services to consumers and institutions to respond quickly to avoid risk. Whether it’s due to governmental decree or public backlash, firms must respond to incentives that ensure their success and survival.

Whether it’s Facebook, Twitter, Gab, or Parler, they can only exist and thrive if they fulfill the wishes and demands of their users, and increasingly to the political and social pressures placed on them by a cacophony of powerful forces.

It’s an impossible tightrope.

It is clear that many of these companies have and will continue to make bad business decisions based on either politics or perception of bias. They are far from perfect.

The only true way we can ensure a healthy balance of information and services provided by these companies to their consumers is by promoting competition and decentralization.

Having diverse alternative services to host servers, provide social networks, and allow people to communicate remains in the best interest of all users and consumers.

Such a mantra is difficult to hold in today’s hostile ideological battleground, inflamed by Silicon Valley, Washington, and hostile actors in Beijing and Moscow, but it is necessary.

In the realm of policy, we should be wary of proposed solutions that aim to cut off some services at the expense of others.

Repealing Section 230 of the Communications Decency Act, for example, would be incredibly harmful to users and firms alike. If platforms became legally liable for user content, innovative tech companies would essentially turn into risk-averse insurance companies that occasionally offer data services. That would be terrible for innovation and user experience.

And considering the politically charged nature of our current discourse, anyone could find a reason to cancel you or an organization you hold dear – meaning you’re more at risk of being deplatformed.

At the same time, axing Section 230 would empower large firms and institutions that already have the resources to manage content policing and legal issues at scale, locking out many start-ups and aspiring competitors who otherwise would have been able to thrive.

When we think of the towering power of Big Tech and Big Government, several things can be true at the same time. It can be a bad idea to use antitrust law to break up tech firms, since doing so would deprive consumers of choice, even as these companies are guilty of making bad business decisions that hurt their user base. How we respond will determine how consumers can continue to use online services going forward.

All the while, every individual Internet user and organization has it in their power to use competitive and diverse services. Anyone can spin up an instance of Mastodon, a decentralized micro-blogging service (as I have done), host a private web server on a Raspberry Pi (coming soon), or accept Bitcoin rather than credit cards.

Thanks to competition and innovation, we have consumer choice. The question, though, is whether we’re courageous enough to use it.

Yaël Ossowski is deputy director at the Consumer Choice Center.
