
An Illustration of Why Section 230 Should Be Preserved, Not Scrapped

Removing Section 230 would stifle engagement and interaction in the online realm.

Section 230 of the Communications Decency Act of 1996 is currently being called into question by lawmakers, and this raises red flags for both producers and consumers online.

Section 230 states “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

If this protection were removed, any online sharing site (ranging from food blogs to Facebook pages to search engines to simulation games) could be held accountable for the online activity of users and affiliates.

For example, the recent contested image posted from Jamie Lee Curtis's account, deemed offensive by some and artistic by others, would make Instagram liable for the picture featured in the post.

And while Big Tech may be able to bulk up its capacity to counter claims in such cases, social media startups and casual content creators had better beware.

It is not a matter of whether online activity should be moderated, but rather who does the moderating. If Section 230 protections were to be removed, this would discourage the creation of new social networking sites and create a mandate for an online surveillance state.

So, since some political officials believe online service providers should be held liable for suggestions, search results, and social feeds, and given that hearings on the Hill have exposed an ineptitude for most things tech-related, here is an illustration of how Section 230 plays out in an offline scenario.

Read the full text here

Biden Administration’s abandonment of Section 230 undermines tech innovation and will harm and disadvantage consumers

Washington, D.C. – Yesterday, lawyers from the Biden Administration filed an amicus brief in a Supreme Court case, taking a position that would undermine future American tech innovation and inevitably harm and disadvantage online consumers.

In Gonzalez v. Google, the Supreme Court is being asked to decide whether YouTube can be held liable for content on its platform, and more specifically for its algorithms. The plaintiffs argue that the algorithm that recommends content based on user preference is not covered by Section 230 of the Communications Decency Act, or other legislation, and that Google (YouTube’s parent company) can be held liable.

Such a ruling would have a sweeping impact on Internet freedom of speech and tech innovation based here in the U.S.

Yaël Ossowski, deputy director of the consumer advocacy group Consumer Choice Center, responds:

“In a global race to defend freedom and innovation online, it’s beyond disappointing to see the Biden Administration take a position that undermines Section 230, American digital entrepreneurship, and freedom of speech online,” said Ossowski.

“China and the EU are promoting and subsidizing their tech companies and future start-ups massively while our own officials are trying to kneecap them, whether by antitrust litigation by the Federal Trade Commission, Senate bills to break up tech firms, or general hostility to the growth and innovation that Section 230 has afforded to the benefit of consumers,” he said.

“The Biden Administration’s abandonment of Section 230 is concerning and puts much at risk for consumers online.

“The ability of digital entrepreneurs to offer unique and tailored services to consumers who enjoy them would be severely constrained if a Supreme Court ruling upends our modern understanding of the legal system’s protection of platforms online. Added to that, it threatens free speech on the Internet if platforms have an undue obligation to perform content moderation so as to avoid any and all legal liabilities posed by user-generated content.

“For the sake of consumers and American innovation, we hope that an eventual ruling protects the core of our freedom of speech and association online, and protects citizens’ choices to use the services they want. Thus far, the Biden Administration’s views leave us concerned that this is in peril,” he concluded.

Learn more about the Consumer Choice Center’s campaigns for smart policies on tech innovation.

AMERICA’S INTERNET GIANTS IN THE EU’S CROSSHAIRS

Europe has chosen not to become the world’s market for innovative products and services, preferring instead to become the ultimate playground for bureaucratic restrictions.

Recently, the European Commissioner for the Internal Market traveled to San Francisco with a sizable delegation of bureaucrats. His mission: to take on America’s big tech companies head-on.

Thierry Breton, a former CEO of France Télécom and Atos, among others, and a former Minister of the Economy under Jacques Chirac, holds an important role within the EU’s executive body: overseeing trade in the European single market system, which comprises nearly 500 million consumers and citizens. That role gives him considerable power. What other European politician could arrange meetings with Elon Musk, Mark Zuckerberg, and Sam Altman in a single day?

Although Breton’s mandate is rather broad, covering everything from broadband to online platforms to climate change, his goal in San Francisco was to meet American tech giants and CEOs to prepare them for the imminent enforcement of the Digital Services Act (DSA), a sweeping EU law intended to create a “safer digital space” for Europeans. The law comes into force at the end of this August and will impose dozens of new obligations on internet companies that wish to serve users in the European Union.

The DSA could be described as Europe’s regulatory model for Big Tech and the internet. The only problem is that only a tiny fraction of the companies the Digital Services Act targets for restrictions or regulation are based in the EU. Of the 17 companies the law designates as “Very Large Online Platforms”, meaning they will be subject to the most stringent rules and regulations, only one is based in Europe: Zalando, an online fashion retailer.

Liability for everyone else

The rest come mainly from… you guessed it… the United States. They include firms such as Meta, Twitter, Google, Snapchat, and Amazon, but also Chinese companies such as TikTok and Alibaba.

The DSA implements a litany of sweeping restrictions and rules that go far beyond any US regulation of these groups: severe limits on targeted advertising, more diligent content moderation to remove what the EU deems “illegal” content, protocols for weeding out “disinformation”, and much more.

Considering how much Big Tech has already been pressured to censor users to appease regulators in the United States, things will only get worse overseas. While the DSA’s principal aims are well-intentioned, safeguarding consumer privacy and protecting minors, how these provisions are enforced or interpreted should concern all of us who believe in an open web.

To begin, platforms are made liable for both disinformation and illegal content.

In the United States, Section 230 of the Communications Decency Act exempts platforms from liability for their users’ posts. In Europe, every major online platform will be forced to instantly police its users or face severe penalties, all while grappling with impossible questions. Will platforms decide what counts as disinformation, or will governments provide examples? What happens when a government gets it wrong, as in the early days of COVID? Or when it has more malicious intent, as in unfree surveillance societies?

“Regulate first, innovate later”

With no First Amendment-style protection for free speech on the European continent, we know the censorship demands of European officials will soon swallow entire tech-company budgets on compliance, money that would otherwise be used to deliver value to users. Will it be worth it? Meta’s new social media platform, Threads, has not launched in Europe, most likely because the company cannot be certain it will not be hit with strict rules it is unable to comply with.

We know every platform has the ability to moderate or censor as it sees fit, but this is usually done through internal policies and codes that users voluntarily accept, not in reaction to a policeman wielding the regulatory baton. Rather than focusing on restricting and limiting American tech firms, Europeans should be doing everything possible to change their own rules to foster the kind of innovation Silicon Valley has been able to deliver for decades.

The mindset promulgated by Brussels is “regulate first, innovate later”, in the hope that talent and ideas will spring from a stable, regulated environment. If that were the case, we would have dozens of European tech unicorns vying for global dominance. Instead, there are hardly any. Or they have been bought up by an American company.

Europe has chosen not to become the world’s test market for innovative products and services, opting instead to be the ultimate playground for bureaucratic and legal restrictions. While some American politicians and regulators may look on with a gleeful eye, it is clear that consumers and creators are being left behind on the Old Continent, and American users will soon be in the crosshairs.

Originally published here

The EU’s ‘regulate first, innovate later’ mantra will sink U.S. tech firms

Last week, a bespectacled white-haired Frenchman strolled the streets of San Francisco in between high-profile meetings and uncomfortable photo ops.

With his horn-rimmed round glasses, wavy hair, and tailored suit, as well as a full entourage of slickly-dressed Europeans, the European Union Commissioner for the Internal Market, Thierry Breton, made his rounds in Silicon Valley.

Breton’s powerful role within the EU’s executive body is to oversee trade in Europe’s single market system, comprising nearly 500 million consumers and citizens. It makes him tremendously powerful. What other European politician could secure meetings with Elon Musk, Mark Zuckerberg, and Sam Altman in just one day?

While the mandate for Breton’s role is rather large — everything from broadband to online platforms, and climate change — his goal in San Francisco was to meet with US tech titans and CEOs to prepare them for the imminent enforcement of the Digital Services Act (DSA), an all-encompassing EU law intended to create a “safer digital space” for Europeans.

The law will come into force at the end of August and lay dozens of new obligations on internet companies that wish to serve users in the European bloc.

The DSA could best be described as Europe’s regulatory model for Big Tech and the Internet. The only problem? Only a sliver of the companies the Digital Services Act targets for restrictions or regulations are even based in the EU.

Out of the 17 companies designated “Very Large Online Platforms” by the law — meaning they will be held to the highest burden of regulation and rules — only one is based somewhere in Europe: Zalando, an online fashion retailer.

The rest are from…you guessed it…the United States. This includes firms such as Meta, Twitter, Google, Snapchat, and Amazon, but also Chinese firms such as TikTok and Alibaba.

The DSA enforces a litany of expansive restrictions and rules that go far beyond any US regulation: severe limits on targeted advertising, more diligent content moderation to remove what the EU deems “illegal” content, protocols for weeding out “disinformation”, and more.

Considering how much Big Tech has been forced to censor users to appease regulators in the free speech haven of the US, it will only get worse overseas.

While the principal aims of the DSA are well-intended — safeguarding consumer privacy and protecting minors — how these provisions are enforced or interpreted should concern all of us who believe in an open web.

To begin, there is platform liability attached to both disinformation and illegal content. In the US, we have Section 230, which exempts platforms from being liable for users’ posts. In Europe, every major online platform would be forced to instantly police its users or face severe penalties while still being weighed down by impossible questions.

Do platforms decide what is disinformation or will governments provide examples? What if a government gets it wrong, like in the early days of COVID? Or has more malicious intent like in unfree surveillance societies?

With no First Amendment-like protections for speech on the European continent, we know the censorious demands of European officials will soon swallow entire budgets of tech firms in order to comply, money that would otherwise be used to deliver value for users. Will it all be worth it?

We know that each platform has the ability to moderate or censor as it sees fit, but this is usually done through internal policies and codes that users voluntarily accept, not in reaction to a policeman holding the regulatory baton. Rather than focusing on restricting and limiting American tech firms, the Europeans should be doing everything possible to change their own rules in order to foster the innovation that Silicon Valley has been able to provide for decades.

The mindset promulgated from Brussels is “regulate first, innovate later,” in hopes that the talent and ideas will spring from a stable, regulated environment. If that were the case, we’d have dozens of European tech unicorns vying for global dominance. Instead, there are barely any. Or they’ve been bought up by an American company.

Europe has chosen to forgo becoming the world’s test market for innovative products and services, opting instead to be the ultimate playground of bureaucratic and legal restrictions. While some American politicians and regulators may look over with a gleeful eye, it is clear that consumers and creators are getting left behind on the Old Continent, and American users will soon be in the crosshairs.

Originally published here

If Brendan Carr is reconfirmed to the FCC, how will consumers fare?

CCC Managing Director, Fred Roder (left), FCC’s Brendan Carr (middle), CCC Deputy Director Yaël Ossowski (right)

On Monday, President Joe Biden re-nominated Brendan Carr to the Federal Communications Commission. For consumer advocates like us at the Consumer Choice Center who work on many issues related to tech innovation and the protection of our rights online, that’s welcome news.

Now, the US Senate must confirm Carr’s nomination. It would be a welcome opportunity to continue efforts to both support and defend consumer choice.

Throughout his tenure at the chief telecom regulator, Carr has chiseled out his space as a principled voice and worthy fighter on many consumer issues.

His dedication to the expansion of rural broadband access, smart investment in telecom and Internet infrastructure, and common-sense rules to help facilitate American ingenuity and entrepreneurship stand out as some major achievements.

Whether it was the repeal of Title II classification for Internet Service Providers (net neutrality), the protection of free speech, or his desire to address the influence of the Chinese Communist Party through TikTok and other platforms, Carr has never missed an opportunity to apply the evidence-based approach vital to policymaking.

We hope to continue working with Commissioner Carr in his new tenure despite some disagreements on the nuances of specific policies because we believe he is earnest, sincere, and willing to hear arguments and policy cases from all sides of the aisle. There will be many opportunities to ensure policies are in the interest of consumers.

Issues such as online free speech, upholding Section 230, and how best to avoid government interference in content moderation will prove to be pivotal issues in the next term, and it will be of great benefit to a wide spectrum of American consumers to have someone like Brendan Carr at the helm.

If US Senators confirm Carr for another tenure, we look forward to working together for smart policies to benefit consumers around the country.

Here is a clip of our conversation with FCC Commissioner Carr on Consumer Choice Radio:

Online Security Concerns Shouldn’t Enable a Surveillance State

At the 2012 London Olympics, Sir Tim Berners-Lee, creator of the World Wide Web, crafted the message “This Is For Everyone.” At that time, digital opportunities felt limitless. Now, a little more than a decade later, that message might read “This is for Everyone – Pending Oversight and Approval.”

Indeed, tech accountability proposals and high-profile hearings with Silicon Valley’s finest were plentiful last year, and this year shows no signs of slowing down. Government officials of both parties have proven to have a never-ending interest in meddling in online anonymity, as the recently proposed RESTRICT Act shows.

RESTRICT stands for Restricting the Emergence of Security Threats that Risk Information and Communication Technology – the name says it all. 

Essentially, this act grants the Department of Commerce the authority to interfere with any data of any user and prosecute any activity based on any possibility of a threat, and Congress can only voice disapproval of such interference after the fact. If this sounds out of proportion, read it for yourself.

While other legislative proposals, such as those targeting Section 230, have (wrongly) placed service providers and social media networks in the crosshairs of regulation, the RESTRICT Act applies to everyone.

Under the RESTRICT Act, all internet-based interactions and transactions would be subject to surveillance and scrutiny, which is why some have dubbed the RESTRICT Act to be ‘the Patriot Act 2.0.’ Such an assertion, however, is too kind, since the ‘sneak and peek’ approaches that were allowed under the Patriot Act pale in comparison to the constant oversight of online affairs that the RESTRICT Act would enable.

It is also worth noting that the Patriot Act was set to expire in 2005 but, like many government programs, it has been preserved and currently lives on under the USA Freedom Act of 2015. And although the USA Freedom Act had a planned expiration date set for 2020, it is also still hanging on.

It seems unlikely the RESTRICT Act will gain any real traction given its extreme nature, but proposals like these act as prototypes or concept tests for what might come next – and stranger things have happened.

It was just a little over a year ago, for example, when the Biden Administration launched the Disinformation Governance Board, aka the ‘Ministry of Truth.’ Nina Jankowicz, the appointed ‘disinformation czar,’ went viral on TikTok with a revamped (and ridiculed) rendition of ‘Supercalifragilisticexpialidocious,’ and backlash quickly ensued as the board was evidently too Orwellian for the American public to stomach. 

The states are getting in on the act too. Take, for example, the Arkansas legislature’s recent passage of an “online youth safety” bill, which mirrors a law Utah passed last month.

Arkansas’s Social Media Safety Act, signed by Gov. Sanders, requires online users to prove they meet the age requirements for certain platforms and content, which necessitates the collection of biometric and personal data for ID verification.

Any online anonymity or semblance of data privacy has been revoked by the state in the name of safeguarding children. Yaël Ossowski, deputy director of the consumer advocacy group Consumer Choice Center, rightly asserts that the government is now poised to be “the final arbiter of whether young people access the Internet at all.” 

Parental ability (and responsibility) to play a part in the digital lives of their children is being delegated to government bureaucrats, and it won’t be long until other state legislatures follow suit. Connecticut looks to be next.

What is truly disturbing about these laws is that they enable government overreach in areas where the market has already been providing solutions for online child safety. Concerns over data management and data access have made cybersecurity one of the fastest-growing markets, with lucrative positions for those studying to be information analysts and data scientists.

As it so happens, none other than Sir Tim Berners-Lee has launched a decentralization project to tackle data rights management. His is one of many initiatives that should be incentivized by user interests and left unencumbered by political interference.

Historical and empirical evidence proves that a decentralized economy leads to progress and prosperity, so we should enable our digital economy with the same approach. 

Originally published here

Lawsuit Against Google’s Algorithms Could End the Internet As We Know It

A lawsuit against Google seeks to hold tech giants and online media platforms liable for their algorithms’ recommendations of third-party content in the name of combating terrorism. A victory against Google wouldn’t make us safer, but it could drastically undermine the very functioning of the internet itself.

The Supreme Court case is Gonzalez v. Google. The plaintiffs are the family of Nohemi Gonzalez, an American tragically killed in a terror attack by ISIS. They are suing Google, YouTube’s parent company, for not doing enough to block ISIS from using its website to host recruitment videos while recommending such content to users via automated algorithms. They rely on antiterrorism laws allowing damages to be claimed from “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”

If this seems like a stretch, that’s because it is. It’s unclear whether videos hosted on YouTube directly led to any terror attack or whether any other influences were primarily responsible for radicalizing the perpetrators. Google already has policies against terrorist content and employs a moderation team to identify and remove it, although the process isn’t always immediate. Automated recommendations typically work by suggesting content similar to what users have viewed since it’s most likely to be interesting and relevant to them on a website that hosts millions of videos. 
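The kind of recommendation logic described above, suggesting items similar to what a user has already viewed, can be sketched in a few lines. This is only an illustrative content-based filter under assumed inputs (the video IDs, tags, and scoring scheme are invented for the example), not a description of YouTube's actual system:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(watch_history, catalog, top_n=2):
    """Rank unwatched catalog videos by similarity to the user's history."""
    # Build a "taste profile" by pooling tags from everything watched.
    profile = Counter()
    for video in watch_history:
        profile.update(video["tags"])
    watched_ids = {v["id"] for v in watch_history}
    scored = [
        (cosine_similarity(profile, Counter(v["tags"])), v["id"])
        for v in catalog
        if v["id"] not in watched_ids
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [vid for _, vid in scored[:top_n]]

history = [{"id": "v1", "tags": ["cooking", "baking"]}]
catalog = [
    {"id": "v2", "tags": ["cooking", "pasta"]},
    {"id": "v3", "tags": ["gaming"]},
    {"id": "v4", "tags": ["baking", "dessert"]},
]
print(recommend(history, catalog))  # cooking/baking videos outrank the gaming one
```

The point the sketch makes is that the ranking is purely mechanical: the same similarity math that surfaces baking videos for a baking fan would surface extremist clips for someone who had already watched them, which is exactly the behavior at issue in the case.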

Platforms are also shielded from liability for what their users post and are even permitted to engage in good-faith moderation, curation and filtration of third-party content without being branded publishers of it. This is thanks to Section 230, the law that has allowed for the rapid expansion of a free and open internet where millions of people a second can express themselves and interact in real time without tech giants having to monitor and vet everything they say. A lawsuit victory against Google will narrow the scope of Section 230 and the functionality of algorithms while forcing platforms to censor or police more.

Section 230 ensures that Google won’t be held liable for merely hosting user-submitted terrorist propaganda before it was identified and taken down. However, the proposition that these protections extend to algorithms that recommend terrorist content remains untested in court. But there’s no reason why they shouldn’t. The sheer volume of content hosted on platforms like YouTube means that automated algorithms for sorting, ranking and highlighting content in ways helpful to users are essential to the platforms’ functionality. They’re as important to user experience as hosting the content itself. 

If platforms are held liable for their algorithms’ recommendations, they’d effectively be liable for third-party content all the time and may need to stop using algorithmic recommendations altogether to avoid litigation. This would mean an inferior consumer experience that makes it harder for us to find information and content relevant to us as individuals.

It would also mean more “shadow-banning” and censorship of controversial content, especially when it comes to human rights activists in countries with abusive governments, peaceful albeit fiery preachers of all faiths, or violent filmmakers whose videos have nothing to do with terrorism. Since it’s impossible to vet each submitted video for terrorism links even with a large moderation staff, tooling algorithms to block content that could merely be terrorist propaganda may become necessary. 

Conservative free speech advocates who oppose big-tech censorship should be worried. When YouTube cracked down on violent content in 2007, it led to activists exposing human rights abuse by Middle Eastern governments being de-platformed. Things will get even worse if platforms are pressured to take things further.

Holding platforms liable like this is unnecessary, even if taking down more extremist content would reduce radicalization. Laws like the Digital Millennium Copyright Act provide a notice-and-takedown process for specific illegal content, such as copyright infringement. This approach is limited to user-submitted content already identified as illegal and would reduce pressure on platforms to remove more content in general.

Combating terrorism and holding big tech accountable for genuine wrongdoing shouldn’t involve precedents or radical laws that make the internet less free and useful for us all.

Originally published here

The best answer to TikTok is a forced divestiture 

As consumer advocates, we pride ourselves on standing for policies that promote growth, lifestyle freedom, and tech innovation.

In usual regulatory circumstances, that means protecting consumers’ platform and tech choices  from the zealous hands of regulators and government officials who would otherwise seek to shred basic Internet protections and freedom of speech, as well as break up innovative tech companies. Think Section 230, government jawboning, and consequences of deplatforming.

As such, the antitrust crusades by select politicians and agency heads in the United States and Europe are of primary concern for consumer choice. We have written extensively about this, and better ways forward. Many of these platforms make mistakes and severe errors on content moderation, often in response to regulatory concerns. But that does not invite trust-busting politicians and regulators to meddle with companies that consumers value.

In the background of each of these legislative battles and proposals, however, there is a special example found in the Chinese-owned firm TikTok, today one of the most popular social apps on the planet. 

The Special Case of TikTok

Owned by ByteDance, TikTok offers a similar user experience to Instagram Reels, Snapchat, or Twitter, but is supercharged by an algorithm that serves up short videos, enticing users with constant content that autoloads and scrolls by. Many social phenomena, dances, and memes propagate via TikTok.

In terms of tech innovation and its proprietary algorithm, TikTok is anything but a dime a dozen. There is a reason it is one of the most downloaded apps on mobile devices in virtually every market and language.

Researchers have already revealed that China’s own domestic version of TikTok, Douyin, restricts content for younger users. Instead of dances and memes, Douyin features science experiments, educational material, and time limits for underage users. TikTok, on the other hand, seems to have a souped-up algorithm with an ability to better attract, and hook, younger children.

What makes it special for consumer concern beyond the content, however, is its ownership, privacy policies, and far-too-cozy relationship with the leadership of the Chinese Communist Party, the same party that oversees concentration camps of its Muslim minority and repeatedly quashes human rights across its territories.

It has already been revealed that European users of TikTok can have, and have had, their data accessed by company officials in Beijing. The same goes for US users. Given the company’s ownership location and structure, there isn’t much that can be done about this.

Unlike tech companies in liberal democracies, Chinese firms require direct corporate oversight and governance by Chinese Communist Party officials – often military personnel. In the context of a construction company or domestic news publisher, this doesn’t seemingly put consumers in liberal democracies at risk. But a popular tech app downloaded on the phones of hundreds of millions of users? That is a different story.

How best to address TikTok in a way that upholds liberal democratic values

Among liberal democracies, there are a myriad of opinions about how to approach the TikTok beast.

US FCC Commissioner Brendan Carr wants a total ban, much in line with Sen. Josh Hawley’s proposed ban in the U.S. Senate and U.S. Rep. Ken Buck’s similar ban in the House. But there are other ways that would be more in line with liberal democratic values.

One solution we would propose, much in line with the last US administration’s stance, would be a forced divestiture to a U.S.-based entity on national security grounds. This would mean a sale of US assets (or assets in liberal democracies) to an entity based in those countries that would be completely independent of any CCP influence.

In 2019-2020, when President Donald Trump floated this idea, a proposed buyer of TikTok’s U.S. assets would have been Microsoft, and later Oracle. But the deal fell through.

But this solution is not unique.

We have already seen such actions play out with vital companies in the healthcare space, including PatientsLikeMe, which uses sensitive medical data and real-time data to connect patients about their conditions and proposed treatments. 

When the firm was flooded with investments from Chinese partners, the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) ruled that a forced divestiture would have to take place. The same has been applied to a Chinese ownership stake in Holu Hou Energy, a U.S.-subsidiary energy storage company.

In vital matters of energy and popular consumer technology controlled by elements of the Chinese Communist Party, a forced divestiture to a company regulated and overseen by regulators in liberal democratic nations seems to be the most prudent measure.

This has not yet been attempted for a wholly-owned foreign entity active in the US, but we can see why the same concerns apply.

An outright ban or restriction of an app would not pass constitutional muster in the US, and would have chilling effects for future innovation that would reverberate beyond consumer technology.

This is a controversial topic, and one that will require nuanced solutions. Whatever the outcome, we hope consumers will be better off, and that liberal democracies can agree on a common solution that continues to uphold our liberties and choices as consumers.

Yaël Ossowski is the deputy director of the Consumer Choice Center.

Why Democratic Control of the FCC Won’t Bode Well for Internet Freedom

By Yaël Ossowski

Late Tuesday afternoon, President Joe Biden revealed his nominations to the Federal Communications Commission.

As one would expect, his two nominations — Jessica Rosenworcel and Gigi Sohn — come from Democratic circles and have upheld progressive priorities for telecom policies.

Rosenworcel has been a commissioner since 2012 and served as acting chair since Ajit Pai left at the beginning of Biden’s term. She would be the first female chair of the FCC.

Sohn has been active in left-leaning nonprofits, but also worked as a counselor to former FCC chair Tom Wheeler. She has made a career in advocacy, government, and academia championing “open, affordable, and democratic communications networks,” according to the White House release.

What both nominees represent, if confirmed by the Senate, would be a return to a Democratic-majority FCC intent on revitalizing 2015-era “net neutrality” proposals. Activists are already celebrating a return to progressive policymaking at the nation’s telecom regulator.

While Biden’s nominations are no surprise — every president generally nominates commissioners from their own party — consumer advocates should be worried about the policy goals they will seek to pass.

Net Neutrality

The most pressing would be a reform of Title II regulations through “net neutrality”, effectively reclassifying Internet Service Providers as public utilities and, in practice, as protected monopolies.

As I wrote in the Washington Examiner in 2017, the basic premise of net neutrality reforms is to regulate ISPs like water suppliers or telephone companies, subjecting them to more active enforcement, standards, and regulations set by the FCC, so that all online traffic is treated as “neutral and free from prioritization”.

What’s more, a Title II classification would treat ISPs as monopolies, which even by the most strained definition cannot be true. There are close to 3,000 ISPs in the United States, all serving different populations and regions, though some players have larger coverage than others.

Sweeping these companies into the regulatory lens of the FCC under the auspices of public utilities would mean more restrictions and regulations on content and delivery of content on the Internet — a far cry from Internet freedom.

As a general principle for an open net, net neutrality is an important one. When internet providers have been accused of unfairly blocking or throttling consumers, they have rightfully been challenged by lawsuits and enforcement actions from the Federal Trade Commission. And we should generally want a system that won’t discriminate against Internet users based on the content they host or provide (we can also thank Section 230 for liability protections for online platforms).

However, since these regulations were proposed in 2014 under the Obama administration, there has never been a clear rationale provided as to why Internet companies should be regulated under the FCC rather than the FTC, as is the status quo. And from what we can tell, that change would likely impact consumers more than anyone.

For one, a public utility classification would mean far more centralized power over Internet regulation than exists currently, putting the innovative nature of the Internet at risk.

Providers would be saddled with significant regulatory compliance, meaning more administrative costs and fees. This would also threaten the expansion of start-ups and independent companies in the digital space, stifling creative entrepreneurship. All of this would be harmful to consumers.

With every successive administration in Washington, shifting enforcement and rewritten rules would be enough to create regulatory uncertainty for thousands of online businesses and the users who depend on them.

Second, as our experience from the history of public utilities demonstrates, there would likely be intense consolidation that would empower large companies with the means to comply with regulations and stunt innovative new start-ups. It would also disincentivize increased private investment in broadband services, as we have written about at the Consumer Choice Center, and exacerbate the effects of Biden’s infrastructure proposal on public broadband if it passes this fall.

While consolidation of ISPs is a grave concern to progressive Internet activists, this would only be made worse once a giant bureaucracy such as the FCC is given regulatory authority over them. As my colleague Elizabeth Hicks noted in the Detroit Times, it is often state and local regulations that impede greater competition among ISPs, not lax authority at the federal level.

Online Privacy

Both Rosenworcel and Sohn have also indicated that they would support a proposal for greater Internet privacy enforced by the FCC. While that would be great on principle, we would hope that a federal plan would punish bad actors and establish clear guidelines to ensure transparency and protect innovation, as we proposed in our data and consumer privacy policy note.

However, Sohn’s previous public statements, including when she was a fellow at the Open Society Foundation, demonstrate she’d want a wholesale restriction on the sharing of data, even among willing consumers and providers. That would put many vital services at risk.

What’s more, such a proposal would likely aim to further empower government enforcement on data privacy rather than embrace market innovations that already do just that.

Prices

Another significant area where a Democratic-majority FCC could seek action would be on the pricing of Internet services. Sohn has been quite vocal about fixing ISP prices and regulating the bundling of various services. This would undermine the competitive environment of ISPs and likely lead to lower quality and rationed services for users, degrading everyone’s Internet experience.

Sohn’s history at various nonprofit groups that have targeted and lobbied the FCC for more enforcement was indeed impactful, and it is not difficult to see how much of the outrage about net neutrality was due to these efforts. Unfortunately, this also coincided with serious death threats and security concerns for commissioners opposed to these plans.

If both nominees to the FCC are confirmed, it is clear that the battle for the open Internet will once again be relitigated. And if the past proposal is any indication, it will face significant opposition.

At the time of the original net neutrality rules, even the Electronic Frontier Foundation, seen as one of the most powerful Internet freedom groups, was skeptical about how far-reaching the net neutrality provisions were.

We can only imagine that now, buoyed by progressive victories on Capitol Hill and louder voices for regulating content and platforms on the Internet, these proposals will prove harmful to the interests of online users and consumers.

Yaël Ossowski is the deputy director of the Consumer Choice Center.

Facebook failures may be real, but the case for increased censorship is weak

Once the so-called Facebook whistleblower revealed her identity and story, it was only a matter of time before the public imagination of one of the largest social networking sites would go off the rails.

What Frances Haugen released to the Wall Street Journal in her initial leaks, which it dubbed the “Facebook Files,” detailed how Facebook had made decisions on which accounts to censor, survey data on Instagram use among teens, and the status of the civic integrity team tasked with countering misinformation around political topics.

Many of the revelations are fascinating, and some damning, but they point to a company bombarded with external and internal demands to censor accounts and pages that spread “misinformation” and “hateful” content. Who determines what that content is, and what qualifies as such, is another matter.

In the days since, Haugen has become a hero to critics of the social media giant on both the Right and the Left, animating these arguments before a Senate subcommittee on consumer protection on Tuesday.

It created the perfect theater for Washington lawmakers and media outlets, elevating conjecture, hyperbole, and feverish contempt for an online platform used by billions of users.

Congressional Republicans and Democrats are united in confronting Facebook, though they are animated by different reasons. Generally, Democrats say the platform does not censor enough content and want it to do more, evoking the “interference” in President Donald Trump’s 2016 victory. Republicans, on the other hand, believe the censorship is pointed in the wrong direction, often targeting conservative content creators, and would like to see more even-handedness.

“Facebook has caused and aggravated a lot of pain and profited off the spreading of disinformation, misinformation, and sowing hate,” said committee chairman Sen. Richard Blumenthal, who days before received ridicule for asking Instagram to ban the “finsta” program. (Finstas are fake Instagram accounts created by teenagers to avoid the prying eyes of parents.)

Facebook’s mistakes, especially when it comes to content moderation, are vast. I have joined countless others in pointing out the troubling examples of censorship that are all too often politically motivated. Considering it is a Silicon Valley firm staffed with tens of thousands of employees who likely lean left, it is not surprising.

But the incentive to censor content exists because of the huffing and puffing in Congress, whistleblowers like Haugen, and media pressure to conform to a narrow version of online free speech that has no parallel elsewhere.

Whether it is through the lens of antitrust, to break apart Facebook’s various divisions such as Instagram and WhatsApp, or by reforming Section 230 to make firms liable for all speech on their platforms, it is clear that heavy-handed social media regulation will have the greatest impact on users and generally make Facebook unbearable.

As much as some might like to castigate the unicorn start-up with tens of thousands of employees and a hefty stock price, it derives its power and influence as a platform for billions of individuals looking for connections.

A number of the posts on Facebook may be atrocious or wrong, and they deserve to be called out by those who see them. But in free societies, we prefer to debate bad ideas rather than relegate them to the darkened reaches of society, where they will only fester and grow unabated.

Expecting or forcing Facebook to ramp up censorship will make the platform a de facto arm of our federal agencies rather than a free platform for connecting with friends and family.

While there are many positive reforms that could be invoked in the wake of the Facebook moment (a national privacy and data law, for example), we know it will be the users of these platforms who will ultimately suffer from misguided regulation.

If we believe in free speech and an open internet, it is our responsibility to advocate sane, smart, and effective rules on innovative technologies, not laws or edicts that only seek to punish and restrict what people can say online. We as users and citizens deserve better.

Originally published here
