Social Media

Virginia Governor Youngkin’s All-or-Nothing Approach to Social Media

For years, one topic has consistently managed to unite lawmakers from both sides of the aisle: social media and its effects on kids. In Virginia, bipartisan momentum led to a workable proposal—one that would have automatically limited screen time for users under 16 unless a parent opts out. Importantly, the bill stopped short of banning algorithmic feeds entirely.

But Governor Glenn Youngkin has since responded with a set of amendments that, while rooted in good intentions, make the bill significantly harder to implement and a lot less practical.

Parents, Not Politicians, Should Call the Shots

At the heart of this debate is a fundamental question: Who decides what’s best for kids? The compromise version of the Consumer Data Protection Act passed with unanimous support in the VA General Assembly and respected a simple but powerful principle—parents, not the government, should be in the driver’s seat when it comes to their children’s screen time.

Youngkin’s proposed changes upset that balance. He wants the default restrictions to extend until a teen turns 18 and seeks to limit features like auto-play and “instant scrolling,” which are core to how most apps function. But the original bill already addressed the issue of overuse with its time-limit provisions. Why go further?

Algorithms Aren’t the Enemy

Let’s be honest: the entire draw of apps like Instagram and YouTube lies in their ability to deliver content without users having to dig for it. Take that away, and you fundamentally change what those platforms are. Nobody wants to spend their time on Instagram searching for content. You’re there because you don’t know what you want. “Discovery” is the bulk of the fun and utility of these platforms.

Ironically, algorithms are both the problem and the solution to most lawmakers’ concerns about content online. Yes, they can amplify harmful content—but they also help curate safer, more relevant experiences for younger users. A pure chronological feed, while seemingly “neutral,” can serve up just as much inappropriate material.

Parent Tools Already Exist

There’s no shortage of parental control options today. Apple, Google, and Amazon all offer robust tools to limit screen time, filter content, and monitor communication. In our home, we recently transitioned our 14-year-old from a Bark Phone to an iPhone, and we didn’t lose a step in terms of oversight.

From approving app downloads to shutting down unproductive apps, it’s all there. TikTok, Instagram, and others even have their own parental control features (though some require parents to download the app themselves, which is a fair concern for privacy-minded families).

So, What’s the Role of Government?

The compromise Virginia lawmakers reached wasn’t about stripping tech companies of their functionality. It was about nudging parents, many of whom are overwhelmed or unaware of the tools at their fingertips, into taking a more active role.

A one-hour default time limit isn’t a ban. It’s a prompt. It opens the door for conversations within families and encourages parents to set boundaries based on what works for them.

Don’t Forfeit a Compromise That Worked

Youngkin’s edits aim for a much broader and more restrictive approach, at the risk of throwing out a practical, consensus-driven solution developed by Virginia state delegates. What Virginia had was a smart, flexible policy that acknowledged both the value and the risks of social media.

Yes, regulation has a role here, as in any consumer-facing industry—but it should aim to empower families, not replace them. Virginia’s original bill struck that balance. Youngkin’s return to square one makes an already difficult fight for consumer choice even more fraught.

The Consumer Choice Center is an independent, nonpartisan consumer advocacy group championing the benefits of freedom of choice, innovation, and abundance in everyday life for consumers in over 100 countries. We closely monitor regulatory trends in Washington, Brussels, Ottawa, Brasilia, London, and Geneva. Find out more at www.consumerchoicecenter.org.


BANNING “INFINITE SCROLL” ON SOCIAL MEDIA: PATERNALISM, PURE AND SIMPLE?

With digital addiction on the rise, European lawmakers are considering a ban on certain “addictive” social media features.

About a year ago, the European Parliament once again took up the now-familiar question of social media dependency, asserting that features such as “infinite scroll” and video autoplay are responsible for apps monopolizing users’ attention.

In October of last year, the Parliament’s website stated:

“While social media can affect society in positive ways (for example by increasing efficiency, accessibility, and connectivity), their addictive design can cause physical, psychological, and material harm (loss of concentration and cognitive capacity, burnout, stress, depression, reduced physical activity). MEPs are particularly concerned about the impact of digital addiction on children and adolescents, who are more vulnerable to these symptoms, and they call for more research and regulation in this area.”

This summer, European Commission President Ursula von der Leyen said her “heart bleeds” for young adults who self-harm as a result of online abuse, and she promised to tackle cyberbullying and the addictive design of social media platforms.

As is often the case, many distinct issues are bundled into a single call for regulatory action.

Cyberbullying, or any other form of bullying for that matter, dates back to an era before smartphones and social media existed. Anyone who has attended school can attest to that, yet we never concluded that abandoning school was an effective remedy for the abuse committed within its walls. As soon as young adults gained access to instant messaging, well before Facebook existed, they began spreading gossip and leaving hateful remarks, just like the older adults who do the same thing on their work computers over email or around the office coffee machine.

If the Commission’s and Parliament’s goal is to end harassment, they will need a more ambitious plan than killing video autoplay features on Instagram.

For the sake of the discussion, let’s define our terms.

Autoplay refers to videos on social media platforms playing automatically, on a loop, without the user initiating them. This feature exists on X and can be enabled on TikTok. Platforms such as Instagram and YouTube have ended video autoplay and require users to scroll or click to the next video. “Infinite scroll” (sometimes associated with video, but not exclusively) means that users can essentially spend unlimited time on the platform and keep receiving new content. In that sense, there is no “end” to the amount of content they can see (a reality of the internet one might think policymakers would be aware of by now).

The important factor to consider here is that the demand to regulate these features essentially amounts to asking Meta and others to make users stop using their platforms beyond a certain allotted amount of time.

That is a very strange thing to demand of a business. Imagine asking IKEA to shorten the walk through its store because it is designed to get people to buy more furniture, asking a shopping mall to align its escalators so that shoppers leave sooner, or asking a nightclub to play worse music so that patrons go home earlier.

Since the dawn of commercial venues, businesses have tried to keep customers on their premises and on their websites. Many television channels have been on air for decades with advertising, autoplaying content, and previews of upcoming films, all aimed at keeping audiences hooked. Asking a company to do something that runs against its business model is odd at best, and given that lawmakers have in mind some amount of time they deem appropriate to spend online, it is dystopian in the extreme.

For consumers, legislation on infinite scroll and autoplay means less choice. Those who do not want video autoplay and who would like a limit on their scrolling can enable the in-app settings that allow this, or simply stop using the apps in question. Just as our phone carriers do not cut off our calls for running too long, and our televisions do not shut off because we are on our third rewatch of season five of House, we do not need paternalists telling us how long we should spend on Facebook.

Concerns about young adults’ mental health should be taken seriously, but the rules in question would not address that problem, nor do they treat it soberly. Some social problems are hard to solve, and regulating social media features is nothing more than a performative solution.


DOJ vs. Google: An Insult To Consumers

October 10, 2024, WASHINGTON, DC – This week, the legal team representing the Department of Justice and several state attorneys general filed a preliminary “remedy framework” in their case against the search giant Google, following an August ruling by Judge Amit P. Mehta that erroneously declared the American company a “monopolist.”

The proposed remedies attack Google’s past, present, and future by:

  • Restricting Google’s ability to make third-party arrangements for its search and web browser products.
  • Limiting Google’s ability to cross-promote its own products such as Google Gemini (generative AI) on Chrome, Android, and the Google Play Store.
  • Exploring ways to force Google to craft educational campaigns that inform consumers of alternative search engines.
  • Opening Google’s vast data archive to researchers, educators, and competitors.
  • Cutting off Google’s budding AI division from utilizing data within its search products to train AI and serve consumers high-quality results.

Yaël Ossowski, deputy director of the Consumer Choice Center, criticized the government’s bullet-point plan to break up the search company, “Imagine after the rise of Facebook, the DOJ comes in and forces the most popular social media app in the world to educate its users about alternatives, Myspace and Google+. It would have been laughable. That’s part of the government’s plan for Google, and it’s an all-out assault on consumer preference and choice. It’s a total insult to consumers.”

Google, according to Assistant Attorney General for Antitrust Jonathan Kanter, has set up a self-preferential ecosystem of apps and technology that limits competition. Prior to his role in the Biden DOJ, Kanter represented Microsoft, Yelp, and other competitors to Google.

“The truth is that consumers choose their search engine based on convenience and the quality of results. DOJ’s plans to restrict Google’s ability to enter into product partnerships, as well as halting their AI investments, does nothing but slow down the consumer experience,” continued Ossowski.

In August, the Consumer Choice Center was quoted by the Associated Press after the judge’s ruling, saying, “The United States is drifting toward the anti-tech posture of the European Union, a part of the world that makes almost nothing and penalizes successful American companies for their popularity.”

The proposed remedy plan is only the first step in the federal government’s recommendations to the judge, but it will ultimately be the court that decides whether these terms are viable and necessary.

Yaël Ossowski concluded, “While the government unloads on Google, the competitive world of both closed and open-source Large Language Models is growing exponentially and expanding the market for artificial intelligence apps. Google already faces substantial competition as AI firms reshape the landscape of online search results. The government is using its power to tilt the scales of innovation in a direction it likes, depriving consumers of the effective free tools Google has provided for years.”

The Consumer Choice Center is taken aback by this insult to consumers being advanced by the U.S. Department of Justice. Competition is vital in the technology and AI sector, but the DOJ’s remedy proposal reflects an overstep of government authority and a disregard for the principle of consumer welfare.

“‘Google’ is a verb because the products and tech ecosystem work for consumers exactly how they want and expect. If that ever stopped being the case, Google’s competitors wouldn’t seek government assistance in order to boost their market share. Jamming up Google, both now and in the future, is exactly what’s going on here, and consumers should be outraged,” concluded Ossowski.

The New TikTok Lawsuit Targets All Social Media App Experiences

Over a dozen states are suing TikTok, according to news reports breaking today, in a fresh bipartisan move against the massively popular social media app. This collection of lawsuits goes after TikTok’s user experience, alleging that the company misled the American public over the app’s impact on youth mental health outcomes and addictive behavior. 

Stephen Kent, media director of the Consumer Choice Center, reacted with skepticism about the new effort to target TikTok: “TikTok has an ownership problem, not a features problem. We’ve been highly critical of TikTok’s ownership structure and supportive of the federal effort to force ByteDance Ltd. to divest its majority stake in the app for the sake of users’ online security and privacy. This lawsuit is something different, and the ultimate target is, in fact, all social media firms that consumers enjoy.”

The lawsuits take issue with TikTok’s most notable features, including autoplay, “beauty” filters, and push notifications. Similar efforts were aimed at Meta in October 2023.

Stephen Kent continued, “Read over these lawsuits and you’ll see that TikTok could be removed from the text and replaced with almost any other popular social media app. This effort is indicative of a legislative panic over algorithms and customized user experiences and would lead us to a one-size-fits-all future in which consumers’ online experiences are all alike. TikTok is popular precisely because its technology is so powerful at figuring out the likes and dislikes of the user. No one wants to be on an app where they hate everything they see. These lawsuits are antithetical to consumer choice online.”

The Consumer Choice Center encourages the process of divestiture to go forward in federal court and for ByteDance to do the right thing for its users by allowing TikTok to be operated by an entity with independence from the Chinese Communist Party (CCP). The right approach is for social media firms to be accountable to the consumers they serve, and TikTok cannot do that with its current connection to the Chinese government.

Read more from the Consumer Choice Center: Don’t Co-Parent with Congress (Reason Magazine, Yahoo! News)

“Parents who are concerned about their children’s online behaviors and exposure to harmful content can take action today by adopting alternative smartphone technology that helps them moderate their child’s online experience. I have spoken at length about the perks of the Bark Phone, Gabb, Troomi, and Pinwheel phones, as alternatives to government action. There’s a robust market for family-friendly tech experiences and consumers don’t have to wait on courtrooms or lawmakers to help their children navigate social media more safely,” concluded Kent.

Social media needs fresh thinking, not warning labels

Surgeon General Vivek Murthy dropped a bombshell into the national debate over social media regulation on Wednesday with an opinion piece calling for Congress to slap health warning labels on social media apps. This marks a seismic shift in the federal government’s souring attitude toward social media at a time when states are passing their own laws on social media algorithms and app features aimed at protecting minors online.

Congress should not take up the surgeon general’s call to label social media like cigarettes and alcohol. Social media clearly affects the lives and development of young people in many ways, but the expansion of warning labels into the realm of mental health outcomes online is both subjective and politically loaded. 

Murthy’s call to action states that “social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe.” 

This line alone raises some serious questions about the angle the nation’s top doctor is taking to assess which products warrant warning labels. A product of any kind being “proven safe” is different from being “proven hazardous.” It’s the same framework as “innocent until proven guilty” versus the other way around. 

Murthy’s conclusions align with those of author Jonathan Haidt, whose book The Anxious Generation has been garnering national attention since its April release. Both agree that Washington “can’t wait for certainty” when it comes to putting warning labels on social media apps.

Haidt and Murthy both skirt recommendations on what a warning label would look like when it comes to social media and what apps or platforms would qualify. Various social media regulation proposals in Congress have directed regulation toward platforms with particularly large user bases while exempting smaller players. Others have created carve-outs for apps driven by direct messaging features, creating room for hybrid social messaging apps such as Snapchat to avoid regulation affecting their competitors. 

Will the label go on the app’s logo on your device’s home page? Will the label appear each time you open the app or only once upon creating an account? The surgeon general seems to leave this up to Congress to decide, along with any metrics for what platforms qualify as the social media responsible for poor mental health in youth. 

It’s not an insignificant question. Does Discord count as social media, or are Facebook and Instagram the design standard by which Congress would legislate labels on these apps? It’s doubtful that Congress will ignore the political subculture of different apps when considering which ones it finds harmful to public health. 

Tellingly, congressional Democrats did not come around to forcing ByteDance’s sale of TikTok in the United States because of the app’s effect on its users’ mental health; they were motivated instead by national security concerns.

When you remove the designs of various social media platforms as well as algorithms, you’re left with platforms that simply connect people to each other. There is real cause for concern that this could serve as the metric for “social media,” lumping TikTok, Pinterest, WhatsApp, X, and LinkedIn all into the same category. To avoid the warning label, tech companies would offer fewer unique design features and curated experiences using algorithms. 

You could also see a future in which social media companies accept the warning label so that they can treat it as a new cost of doing business and be shielded from any future liability over harm done to users. The government-imposed label creates a shield for the firms and does little to inform parents of children using social media more than they already know. 

There is little doubt among regular users of social media that these apps create certain levels of stress and anxiety that weren’t showing up en masse before 2010, when social media went mainstream. For parents and educators, the distraction and addictiveness social media poses to children are already well-known points of concern. A warning label is not going to change that dynamic. It will, however, stumble into political favoritism and bias based on whose constituents prefer which apps.

Consider the political culture of TikTok, the elephant in the room for this conversation. Are Democrats and Republicans ready to have a candid conversation about which mental health trends, specifically, they find so worrisome on these social media platforms? 

Social media is not a public health equivalent to smoking cigarettes. Fresh thinking on these challenges is what consumers need, not “copy and paste” strategies from the 1960s. 


‘Kids Online Safety Act’ is a Trojan Horse For Digital Censorship

Washington, D.C. – This week, a bipartisan cohort of US Senators unveiled a new version of the Kids Online Safety Act, a bill that aims to impose various restrictions and requirements on technology platforms used by both adults and minors.

Yaël Ossowski, deputy director of the Consumer Choice Center, a consumer advocacy group based in Washington, D.C., responded:

“This bill is constitutionally dubious and would create new powers that should frighten not only every parent but also every user of digital platforms such as social media. In writing new federal rules to ‘protect’ kids online, the real effect will be to significantly degrade the experience for all users while putting their sensitive personal information at risk.”

The Consumer Choice Center believes strongly that if Congress were to pass such a bill, lawmakers would be aligning with the idea that the government should have the final say over young people’s access to the Internet, thus diminishing the role of parents in their kids’ lives. 

“There are ways to protect kids online, but that begins at home with parental authority and supervision. It’s a false choice to accept the gatekeeping of an entire generation from technology that has become so integral to daily life and contributes to their development as responsible citizens,” added Ossowski. 

Privacy and consumer advocates are sounding the alarm about what this law would mean in practice. Rules emanating from Washington granting a “duty of care” to government officials will erode parental authority and consumer choice online. The bill seeks to control “design features” and limit developers’ inclusion of personalized recommendation systems, notifications, appearance-altering filters, and in-game purchases for apps used by minors. It’s a crackdown not just on features that certain apps need to function, but also on features that make them fun for users.

“KOSA is fundamentally wrong,” concluded Ossowski. “We as a society should trust that parents have the ultimate right to decide whether or not their children access certain websites or services, not indifferent government officials sitting in Washington. No one knows what is in the best interests of a child better than that child’s parents.”

Media inquiries and interview requests can be sent to Media Director Stephen Kent: Stephen@consumerchoicecenter.org

***

The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Washington, D.C., Ottawa, Brussels, Geneva, and other hotspots of regulation, and we inform and activate consumers to fight for consumer choice. Learn more at consumerchoicecenter.org.

