The EU’s AI Act will stifle innovation and won’t become a global standard

February 5, 2024 – On February 2, the European Union’s ambassadors greenlit the Artificial Intelligence Act (AI Act). Next week, the Internal Market and Civil Liberties committees will decide its fate, and the European Parliament is expected to cast its vote in plenary session in March or April. 

The European Commission responded to a plethora of criticism that the AI Act could stifle innovation in the EU by presenting an AI Innovation package for startups and SMEs. It includes EU investment in supercomputers, commitments for the Horizon Europe and Digital Europe programs to invest up to €4 billion through 2027, and the establishment of a new coordination body – the AI Office – within the European Commission.

Egle Markeviciute, Head of Digital and Innovation Policies at the Consumer Choice Center, responds:

“Innovation requires not only good science, business and science cooperation, talent, regulatory predictability, access to finance, but one of the most motivating and special elements – room and tolerance for experimentation and risk. The AI Act is likely to stifle the private sector’s ability to innovate by moving their focus to extensive compliance lists and allowing only ‘controlled innovation’ via regulatory sandboxes which allow experimentation in a vacuum for up to 6 months,” said Markeviciute. 

“Controlled innovation produces controlled results – or lack thereof. It seems that instead of leaving regulatory space for innovation, the EU once again focuses on compensating this loss in monetary form. There will never be enough money to compensate for freedom to act and freedom to innovate,” she added.

“The European Union’s AI Act will be considered a success only if it becomes a global standard. So far, it does not seem the world is planning on following in the EU’s footsteps.”

Yaël Ossowski, deputy director of the Consumer Choice Center, adds additional context:

“Despite optimistic belief in the ‘Brussels effect’, the AI Act has not yet resonated with the world. South Korea will focus on the G7 Hiroshima process instead of the AI Act. Singapore, the Philippines, and the United Kingdom have openly expressed concern that imperative AI regulations at this stage can stifle innovation. US President Biden issued an AI Executive Order on the use of AI back in October of 2023, yet the US approach seems to be less restrictive and relies upon federal agency rules,” said Ossowski.

“Even China – a champion of state involvement in both individual and business practices – has yet to finalize its AI Law in 2024 and is unlikely to be strict with AI companies’ compliance due to its ambitions in the global AI race. In this context, we have to acknowledge that the EU has to adhere to already existing frameworks for AI regulation, not the other way around,” concluded Ossowski.

The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Ottawa, Washington, Brussels, Geneva, Lima, Brasilia, and other hotspots of regulation and inform and activate consumers to fight for #ConsumerChoice. Learn more at consumerchoicecenter.org.

Let Apple be Apple — consumers don’t need DOJ intervention 

Apple is a lifestyle brand. The $2.8 trillion company, founded by Ronald Wayne, Steve Wozniak and Steve Jobs, is known to the world as an innovator in consumer technology, but using Apple products is widely seen as a lifestyle choice embraced by consumers. 

I’m an Apple guy. My devices are all synced, from the iPhone to the MacBook Pro, the Apple Watch and the HomePod mini. No one coerced me into this way of living, but that hasn’t stopped the U.S. Department of Justice (DOJ) from investigating Apple and concocting yet another vast antitrust case against an American company.  

As of today, President Biden’s Federal Trade Commission (FTC) has taken Amazon and Meta to court over alleged anti-competitive practices, and the DOJ has hit Google with two antitrust suits targeting Google Search and its ad services. According to The New York Times, the DOJ is still deciding whether to bring its multipronged antitrust complaint against Apple.  

What stands out in the Times’s report on the investigation is that it reads like Apple’s competitors are behind the steering wheel of their very own government agency. David McCabe and Tripp Mickle write, “Rivals have said that they have been denied access to key Apple features, like the Siri virtual assistant, prompting them to argue the practices are anticompetitive.”  

Imagine the classroom slacker making the case to the teacher that the straight-A student in the front of the class is being anti-competitive by not sharing their lecture notes with them.  

It’s one thing to maliciously penalize or seek to inconvenience consumers for having a mixed assortment of technology from Apple, LG, Samsung, Nokia and Google. It’s another thing entirely for the government to say that Apple has to design its products for Samsung to piggyback on and then offer to their loyal customers as a perk of not doing business with Apple. Investigators are spending taxpayer dollars to find out why the Apple Watch works more smoothly with the iPhone than with rival brands.  

Does the DOJ work for Samsung or the American people?  

This mindset is exactly what went wrong in court for FTC Chair Lina Khan when she threw the once-relevant consumer protection agency into the middle of the Microsoft–Activision Blizzard merger, a case that District Court Judge Jacqueline Scott Corley indicated seemed to benefit Sony, a Japanese firm, more than American consumers. 

None of this is to say Apple is a perfect company, or that it’s behaved like a free enterprise angel throughout every aspect of its business. It hasn’t. Its long-time reliance on manufacturing and investments in China, and how that steers its business, is a prime example. But that Apple makes intentionally integrated products that foster brand loyalty and consumer satisfaction is special in the landscape of American tech. Apple is a seamless experience for consumers like myself who are not huge techies, but rather novices who place a premium on convenience and ease of use. 

The reality for Apple is that it operates in a global marketplace with different rules of the road on almost every continent. The European Union is very close to forcing open Apple’s App Store model to allow for third-party app stores on its devices, a provision of the 2022 Digital Markets Act. The EU has also directed its regulatory energies at requiring device manufacturers to adopt a universal charging port, further removing design distinctions between major tech brands.  

In the United States, Apple narrowly fended off the maker of Fortnite, Epic Games, in a high-profile lawsuit contending Apple held an unfair monopoly over payment processing for in-app purchases. The case failed when the courts correctly acknowledged that Apple does not hold a monopoly in the mobile games market. 

Tech firms may all be united in that they are the target of never-before-seen political scrutiny in Washington, but they are still competitors. You can see this in how they fight government regulation of their business with one hand, and request government help in slowing down their competition with the other. 

Meta reportedly “encouraged” the Justice Department to look into Apple’s new consumer privacy tool, App Tracking Transparency, which empowers iPhone owners to customize and cut off data collection by advertisers of their choosing. It is not a coincidence that Meta anticipates a $10 billion loss in revenue from this useful tool Apple designed for consumers concerned with privacy.  

None of this is new. Successful companies and established industries have always sought to use the federal government as both a cudgel and a shield to protect their interests. For those of us chiefly concerned with consumer satisfaction and welfare, there is no temptation to choose winners and losers in the market.  

Let Apple be Apple, and let consumers choose.  

Originally published here

Florida Youth Deserve Better Than Gatekeeping of Social Apps

Jan 22, 2024

Dear State Representatives and Senators,

As a consumer advocacy group engaged on a wide range of digital issues including privacy and technological innovation, representing both our members and consumers, we implore you to consider another path when it comes to protecting Florida youth online, specifically HB1.

In its current form, the law would impose the most draconian age-verification process for online platforms in the nation, barring all users under the age of 16 from specific social media platforms, regardless of parental consent or preferences for their child’s online presence. 

This process would also require select social media companies to collect, by government mandate, sensitive personal information that we do not believe should ever be in the possession of any private entity. This is ripe for future abuse, as well as data-security threats that could carry real harm to young people beginning their lives online. It would open a Pandora’s box of epic proportions.

What’s more, the law makes overly broad exceptions for apps that can demonstrate a “predominate” use case for private messaging services. There are better ways to approach this, such as specifying digital services that focus exclusively on messaging. The state of Florida would be creating an uneven playing field, choosing winners and losers in the social media space, and privileging certain apps arbitrarily based on what function consumers utilize most. 

A solution that better respects parental rights, defends American innovation, and allows online consumers and their parents to choose digital apps freely would not only be more adequate, but would also allow the best private sector solutions to emerge organically. 

Parents should not have their authority and decision-making power usurped by state law or institutions, no matter how noble the cause. Rather than gatekeeping an entire generation from enjoying social connections online, we implore you to provide another solution that works for parents, young online consumers, and the American tech innovators who provide value for each and every one of us in our daily lives.

In a free country with a vibrant competitive marketplace, we will lose our global competitive edge if an entire generation is kept from the keyboard and the online global village. The Consumer Choice Center trusts parents to make the right call for their kids under 16 when it comes to social media activity. We hope you will too. 

Sincerely yours,

Yaël Ossowski

Deputy Director, Consumer Choice Center

Submission to the National Telecommunications and Information Administration on Kids Online Health and Safety

We hereby submit these comments to better inform and educate the Task Force on Kids Online Health & Safety on the pressing issues of keeping kids safe online while remaining steadfast to the open, innovative nature of digital technologies such as the Internet.

  1. The Role of Technological Solutions

As a consumer advocacy group that champions tech innovation and consumer choice, we believe wholeheartedly that, where necessary, technological solutions should be a principal alternative to restrictive regulation that will impose direct and indirect costs and create barriers to online information and connection.

With many social situations or platforms, we know that there exists much concern about young people, teens especially, and their behavior online. There has been a constant barrage of academic research, political proposals, and messaging campaigns that center on restricting parts of online life to young people for their safety.

While there is a definitive trend as to the framing of social media use as negative for young people, the existing research is much more nuanced and likely more balanced when we consider the benefits.

A 2022 study in Current Psychology, classifying users into three categories – active, passive, and average social media use – found that each group documented benefits that outweigh potential harms, even more so for the larger category of “average” users.

For every media outrage story about questionable online content or behavior, there are dozens more unreported of improved social well-being, more social connection, and genuine happiness, especially among young people. This is especially true because, for the most part, teens and young people have gravitated from purely physical social lives to a hybrid social life online as well, unlocking new opportunities to explore, learn, and expand their knowledge and understanding.

This was also acknowledged by the American Psychological Association, which this year published its own recommendations for parents on monitoring teens’ online safety.

The solutions offered by the APA and several partner organizations are important, and likely do have merit and efficacy with young people online. Contrasting with many proposals existing in legislation, these recommendations are to be overseen and executed by parents and communities, and would negate the need for punitive measures issued by governments. 

We believe this is an important factor for any remedy affecting online safety for teens and young adults. Voluntary measures, whether that be parental screening, communication, or oversight, when used in conjunction with technological tools, will have a more balanced and effective result than any government-imposed restriction.

Parental screening of application downloads, online profiles, and general education about behavior and content online has thus far proven to be the most measured approach to kid safety online, and it should continue to be.

  2. The Wrong Path of State Intervention

Proposals that lead to agency or government intervention into these efforts, we believe, would do more harm than good.

As we have seen in several state proposals in Texas, Louisiana, and Arkansas, preemptively limiting youth access to online social media use not only elicits legal questions, but also severely restricts the ability for young people to explore the benefits of online platforms and networks.

These proposals amount to a labyrinth of weaponized policies that would prevent teens from engaging with friends and family online, burden future social media upstarts, and set precedents that put free speech on the Internet at risk, as well as expose users to significant hacker exploits.

Proposals such as the now-enjoined SB396 in Arkansas not only make it more difficult for young people to begin to use the Internet and enjoy all the benefits it provides, but also enshrine into law the idea that governments, rather than parents, should pick which social media networks young people can or cannot use.

We believe this is paternalistic, sets a terrible precedent for online speech and access, and amounts to nothing more than heavy-handed government control of who is allowed online and when.

It raises the question of whether the state should be the final arbiter of whether young people access the Internet at all, leaving parents with diminished say in their kids’ digital lives. We believe that is fundamentally wrong. 

Unfortunately, we see in these legislative attempts few good-willed efforts at remedying online safety concerns, but rather legislative retribution against certain social media companies based on political persuasion.

What’s more, many of these proposed solutions would likely create more substantive harm from digital exploitation of information and data than current voluntary or technological tools available to parents.

These proposals, including federal proposals from the US Senate such as the Kids Online Safety Act, would require social media websites to collect sensitive photos, IDs, and documentation of minors, creating enormous privacy risks that would be a cyberhacker’s dream.

We believe that as a society, we should trust that parents have the ultimate right to decide whether or not their children access certain websites or services, and that those decisions are not overruled by legislative proposals.

  3. The Answer Is Technology

As we have stated, and as the research demonstrates, there are immense benefits to social media that are practiced and explored each and every day for people of any age category.

Whether it be for creative purposes, democratic expression, social connection, commerce and business, or education, there are a myriad of benefits to social media that, when paired with responsible adult supervision and guidance, will continue to be a positive force for society as a whole.

Where necessary, when parents and communities can implement technological solutions that help improve the benefits of social media use – whether it be in voluntary parental filters, download authorization, or educational materials – this will be the best and most effective method for protecting young people online. Keeping the Internet as an open ecosystem for exploration, learning, and connection will bring many more benefits to the next generation than restrictive bans or limits imposed by law. 

We hope your commission will take these points to heart, and will continue to advocate for responsible use of technology and the Internet for young people and their parents.

Link to the PDF

Biden’s AI “Collaboration” With Europe Will Hurt Innovation

Last week, President Joe Biden unveiled an executive order that marks the beginning of a U.S. regulatory path for artificial intelligence. The order is a prelude to forming a U.S. AI Safety Institute, housed within the Department of Commerce—announced by Vice President Kamala Harris in the UK last week. This period of “close collaboration” with the UK and EU is a considerable threat to decades of American leadership in tech.

Rather than embracing traditional hallmarks of American innovation, the Biden administration seems intent on importing some of the worst aspects of Europe’s fear-driven and burdensome regulatory regime. If the current approach continues, AI innovation will be smothered, overly surveilled, and treated as guilty until proven innocent. 

Two distinct worlds are taking shape on each side of the Atlantic regarding the future of artificial intelligence and its benefits.

The first is one with cutting-edge competition between large language model developers, open-source software coders, and investors tooling the best practical applications for AI. This comprises ambitious startups, legacy Big Tech companies, and every major global corporation looking for an edge. As anyone can guess, a high percentage of early movers in this category are based in the United States, with close to 5,000 AI startups and $249 billion in private investment. This space is hopeful, energetic, and forward-looking.

The second world, languishing behind the first, is characterized by bureaucracy, intense approval processes, and permitting. The predominant mindset around AI is threat mitigation and a fixation on worst-case scenarios from which consumers must be saved. 

Europe is that second world, guided by the nervous hand of its Commissioner for Internal Market, Thierry Breton, a key foe of American tech firms. Breton is the face of two sweeping EU digital laws that place additional burdens on tech firms hoping to reach European consumers. 

On AI, Breton’s distinctly European approach is entirely risk and compliance-based. It requires that generative AI products, such as images or videos, are slapped with labels, and specific applications must undergo a rigorous registration process to determine whether the risk is unacceptable, high, limited, or minimal.

This process will prove restrictive to an AI industry that is constantly changing and ensure that tech incumbents will have a compliance advantage. EU regulators are accustomed to dealing with the likes of Meta and Google and have established some precedent for subordinating these high-flying American companies. 

It’s a convoluted system that EU bureaucrats are happy to champion. They adopt burdensome rules before the industries even exist, with the hope of maintaining a certain status quo. As a result, Europe lags far behind the investment and innovation taking place in the United States and even China. 

At present, the United States hosts a significant portion of the AI industry – whether it be Meta and Microsoft’s open-source large language model known as Llama, OpenAI’s ChatGPT and DALL-E products, or Midjourney and Stable Diffusion. This is not a fluke or bug in the international order of tech innovation. America has a specific ethos around entrepreneurial risk-taking, and its regulatory approach has historically been reactive.

While President Biden could have taken that as a signal that a light touch is needed, he has instead taken the European route of “command and control,” a way that may prove even more expansive.

For instance, Biden’s executive order invokes the Defense Production Act, a wartime law designed to help bolster the American homefront in the face of grave outside threats. Is AI already classified as a threat?

Using the DPA, Biden requires that all companies creating AI models must “notify the federal government when training the model, and must share the results of all red-team safety tests.” Like the European risk system, this means firms will have to constantly update and comply with regulators’ demands to ensure safety.

More than increasing compliance costs, this would effectively lock out many startups who wouldn’t have the resources to report how they’re using models. Larger, more cooperative firms would swoop in to buy them out, which may be the point.

Andrew Ng, a co-founder of Google’s early AI project, recently told the Australian Financial Review that many incumbent AI companies are “creating fear of AI leading to human extinction” to dominate the market by directing regulation to keep out competitors. Biden appears to have bought that line.

Another aspect that threatens existing development is that all firms creating models must report their “ownership and possession.” Considering that Meta’s Llama, the largest model produced thus far, is released as open-source software, it is difficult to see how this could be enforced. This puts the open-source nature of much of the early AI ecosystem in jeopardy.

Is any of this truly necessary? Singapore, which has a nascent but rising AI industry, has opted for a hands-off approach to ensure innovators create value first. In the early days of Silicon Valley, this was the mantra that turned the Bay Area into a global beacon for tech innovation. 

This impetus to regulate is understandable and follows Biden’s ideology. But if Washington takes the Brussels approach, as it seems to be doing now, it will risk innovation, competition, and the hundreds of billions in existing AI investments. And it could be precisely what the incumbent big players want.

Congress should step up and rebuff Biden’s “phone and pen” approach to regulating a growing industry. 

To ensure American leadership on AI, we must embrace what makes America unique to the innovators, explorers, and dreamers of the world: a risk-taking environment grounded in free speech and creativity that has delivered untold wealth and surplus value for consumers. Taking our cues from European superregulators and tech-pessimists is a risk we can’t afford.

Originally published here

Gene-Editing Breakthrough: Revolutionizing Sickle Cell Treatment

In the realm of medical science, groundbreaking innovations are constantly reshaping the landscape of healthcare. One such marvel that has recently come to the forefront is the revolutionary gene-editing technology, CRISPR, poised to transform the lives of those suffering from debilitating genetic disorders. In a significant stride towards a potential cure, independent experts are set to evaluate a pioneering treatment designed to edit the genes of patients afflicted with sickle cell disease.

Sickle cell disease, a genetic disorder affecting approximately 100,000 individuals in the United States, primarily among people of color, has long been a challenge for both patients and medical professionals. The condition leads to deformed red blood cells, causing complications such as extreme fatigue, blood vessel blockages, and excruciating pain, significantly reducing the life expectancy of those affected. Traditional treatments, including stem cell transfusions, offer relief from symptoms but fail to address the underlying cause of the disease.

Vertex Pharmaceuticals and CRISPR Therapeutics have collaborated on a pioneering therapy that harnesses the power of CRISPR technology. This groundbreaking treatment aims to modify the stem cells of individuals suffering from sickle cell disease, potentially offering a cure that was once deemed unattainable. The therapy’s developers believe that the data amassed so far not only showcases its potential as a cure but also paves the way for a new era of gene-editing treatments.

At the heart of this medical marvel lies CRISPR, a gene-editing technique that holds the promise of precision medicine. By modifying the targeted genes responsible for sickle cell disease, CRISPR technology presents hope for patients who have long endured the limitations of existing treatments. The potential of this therapy to alleviate the debilitating symptoms of sickle cell disease, such as painful blood vessel blockages, has been demonstrated in late-stage trials. Remarkably, 29 out of 30 participants who received the treatment went an entire year without experiencing severe, painful blockages necessitating hospitalization.

The significance of this innovation extends far beyond the realm of sickle cell disease. It represents a historic moment for CRISPR technology, showcasing its potential to revolutionize the treatment landscape for various genetic disorders. What sets this therapy apart is its ability to address the root cause of the disease, offering transformative possibilities for patients who previously had limited effective treatment options.

Despite the immense potential, the therapy’s approval is not without its challenges. The expert panel evaluating the treatment will scrutinize not only its efficacy but also the technology’s precision. Ensuring that CRISPR technology edits only the targeted genes is paramount, as off-target editing could lead to unintended consequences. To address these concerns, Vertex Pharmaceuticals and CRISPR Therapeutics are rigorously assessing their data and conducting comprehensive analyses to demonstrate the therapy’s safety and accuracy.

Moreover, the affordability and accessibility of this innovative treatment remain crucial considerations. Insurers face the challenge of providing coverage for a therapy that holds immense promise but comes with a substantial price tag. However, if approved, this treatment could mark a turning point not only for CRISPR technology but also for patients battling severe sickle cell disease. It offers hope, not just for a better quality of life but for a life free from the shackles of this debilitating genetic disorder.

As we eagerly await the FDA’s decision, anticipated by December 8th, the medical community and patients alike hold their breath, hoping for a positive outcome. If approved, this therapy will not only signify a triumph for science but also a victory for those who have long awaited a cure. The journey of medical innovation is often arduous, but the strides made in the realm of gene-editing stand as a testament to human ingenuity, resilience, and the unwavering pursuit of a healthier, disease-free world.

AI can be responsible without government intervention, new research shows

The global race to develop artificial intelligence is the most consequential contest since “the space race” between the United States and the Soviet Union. The development of these tools and this industry will have untold effects on future innovation and our way of life.

The White House will soon unveil its anticipated executive order on AI, which may include a commission to develop an “AI Bill of Rights” or even form a new federal regulatory agency. In either case, the government is playing catch-up with AI innovators and ethicists.

AI in a democratic society does not mean spinning up federal AI agencies staffed by whoever won the most recent election — it means having a wide range of policies and rules made for the people, by the people, and that are responsive to the people.

AI has an almost unlimited potential to change the world. Understandably, this makes many people nervous, but we must resist handing over its future to the government at this early stage. After all, this is the same institution that has not cracked 30% in overall trust to “do the right thing most or all of the time” since 2007. The rules of the road can evolve from the people themselves, from innovators to consumers of AI and its byproducts.

Besides, does anyone really believe a government that is trying to wrap its regulatory mind around the business model and existence of Amazon Prime is prepared to govern artificial intelligence?

For an example of the rigor required to develop rules for AI in a free society, consider the recent research published by Anthropic, an Amazon-backed AI startup known for the Claude generative AI chatbot. Anthropic is developing what’s known as “Constitutional AI,” which treats the question of bias as a matter of transparency. The technology is governed by a published list of moral commitments and ethical considerations.

If a user is puzzled by one of Claude’s outputs or limitations, he or she can look to the AI’s constitution for an explanation. It’s a self-contained experiment in liberalism.

As any American knows, living in a functional constitutional democracy is as clarifying as it is frustrating. You have specific rights and implied rights under American law, and when they’re violated, you can take the matter to court. The rights we have are as frustrating to some as the ones we don’t: the right to keep and bear arms, for example, along with the absence of a clear constitutional right to healthcare.

Anthropic surveyed 1,094 people and broke them up into two response groups based on discernible patterns in their way of thinking about a handful of topics. There were many unifying beliefs about what AI should aim to do.

Most people (90% or more) agree that AI should not say racist or sexist things, AI shouldn’t cause harm to the user or anyone else, and AI should not be threatening or aggressive. There was also broad agreement (60%) that AI should not be programmed as an ordained minister — though with 23% in favor and 15% undecided, that leaves quite the opening in the AI space for someone to develop a fully functional priest chatbot. Just saying.

But even agreement can be deceiving. The yearslong national debate over critical race theory; diversity, equity, and inclusion; and “wokeness” stands as evidence that people don’t really agree on what “racism” means. AI developers such as Anthropic will have to choose or create a definition that encompasses a broad view of “racism” and “sexism.” We also know that the public does not even agree on what constitutes threatening speech.

The single most divisive statement, “the AI shouldn’t be censored at all,” shows how cautious consumers are about AI having any kind of programmed bias or set of prerogatives. With a close to 50/50 split on the question, we’re a long way from when Congress could be trusted to develop guardrails that protect consumers’ speech and access to accurate information – much less the White House.

Anthropic categorizes the individual responses as the basis for its “public principles” and goes to great lengths to show how public preferences overlap and diverge from its own. The White House and would-be regulators are not showing anywhere near this kind of a commitment to public input.

When you go to the people through elected legislatures, you find out interesting things to inform policy. The public tends to focus on maximized results for AI queries, such as saying a response should be the “most” honest or the “most” balanced. Anthropic tends to value the opposite, steering the AI away from undesirable outputs by asking for the “least” dishonest response or the response “least” likely to be interpreted as legal advice.

We all want AI to work for us, not against us. But what America needs to realize is that the natural discomfort with this emerging technology does not necessitate government action. The innovation is unfurling before our very eyes, and there will be natural checks on its evolution from competitors and consumers alike. Rather than rushing to impose a flawed regulatory model at the federal level, we should seek to enforce our existing laws where necessary and allow regulatory competition to follow the innovation rather than attempt to direct it.

Originally published here

The FCC resurrects a net neutrality plan nobody asked for and no one needs

FOR IMMEDIATE RELEASE | October 19, 2023

WASHINGTON, D.C. – Today, Federal Communications Commission Chairwoman Jessica Rosenworcel spoke at the agency’s open meeting about the forthcoming rules to reclassify broadband providers as public utilities under Title II of the Communications Act of 1934, commonly known as “net neutrality.”

This marks a step back for all American Internet users, who have thus far profited from a more innovative broadband marketplace since the repeal of these rules in 2017 by former chair Ajit Pai.

Yaël Ossowski, deputy director of the Consumer Choice Center, reacted to the announcement:

“Resurrecting the idea of Title II regulation of the Internet, after its successful repeal in 2017, is the idea that nobody needs in 2023. Since then, we’ve seen incredible innovation and investment, as more Internet customers begin using mobile hotspots and satellite Internet, getting more Americans online than ever before. No one is asking for this proposal and no one needs it.

“Regulating ISPs like water utilities or electricity providers is a path toward more government control and oversight of the Internet, plain and simple,” said Ossowski.

“As we’ve seen with the recent Missouri v. Biden court case, today’s major Internet problem isn’t broadband providers blocking certain access or services, but government agencies attempting to strong-arm and jawbone Internet providers and platforms into censoring or removing content they don’t agree with. This is more concerning than any worst-case scenario dreamed up by FCC commissioners.

“Bringing these dead regulations back to life to enforce Depression-era rules on the web will be a losing issue for millions of Americans who enjoy greater Internet access and services than ever before.

“Rather than support Americans’ access to the Internet, it stands to threaten the vast entrepreneurial and tech spaces across our country and will push companies to set up in jurisdictions that promise true Internet freedom rather than state-imposed regulation of content and delivery of Internet services. It would be another failed initiative of so-called ‘Bidenomics.’

“We implore the FCC to hold an open and honest public engagement process on these proposed net neutrality regulations, and we are certain consumers will have their say against this proposal,” added Ossowski.


The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Ottawa, Washington, Brussels, Geneva, Lima, Brasilia, and other hotspots of regulation and inform and activate consumers to fight for #ConsumerChoice. Learn more at consumerchoicecenter.org.

***Please send media inquiries to yael@consumerchoicecenter.org.***

This Sneaky Bipartisan Bankruptcy Reform Will Sting Tech Consumers

If there’s one theme emerging this year in Washington, D.C., it’s the full-on bipartisan rampage against American tech firms.

In a courthouse just blocks away from the Capitol, Google is defending its search engine against the Justice Department, while down the street the Federal Trade Commission is finalizing its case to break up Amazon. The DOJ is also reportedly probing Elon Musk’s company expenses at Tesla, laying the groundwork for an eventual case against the tech mogul.

Congress’ anger toward technology companies is red-hot and taking shape in the unlikeliest of forms — federal bankruptcy law reform.

Republican Takes on the Bankruptcy Reform

Last week in the Senate Judiciary Committee, a hearing was held on reforms to Chapter 11 bankruptcies, aimed at ending “corporate manipulation” of its statutes.

The discussion highlighted recent examples of companies undergoing multidistrict class-action lawsuits and their strategy of spinning off separate holding companies to more quickly and efficiently adjudicate claims in bankruptcy courts, rather than endure years-long jury trials.

It’s known as a “Texas Two-Step.”

It’s a model that plaintiff attorneys and Democrats generally deplore, a fact repeatedly made clear during the hearing, but one that has proven to render judgments quickly and with a better assessment of whether claims against large companies are legitimate. Most interestingly, comments by Republican senators indicate their party’s intent on using Chapter 11 to target what they perceive as the “harms” of Big Tech.

“In social media, there is no model like this,” stated Sen. Lindsey Graham. “We may not agree on how to resolve this issue, but if you’re harmed by social media, you have nothing. Zero. Zip. There’s where I hope the committee can come together and create rights of actions.”

Sen. Josh Hawley, who recently authored a book titled The Tyranny of Big Tech and has positioned himself as a chief antagonist of Silicon Valley, went one step further.

“If you wanna know why private rights of action are so darn important, and why we need to use them against the big tech companies, this is the reason why,” he said.

Tech Consumers Will Be Harmed

When Republicans invoke a “private right of action,” they’re talking about allowing consumers to individually sue any company for privacy violations or other “harms” not yet defined.

While Hawley and Graham allude to a broad social media “harm,” independent researchers have yet to make any definitive case on what that means. Certainly not enough to mount a legal case.

Tech consumers who depend on these products and services could soon bear the brunt of the regulatory and legal costs we see all too often in health care, banking, and food production: upwardly creeping prices and less innovation.

Everything would change for tech users, advertisers, and adjacent industries. Whether these services are free won’t matter once the free-for-all litigation can begin and lawyer-funded TV ads and billboards coax the next class of plaintiffs for attempts at billion-dollar settlements.

With the threat of more lawsuits — legitimate or not — comes higher costs for compliance and adjudication. When the target is a consumer-facing company with thousands of products and millions of buyers, these added costs are passed down to consumers.

At the same time, these cases would clog the docket, crowding out the many real tort claimants who deserve justice, such as survivors of environmental catastrophes and victims of defective products.

Will Republicans Contract Lawsuit Fever?

Massive class-action lawsuits are the favored tool of legal firms because many companies would rather settle than subject themselves to lengthy litigation, which promises large payouts to the firms that organize the class and file the case.

Think of the corporate cases against Starbucks, a multimillion-dollar suit over its fruit drinks not having “enough fruit,” or Burger King, facing a class-action lawsuit over “false advertising” alleging that hamburgers in TV ads are larger than when they’re served in the fast-food restaurants.

The U.S. is arguably the most litigious country in the world, so these examples should come as no surprise.

If Republicans also contract lawsuit fever, we’ll see a world with an explosion of mass tort class-action lawsuits filed against American technology companies, many of which would be without merit.

This would tie up resources for hundreds of innovative firms that consumers know and love and would place even more inflationary pressures on prices. Not to mention that it would pervert the true purpose of our judicial system — to deliver justice.

American citizens and consumers rely on a fair and virtuous legal system to protect our rights and ways of life. If anything, we should continue to demand that this be upheld.

Yaël Ossowski is a Canadian-American journalist and deputy director of the Consumer Choice Center.

Published in American Spectator (archive link).

Comments on India’s Competition (Amendment) Act, 2023

Dear Competition Commission of India,

In response to your call for stakeholder groups to provide regulatory comments on the updates to the Competition Act, we want to offer thoughts from a consumer perspective. For reference, the Consumer Choice Center is a global consumer advocacy group championing policies that are fit for growth, promote tech innovation, and enshrine lifestyle freedom, all while promoting consumer choice.

In reviewing The Competition (Amendment) Act of 2023, we add the following:

Proposed Section 29A

With the proposed amendment in Section 29A, we would insert the phrase “and on consumer choice” after the phrase “an appreciable adverse effect on competition,” in order to adhere more precisely to a limited competition and antitrust standard that elevates the effect on consumers and prices, rather than on “competition” in the abstract.

Proposed Section 18

With the proposed amendments in Section 18, we would insert “consumer choice” before “competition,” again demonstrating the usefulness of consumer choice and pricing comparisons as a more accurate rubric for determining competition.

Overall, we view the Competition Commission’s updated guidelines on mergers and general antitrust law positively. As India’s digital economy grows and continues to offer unique goods and services to Indian consumers, we believe all Central Government agencies should adhere to a competition policy that upholds consumer choice and removes regulatory barriers that may be impeding it, barriers that can lead to higher prices or reduced competition. The impact on consumers is key.

Defining the adequate level of competition is an impossible task for any government agency or department; that determination is best left to consumers, who will better determine market size and performance. Regulatory barriers, fraud, and deception should be a more targeted focus for competition regulators than concerns about competition alone, whether domestic or foreign.

LINK TO PDF
