Technology

Sen. Rand Paul Comes to the Defense of Consumers’ Free Speech Online 

The consumer choice case for U.S. Sen. Rand Paul’s Standing to Challenge Government Censorship Act

“Kids Online Safety” Bills Threaten Consumer Choice and Free Speech

KOSA is a Trojan horse for online censorship, embraced by both parties, each frustrated with social media for its own political reasons.

Biden’s plan on “March-In Rights” will harm American innovation for years to come

China is catching up to the US with its 22% share of global R&D, and Beijing’s rate of growth is almost double that of the US. That means the United States’ leadership in R&D is in jeopardy. This won’t help.

🚩EU Council Members Should Vote Down Chat Control to Protect Encryption

In the wake of last week’s EU parliamentary elections, the European Council is wrapping up negotiations in the final proposals of this mandate. Chief among these is a proposed regulation that would mandate the scanning of digital communications to “prevent and combat child sexual abuse”.

While its name is uncontroversial, the devil is in the details. In short, this proposal, which critics call “chat control”, would end the wide adoption of the end-to-end encrypted services that millions have come to enjoy and rely on.

In debates over the past week, EU Council members have discussed the technical features and applicability of this law. The latest inter-institutional file tracking the regulation’s progress shows that many member states have serious concerns about what this law would usher in.

According to leaked documents, a final vote could happen as soon as this Wednesday. The German Pirate Party has provided additional information about how citizens can weigh in on this proposal.

If this regulation is enacted in its current form, police would gain newfound powers to force encrypted messaging providers to scan and moderate content in real time in order to avoid prosecution.

This means email services, messaging apps, VPNs, company databases, file uploads on secure servers, and much more would be required to detect and report any image, link, or material related to sexual exploitation or other crimes.

While this may seem like a reasonable political demand, the way widely adopted encryption protocols actually function means there would no longer be any secure communications between European citizens.

In addition, there is no guarantee that this newfound ability would not be abused by certain authorities to punish citizens who are otherwise practicing their free speech or using encrypted services to protect their information.

As Meredith Whittaker, president of the messaging app Signal, has pointed out, there is no technically feasible way to comply with this regulation without breaking encryption altogether.

This applies to financial information, company intellectual property, family chat groups, and online browser history. In fact, much of the modern Internet relies on encryption to securely and privately transmit data without getting into the hands of hackers and bad actors.
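Whittaker’s point follows from how end-to-end encryption works: the intermediary relaying a message only ever handles ciphertext. A minimal Python sketch (using a toy one-time pad rather than a production protocol like Signal’s, purely for illustration) shows why any mandated scanning has to happen on the user’s device before encryption:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a random key of equal length.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

# Only the two endpoints hold the key; the relay server never sees it.
message = b"family photo album link"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, key)          # all the server relays
assert xor_cipher(ciphertext, key) == message  # recipient recovers it

# A server holding only ciphertext has nothing meaningful to match a
# blocklist against. Any scan must run on plaintext, i.e. on the user's
# device before encryption -- which is exactly what breaks the
# end-to-end guarantee.
def scan_then_encrypt(plaintext: bytes, key: bytes,
                      blocklist: set[bytes]) -> bytes:
    if any(term in plaintext for term in blocklist):
        raise ValueError("flagged before encryption")  # client-side scanning
    return xor_cipher(plaintext, key)
```

The toy pad stands in for real ciphers, but the structural point is the same: “chat control” compliance means inserting the scanner at the one place plaintext still exists.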

Rather than tasking police agencies in member states with using lawful court orders and warrants to seek out information related to crime, the European Commission would like to mandate technological backdoors in encryption. For European consumers who appreciate and benefit from tech innovation, this should be a non-starter.

As I mentioned in my article on EU Tech Loop, ordinary consumer products have implemented encryption protocols to safeguard their users and their information. This has proven to be a marvel of innovation, and has unlocked new capabilities in digital services.

Encryption is not a tool sought by criminals, abusers, and bad actors, but rather a pillar of the modern digital economy, used by hundreds of millions of customers, citizens, and workers to protect their data and secure their communications.

When the various member state leaders meet at the European Council this week and throughout the summer, we hope they will vote against the proposed Chat Control plans.

European citizens should feel empowered to write to their national members of parliament, as well as their Members of the European Parliament, to voice their opposition.

In seeking to undo this regulation, we should reimagine how democratic societies can effectively prevent and prosecute crime without resorting to mass surveillance.

A US Stablecoin Bill Musters Strength, But It Falls Short of Giving Consumers What They Want

Last month, we finally saw the introduction of a comprehensive US bill to offer a legal pathway for digitally issued stablecoins, cryptocurrencies on open blockchains kept at parity with the US dollar.

The bill was introduced by Sens. Cynthia Lummis (R-WY) and Kirsten Gillibrand (D-NY), named the Lummis-Gillibrand Payment Stablecoin Act.

The bill outlines various measures for recognizing the value of stablecoin networks, as well as the various custodial services that would be required.

The existing market for stablecoins is already rich and highly competitive, with various tokens like Tether, DAI, and USDC launched on various blockchains, from Ethereum to TRON, Polygon and Solana. And all this exists, at least in the United States, without any framework for regulation.

Globally, stablecoins have become a necessary tool for safeguarding wealth from rapidly inflating currencies, used widely across the EU, Turkey, Argentina, and Southeast Asia.

In the last 30 days alone, there have been over $2.4 trillion worth of stablecoin transactions by over 26 million people across the globe. More than $146 billion in value is locked into these tokens, according to Visa’s On-Chain Analytics.

Even though Americans are using stablecoins in large numbers, the lack of regulatory certainty and the complications with on and off ramps mean many new stablecoin issuers are wary of offering services in the United States.

As such, the Lummis-Gillibrand Act is an important bill to read through, both for its advantages, but also its very serious shortfalls.

What’s to like:

It’s a starting point.

The uncertainty around stablecoins leaves them largely the payment of choice in decentralized markets and decentralized finance, keeping them far away from the traditional banking system.

This bill, whatever anyone might say, at least opens the conversation and allows us to understand how future legislation can be crafted. In the waning days of this Congress, it’s uncertain whether it will pass, but it’s a good shot.

It requires full reserves.

Today’s stablecoins compete on both their utility and the health of their reserves. That lawmakers recognize this is important, though a full-reserve mandate seems exceedingly stringent considering the realities of traditional banks: US banks are currently held to a reserve requirement of 0%. If the trade-off for allowing stablecoins is full-reserve issuers, most consumers would likely agree this is a good thing. Ideally, though, stablecoins would be allowed to compete as payment rails under the same rules as traditional banks, but that is likely too much to ask.
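The full-reserve constraint is simple to model: every token in circulation must be matched by a dollar actually held. This Python sketch is a toy illustration (the class and figures are invented, not drawn from the bill text) of the 1:1 backing rule:

```python
class FullReserveIssuer:
    """Toy model of a full-reserve stablecoin issuer: every token
    outstanding is backed 1:1 by a dollar held in reserve."""

    def __init__(self) -> None:
        self.reserves_usd = 0.0
        self.tokens_outstanding = 0.0

    def deposit_and_mint(self, usd: float) -> None:
        # Tokens may only be created against dollars actually received.
        self.reserves_usd += usd
        self.tokens_outstanding += usd  # minted at $1.00 parity

    def redeem(self, tokens: float) -> float:
        if tokens > self.tokens_outstanding:
            raise ValueError("cannot redeem more than the outstanding supply")
        self.tokens_outstanding -= tokens
        self.reserves_usd -= tokens
        return tokens  # dollars returned to the holder

    def fully_backed(self) -> bool:
        return self.reserves_usd >= self.tokens_outstanding

issuer = FullReserveIssuer()
issuer.deposit_and_mint(1_000_000.0)
issuer.redeem(250_000.0)
assert issuer.fully_backed()
```

A fractional-reserve bank, by contrast, could lend out most of those deposits; the bill would forbid stablecoin issuers from doing the same.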

Custodians will be strictly regulated

As we’d expect, custodians of stablecoin reserves would be held to stringent rules. There could be no rug pulls, funny tricks, or fraudulent accounting. That’s probably a good thing.

It aims to preserve the US’ unique dual banking system shared between the states and the federal government.

The bill recognizes the unique decentralized nature of the US banking system, empowering states and their institutions to oversee FinTech and banking institutions. The ability for non-depository trust companies to issue stablecoins would be game-changing. However, it does give veto power to the Federal Reserve, which almost makes the effort moot.

What’s not to like:

The Federal Reserve has ultimate veto power.

In a system where private stablecoins are allowed to exist, we would expect the US central bank, the Federal Reserve, to do everything it can to oppose them, as it has. Granting the Fed veto power likely means that no stablecoin would ever be approved.

As Cato Institute scholars Jack Solowey and Jennifer Schulp argue in CoinDesk, the Fed’s ability to block any digitally-based “competitors” would surely be a death knell.

The ceiling on reserves limits the potential for innovation and growth.

The bill caps at $10 billion the reserves of state trust companies that want to issue a stablecoin. The total liquidity a stablecoin would be allowed to have would barely rank among the top 150 banks by assets, significantly kneecapping a stablecoin protocol’s ability to innovate, be profitable, and reach large numbers of customers and holders.

These stringent rules would likely mean that only one stablecoin could potentially exist.

As this bill is written, the only conceivable candidate for a legal stablecoin, one with the resources to be issued by a state trust company, would be USDC, issued by the firm Circle. This would technically make all other stablecoins used by Americans illegal.

CONCLUSION

It’s obvious that there is high demand for a digital stablecoin based on the US dollar. With such a high volume and number of daily transactions, tens of millions of people are already using these tokens for both savings and spending.

The Lummis-Gillibrand bill makes a good first effort at paving a way for legalizing stablecoins, but unfortunately grants too much veto power to the Fed, caps the innovation and reserves these coins could have, and ultimately means we would be no closer to a system that both recognizes the utility of stablecoins while allowing ordinary people to use them.

Don’t co-parent with Congress

I’m always puzzled when I hear other parents say they’re worried about the effects social media might be having on their children. My confusion only grows when I see that the federal government is considering a ban on kids using social media. Are teens acquiring their own mobile devices and paying the bills? Doubtful. It seems someone gave them tacit permission to be on those platforms and the tools to do so. Yet many parents feel like they have no options other than to surrender to their kids’ desires or hate tweet their congressman to get the government to do something about TikTok.

I’m the parent of a teenage daughter who does not have any social media accounts. She has lived her life unplugged.

I remember very clearly when I decided to institute this policy, when she was about 4 years old. We were sitting together in the waiting room of the pediatrician’s office, and as usual, I was on my phone sending emails. She wanted to play with my device, and I declined by saying, “When you’ve learned to be comfortable alone with your thoughts, you can play with my phone.”

Read the full text here

AI Brings Real Hope for Better Healthcare

How many times have we heard from our leaders that their administration will be the one to finally end cancer? Medical innovation in the United States is profound compared to the rest of the world. Still, clearly, there are many more discoveries to be made. 

For breast cancer survivors like myself, the intersection of artificial intelligence and healthcare represents not just hope but tangible progress in the fight against this devastating disease.

Reflecting on my journey and those of so many others affected by all types of cancer, I can’t help but get excited at the potential of AI to revolutionize diagnostics, pharmaceutical innovation and direct patient care.

Diagnostics is the frontline in the battle against cancer, and it holds incredible significance for survivors. According to Harvard’s School of Public Health, using AI to make diagnoses may reduce treatment costs by up to 50 percent and improve health outcomes by 40 percent. Early interventions tend to cost a lot less.

New AI-driven systems are emerging, such as AsymMirai, which simplifies risk prediction by comparing differences between mammograms and can accurately predict breast cancer five years in advance. That’s a game-changer.

Innovation in early detection could spare countless women from unnecessary tests and invasive late-stage procedures, reducing the physical and emotional toll of the diagnostic process. As a survivor who had zero genetic markers and a limited family history of cancer, this breakthrough in early detection is what I needed when my fight with cancer first began. But that’s just the start.

In addition to earlier detection of breast cancer, AI has shown promise in recognizing skin cancer better than experienced dermatologists. A recent study found that AI detected skin cancer more accurately than 58 international dermatologists after examining more than 100,000 images. The importance of accurate and timely cancer detection cannot be overstated.

Outsourcing imaging diagnostics to AI could lead to quicker results, lower costs and better outcomes for patients and healthcare consumers.

In addition to diagnostics, the effect of AI on pharmaceutical innovation is equally exciting. While the advancements in pharmaceuticals over the last few decades have been monumental, AI could further expedite the drug discovery process and bring life-saving treatments to patients faster and cheaper.

On average, developing a drug takes more than 10 years and millions of dollars, but AI could streamline that process by better predicting how potential drugs might behave in the body. This would effectively eliminate a lot of slow-moving lab work.

Clinical trials for fully generative AI pharmaceuticals are already happening. Companies started trials last year for a drug called INS018_055, which aims to treat a chronic lung disease known as idiopathic pulmonary fibrosis. The hope is that AI can be applied to create more effective treatment options with fewer adverse side effects at a much quicker pace.

AI can easily analyze the entirety of medical records, scans, labs and other pertinent information to determine quickly which medications or treatments will be most effective. For providers, that means more time spent with patients and less staring at paperwork. Anyone who has worked in an office setting understands the connection between paperwork and staff burnout. AI can help alleviate burnout.

AI developers have taken flak recently from skeptics for their confidence that AI will function like a personal assistant and won’t disempower human beings.

From 2021 to 2022, more than 71,000 physicians left their clinical jobs, citing overwhelming administrative burdens associated with patient care. That means tasks such as charting during or after patient visits, calling in prescriptions to pharmacies and then being put on hold, determining billing codes, and other tedious, you might say, soul-crushing work.

Healthcare professionals are in the people business, and AI can empower them to spend more time with people. There’s a lot of reason to be hopeful. With my cancer behind me and my eyes toward the future, I’m really encouraged by what AI could bring to healthcare.

Originally published here

There is Nothing Wrong With Tiered Video Game Pricing

Ubisoft is not the most popular video game publisher these days. Popular search results for “Why is Ubisoft—” tend to point you toward “so bad”, “not working” or “hated”. Regardless, the game maker best known for Assassin’s Creed, Rainbow Six, Far Cry, and Watch Dogs notched a huge win by landing a Star Wars game, Star Wars Outlaws.

Star Wars is a highly coveted IP, and after Electronic Arts mishandled it for a decade, Lucasfilm opened the rights back up to studios with winning game ideas. Ubisoft’s Star Wars Outlaws drops in August, but already the storm clouds of gamer resentment are gathering over Star Wars’ “first-ever” open-world title. Gamers aren’t happy with the announcement of tiered pricing for Outlaws, ranging from $69.99 to $129.99.

In many ways, video game consumers are underserved when it comes to game quality, pricing, and choice, but anger around Star Wars Outlaws is misplaced.

As with most rage directed at game developers, price is the sticking point for consumers and Star Wars fans waiting to play Outlaws. Ubisoft is offering different editions of the game: $69.99 for the base game with a pre-order bonus that includes cosmetic customization for your character’s speeder, and $109.99 for all of that plus three days of early access before launch and a “Season Pass” for upcoming DLC expansions. DLC stands for downloadable content and usually means extra quests, new storylines, and experiences.

Priced at $129.99, consumers can get all of this as well as a digital concept art book for the game and more cosmetic “skins” for their player and vehicles. Lastly, Ubisoft offers Outlaws at $17.99 per month with its Ubisoft+ subscription, which gives subscribers access to all the perks plus 100+ other games from its library.

FandomWire called Outlaws tiers an “atrocious pricing strategy” and The Gamer ran an article saying it was “slimy” and “Star Wars Outlaws has no business charging over a hundred dollars for Ultimate and Gold editions.”

What all of this really means is that the game itself costs $70, and if a gamer deeply desires a pink gun mod, a Han Solo costume for their character, a concept art PDF, and 72 hours of early access to the game then they can purchase those things. This complaint encapsulates the long-running debate concerning pay-to-win tactics over rewards tied to merit.

DLCs are different. Game expansions used to be a la carte by design. A game would launch, it would hopefully be successful, and there would be clear market demand for more content to keep players engaged. Star Wars Galaxies was an early Star Wars massively multiplayer online game with a remarkable base game, and then a steady stream of DLC expansion packs that opened up new worlds and quests for players. They were usually around $25 apiece, and over the course of Galaxies’ run, you might have bought four of these DLCs before the game’s servers shut down in 2011.

Gamers are tired, and it’s understandable. AAA games from major studios are coming out very slowly, and they are increasingly released in hasty fashion, with work still to be done through digital updates. Ubisoft has slipped into this on a number of occasions. On top of that, the trend of in-game transactions and loot crates has tilted the relationship between gaming and rewards in favor of those willing to pay for perks. This is what earned Star Wars Battlefront II the most-downvoted comment in Reddit history at the time it was made.

However, it is impossible to ignore that making games is not a cheap business anymore. The resources needed to produce these games are only increasing. The demands of online multiplayer gaming with vast expansive worlds have led to games functioning almost like living documents that receive updates and expansions, requiring continuous revenue to fund the servers bearing the weight of the game’s success. The way publishers have kept prices down to date has been through the very models that gamers are frustrated by.

The world of video games has changed. Gone are the simple days of video game cartridges and disc collections. No studio has perfected a business model that’s both profitable and well-liked by consumers in the digital age of gaming. However, a world without tiered pricing options would only further exacerbate identified problems, leaving gamers worse off in the process.

Originally published here

Consumer Choice Center’s comment on the US government’s proposed KYC regulations for cloud servers

Earlier this year, the US Department of Commerce proposed a sweeping regulatory rule that would force cloud service providers to collect and retain personal information on their users, particularly those based outside the United States.

This regulation, prompted by President Joe Biden’s Executive Orders on the “National Emergency With Respect to Significant Malicious Cyber-Enabled Activities,” would require extensive record keeping and collection of user data for all Infrastructure as a Service (IaaS) providers, firms that offer what is commonly known as virtual machines, web servers, cloud computing and storage, Virtual Private Networks (VPNs), Bitcoin and cryptocurrency nodes, artificial intelligence models, and much more.

The intended targets are services that have customers based abroad, in order to stop malicious foreign actors and hackers, but the rule is written broadly enough that any cloud provider that doesn’t capture this information from its domestic US users would be liable for civil and criminal penalties.

The Consumer Choice Center submitted comments to oppose the Commerce Department’s proposed rule, requesting several changes and modifications to better protect data and consumer privacy.

It is found below:

Overbearing KYC Identity Requirements for Cloud Providers Put Consumers at Risk and Threaten Online Free Speech and Commerce

Dear Under Secretary Alan F. Estevez,

The Consumer Choice Center is an independent, non-partisan consumer advocacy group championing the benefits of freedom of choice, innovation, and abundance in everyday life. 

As an organization representing consumers around the country, we are deeply concerned with the proposed rule to require significant Know Your Customer (KYC) procedures for any and all Infrastructure as a Service (IaaS) providers, as detailed in Docket No. DOC-2021-0007.

If these rules as they stand are brought into effect, they will have immediate consequences on consumers and online users who create, use, and deploy all manners of online services, servers, cloud systems, and virtual machines. This includes services that allow users to deploy servers to host their own private document and photo content, Bitcoin and cryptocurrency nodes, artificial intelligence models, Virtual Private Networks (VPNs), and more, in accordance with the terms of service offered by IaaS providers.

While these rules are intended to provide more immediate access to information and data on malicious foreign actors using American cloud infrastructure, they will instead result in significant risk to individual privacy, facilitate the loss or malicious use of data, and grant extraordinary powers to government agencies that are inconsistent with the US Constitution and the Bill of Rights.

We understand the intention is to target foreign hostile actors, but the requirement placed upon US service providers will inevitably require that every American provide this information as well.

The requirement that service providers maintain exhaustive personal and financial information on their customers presents not only a gross violation of privacy, but a significant risk, as the thousands of IaaS providers will be in possession of vast amounts of personal data liable to be hacked or leaked.

What’s more, law enforcement agencies already possess enough tools and authority to follow legal processes to acquire warrants and obtain information.

We believe this proposed rule goes much too far in restricting Americans’ ability to use the online services of their choice, and would limit their ability to use servers and cloud services without significant risk to their privacy and personal data.

In addition, the exhaustive information required of a service that wishes to offer users the ability to run a virtual machine, server, AI model, or more will necessarily push most Americans to opt out of using domestic services entirely, creating economic consequences not calculated in the proposed rule’s costs of compliance.

We would recommend that this rule be revised entirely, removing the significant privacy risks that KYC collection on IaaS providers would require for domestic users, as well as the duplicative and extralegal authority that would be granted to law enforcement officers, in contravention of constitutional law.

Below, we list the two main areas of concern for US consumers.

KYC Requirements For Foreign Users Applied to Domestic Users

As noted in the Background provided in the Supplementary Information of the rule, these new powers would require service providers to segment users based upon their country of origin:

To address these threats, the President issued E.O. 13984, “Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities,” which provides the Department with authority to require U.S. IaaS providers to verify the identity of foreign users of U.S. IaaS products, to issue standards and procedures that the Department may use to make a finding to exempt IaaS providers from such a requirement, to impose recordkeeping obligations with respect to foreign users of U.S. IaaS products, and to limit certain foreign actors’ access to U.S. IaaS products in appropriate circumstances.

However, in order for IaaS providers to effectively determine a user’s location, they will be required by force of law, and at risk of civil and criminal penalties, to log, categorize, and document every user’s location and accompanying personal information regardless of where they reside, all in an effort to determine whether a potential user would be considered a “foreign user” or beneficial person.

This will lead to increased collection of information akin to bank accounts and financial transactions, leading to widespread “Know Your Customer” (KYC) requirements which have never been applied at this level to online services.

Lacking congressional approval, we believe this proposed regulation far exceeds the bounds of agency authority, whether from the Department of Commerce or via the mentioned Executive Orders, and would create significant areas of risk for ordinary users and customers located both abroad and within the United States.

In addition, the broad application and definition of a covered service – “any product or service offered to a consumer, including complimentary or “trial” offerings, that provides processing, storage, networks, or other fundamental computing resources, and with which the consumer is able to deploy and run software that is not predefined, including operating systems and applications” – essentially means any cloud service would be within the scope of this regulation.

The Risk of Privacy Breaches

As service providers would be required to maintain a robust Customer Identification Program, as outlined in § 7.302, this would therefore place liability on all cloud providers to collect and retain the full name, address, credit card number, virtual currency numbers, email, telephone numbers, IP addresses, and more on any potential customer of their service.

While we appreciate that private cloud providers and IaaS firms would have the latitude to determine how they structure their Customer Identification Programs, we believe that the requirement to collect this information and store it locally will constitute a high potential for that information to be accessed without authorization, either by hacks, leaks, or other malicious activity. 

Because service providers will be required to catalog this information for years on end, this will inevitably prove to be a high-value target for malicious actors, while providing minimal benefit to the law enforcement agencies that can already legally obtain this information via lawfully executed warrants.

Extraordinary and Duplicative Powers

Law enforcement agencies at the federal, state, and local level already possess the legal tools to subpoena or request data from cloud providers or VPN providers with lawfully obtained warrants. 

That IaaS providers would be required to not only retain this information, but also to preemptively “notify” law enforcement without any judicial order or suspicion of a crime, violates the Fourth Amendment and the Due Process Clause as interpreted from the Fifth and Fourteenth Amendments.

Section § 7.306(d) lays out the stipulation for exemption from the requirements as “voluntary cooperation” with law enforcement agencies, forcing providers to enable access to “forensic information for investigations of identified malicious cyber-enabled activities”. 

We believe this would be easily abused, as it would provide a legal path for companies to divulge customer information to government authorities beyond what is necessary and lawful, and provide incentives for firms and companies to voluntarily submit information on their customers to government agencies, law enforcement agents, and more.

As written, we believe this proposed rule has been offered in haste, and will likely lead to significant harms and risks to consumers’ data, privacy, and their liberty to engage in free commerce. We would urge this rule to be rewritten with these concerns in mind.

Sincerely yours,

Yaël Ossowski

Deputy Director,

Consumer Choice Center

A new federal privacy bill overdoses on empowering agencies over helping consumers

Late last week, a discussion draft of a new federal privacy bill was uploaded to the cloud server of the US Senate Commerce Committee and made public.

The bill, known as the American Privacy Rights Act, is the latest serious attempt by a bipartisan cohort of congressional legislators to address Americans’ privacy rights online, as well as the obligation of companies, nonprofits, and organizations that cater to them.

There have been numerous attempts at national privacy bills, but this is the first version that seemingly has bipartisan agreement across both the US House and Senate.

At the Consumer Choice Center, we have long championed the idea of a national privacy law, putting forth what we believe are the important principles such a law should have:

  • Champion Innovation
  • Defend Portability
  • Allow Interoperability
  • Embrace Technological Neutrality
  • Avoid patchwork legislation
  • Promote and allow strong encryption

Now that a serious bill has been put forward, authored by Sen. Maria Cantwell (D-WA) and Rep. Cathy McMorris Rodgers (R-WA), both chairs of the Commerce Committee in their respective congressional chambers, we’ll address what we consider helpful, but perhaps also harmful, to both consumer choice and future tech innovation if this bill remains in its current form.

Granted, this is a working draft of the bill, and will (hopefully) be updated after feedback. For those who are interested, here’s the latest primer on the bill from the bill authors.

I also provided some additional comments on this bill in a recent Q&A with Reason Magazine, which I’d encourage you to read here if you’re interested.

Off we go.

What’s to like:

A national privacy law is both necessary and welcomed. Not only because it would override the overly stringent state-level privacy laws in places like California and Virginia, but because it would provide uniform policy for consumers and companies that wish to offer them goods and services. 

And also because, as compared to the European Union and other countries, our privacy rights as Americans differ widely depending on the services or sectors we interact with, our IP address, and where we happen to live. Considering the hundreds of privacy policies and terms of service we accept each and every day, each of these contracts imports a vastly different framework.

Here are some positives on the American Privacy Rights Act:

  • Preemption of state privacy laws is a good measure introduced in the bill, particularly when it comes to the strict and overbearing California privacy law, which has become a standard bearer due to California’s huge population and company base.
    • This provides legal stability and regulatory certainty, so that consumers can know their particular rights nationwide, those who interact with these laws can begin to learn and implement them, and there is universality that protects everyone.

  • Data portability is an important principle and could conceivably become an easily enforceable section of privacy legislation. This should be both reasonable and accessible. This would include the exporting of information collected by a particular service or app, as well as any key account details, so that information can be ported over to competing services if consumers want to change things up.
    • Examples: open banking, exportable social profiles, info, etc.
    • Ideally, this information would be exportable using non-proprietary data formats.

  • Transparency on what data is collected and by whom (mostly data brokers) is also a good measure included in the bill. Most tech services and app stores have made this a key feature of what they provide because it’s important to consumers.
    • A registry of data brokers, which would be required, seems inoffensive and would be a good measure of transparency, as would a privacy policy requirement, which most sites already provide and which major app stores require.
    • However, as we'll mention later, government agencies (particularly law enforcement) are not barred from interacting with data brokers to circumvent warrants, which puts a great deal of Americans' data at risk.
      • Sen. Ron Wyden (D-OR) introduced S.2576, the Fourth Amendment Is Not For Sale Act, to address this issue, and its House counterpart passed yesterday.
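
The portability export described above can be as simple as serializing account data into open formats such as JSON and CSV. Here is a minimal sketch in Python, with a hypothetical profile record standing in for real service data:

```python
import csv
import io
import json

# Hypothetical account record; a real service would pull this from its own systems.
profile = {
    "username": "jane_doe",
    "email": "jane@example.com",
    "followers": 1280,
    "joined": "2019-04-02",
}

def export_json(record: dict) -> str:
    """Serialize the record as JSON, a non-proprietary, widely supported format."""
    return json.dumps(record, indent=2, sort_keys=True)

def export_csv(record: dict) -> str:
    """Serialize the record as a one-row CSV with a header line."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(record))
    writer.writeheader()
    writer.writerow(record)
    return buf.getvalue()

if __name__ == "__main__":
    print(export_json(profile))
    print(export_csv(profile))
```

Because both formats are openly documented, any competing service could ingest the exported file without licensing a vendor's proprietary schema, which is the whole point of the portability principle.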

These three points found throughout the bill measure up to the principles we've outlined in the past: data portability, avoiding patchwork legislation, and transparency over what data is and isn't collected. Most online services already offer this information in privacy policies, and when mediated through phone or computer app stores, consumers have direct insight into what is collected.

This is a good starting point, and does demonstrate that the legislators are working in good faith to try to protect Americans’ privacy.

But while those are important, they should also be balanced with consumer access to innovative goods and services, which is a cornerstone of our ability to choose the technology we want.

What’s not to like:

While a strong national privacy law is vital, we should also make certain that it is balanced, appropriate, and fair. Consumer protection is an overarching concern, but so are responsible stewardship of data where consumers want it and the ability to access innovations that improve our lives.

These aspects of the bill are more troublesome, as they would likely invite more problems than they would solve.

  • An outright ban on targeted advertising is unworkable and would ultimately work against consumers. It would also effectively cut off an important revenue source for most of the online services that consumers appreciate and use every day.
    • This algorithmic approach to reaching willing users relies on geo-targeting and personalization, which are key to the consumer experience and a willing trade-off for consumers who want free or otherwise heavily discounted services.
    • Targeted ads are also a prime concern for small businesses, which rely on them to reach their customers online.
    • At the same time, the prohibition on large social media companies offering paid subscription plans to those who don't want to participate in targeted advertising seems counterintuitive and goes against the spirit of what the bill is trying to achieve.
    • A privacy bill is supposed to be about giving consumers ultimate autonomy and decision rights, not outlawing a particular business model.

  • Inventing a right of "opt-out" would necessarily create several tiers of consumers and would complicate virtually any business's attempt to collect necessary information on its customers. It would amount to a de facto ban on targeted advertising: social media services would be unable to offer paid versions to their users, and small businesses would not be able to use social networks to advertise to consumers whom they believe would like to buy their goods or use their services.

  • Data minimization is a good principle, but it is an unworkable legal standard because its meaning varies so widely from one app, nonprofit, or company to the next.
    • Data needs change as firms and organizations evolve, and whatever standard this law enforced would likely make it harder for companies to scale and to offer better and more affordable services to consumers in the future.

  • One of the more offensive parts of the bill is the private right of action, which would be more encompassing than in any privacy law in the world. It would also not allow suits to be settled in arbitration, meaning every lawsuit – no matter its merits – would have to be reviewed by a judge.
    • Private right of action would empower plaintiff attorneys and deter innovation on the part of firms, vastly bloating our justice system.
    • This wouldn’t be positive for consumers, as it would likely raise the cost of goods and services, and would generally add to the overall litigious nature of the US judicial system.
    • At the Consumer Choice Center, we’ve long campaigned on rolling back the excesses of our tort law system and introducing simple legal reforms to better serve those who are legitimately harmed by companies.

  • 🚨The bill exempts government agencies at every level from any privacy obligations. This is a glaring red flag, especially considering the amount of sensitive data that has been routinely leaked, hacked, or made available to the public when it shouldn’t have been. Exempting government agencies from privacy rules is an egregious mistake.
    • If a state's database of, say, gun owners is leaked (as happened in California), there is no crime and no foul. The same goes for a local or city government that leaks your income information, Social Security number, healthcare data, or any other type of information. This should be addressed immediately in the bill to introduce parity.

  • Prior restraint for algorithms would give the Federal Trade Commission and other agencies veto power over all "computer processes" before they can be used by the public. The FTC would need access to all algorithms and AI innovations before launch, which would have a chilling effect on innovation and restrict entrepreneurial data projects and the development of AI models.
    • This would be a huge veto on American free enterprise and the future of tech innovation in our country, and risks driving our best and brightest abroad.

  • The FTC and state attorneys general would be responsible for enforcing these rules, but much would be litigated through private rights of action (torts, etc.), which generally favor incumbents with the resources to comply. So while much of this bill aims to rein in "Big Tech," those same firms will paradoxically likely be the only ones able to comply.
    • In addition, the Department of Justice and the FTC have built reputations as anti-tech forces in our federal government. Would this newfound power lead to better goods and services for consumers, or to more limited options that please regulatory authorities for ideological reasons? It is a difficult pill to swallow in either case.

Is there another way forward?

Assuming most of the glaring issues with this bill are fixed – the soft ban on targeted advertising, the exemption of government agencies, the empowerment of bogus lawsuits through a private right of action, the inability to bring cases to arbitration, and the FTC's veto over algorithmic innovation – there are elements favorable to those who want a good balance of consumer choice and innovation in our economy while protecting our privacy.

While these are all measures that a national privacy bill could address, there is still much more that we as individuals can do ourselves, using tools that entrepreneurs, developers, and firms have provided to help us be both more private and free. We hope legislators will take these concerns seriously and amend some of these provisions in the draft bill.

The normalization of end-to-end encryption in messaging, data, and software has been a great counterbalance to the endless series of leaks, hacks, and unnecessary disclosures of private data that have caused objective harm to citizens and customers. We hope this is encouraged and becomes default for digital services, as well as remains protected for use by both firms and consumers.

For another view, the International Center on Law and Economics has an interesting paper on the idea of "choice of law" as the better approach for privacy rights, opening up the selection of a particular privacy regime to market choice rather than top-down legislation, similar to private commercial courts in the United Arab Emirates. This would allow states to compete for business by offering the most balanced privacy law, which could spur a lot of innovative thinking about better ways to approach this.

That said, this is effectively how privacy law has been practiced in the country to date, and California has won by default owing to its large population. I'm not sure we could trust many other states to craft balanced but effective privacy laws that wouldn't create more trouble than they would solve. But I would be happy to be proven wrong.

While this privacy bill is ambitious and covers a lot of ground that is vital for privacy concerns, many elements would still require sweeping changes before it would be palatable for consumers who desire choice, prefer innovation, and want to ensure that our society remains both free and prosperous.

FCC’s plan to make your Internet a ‘public utility’ will only make it worse

WASHINGTON, D.C. – This week, the Federal Communications Commission revived its proposal to reclassify Internet providers as public utilities under Title II of the Communications Act of 1934, commonly known as “net neutrality.” The FCC vote will take place on April 25.

This marks a step back for all American Internet users, who have thus far profited from a more innovative Internet marketplace since the repeal of these rules in 2017 by former chair Ajit Pai.

Yaël Ossowski, deputy director of the Consumer Choice Center, reacts:

“Resurrecting the idea of Title II regulation of Internet Service Providers, after its successful repeal in 2017, is the idea that nobody needs, certainly not in 2024. Since then, we’ve seen incredible innovation and investment, as more Internet customers begin using mobile hotspots and satellite Internet, getting more Americans online than ever before. No one is asking for this proposal and no one needs it.

“Regulating ISPs like water utilities or electricity providers is a path toward more government control and oversight of the Internet, plain and simple, and will only make things worse,” said Ossowski.

“As we’ve seen with the recent court cases before the Supreme Court, today’s major Internet problem isn’t broadband providers blocking certain access or services, but government agencies attempting to strong-arm and jawbone Internet providers and platforms into censoring or removing content they don’t agree with. This is more concerning than any worst-case scenario dreamed up by FCC commissioners.

“Bringing these dead regulations back to life to enforce Depression-era rules on the web will be a losing issue for millions of Americans who enjoy greater Internet access and services than ever before.

“Rather than support Americans’ access to the Internet, it stands to threaten the vast entrepreneurial and tech spaces across our country and will push companies to set up in jurisdictions that promise true Internet freedom rather than state-imposed regulation of content and delivery of Internet services.

“We implore the FCC to hold an open and honest public engagement process on these proposed net neutrality regulations, and we are certain consumers will have their say against this proposal,” added Ossowski.


The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Ottawa, Washington, Brussels, Geneva, Lima, Brasilia, and other hotspots of regulation and inform and activate consumers to fight for #ConsumerChoice. Learn more at consumerchoicecenter.org.

US sues Apple, alleging iPhone monopoly

The Department of Justice and 16 state and district attorneys general on Thursday sued Apple, accusing the tech giant of breaking federal antitrust law by creating an ecosystem that doesn’t allow other companies to compete with the iPhone, smothering innovation in the smartphone market. 

“Apple has consolidated its monopoly power not by making its own products better, but by making other products worse,” U.S. Attorney General Merrick Garland said at a press conference Thursday. “Consumers should not have to pay higher prices because companies break the law.”

The complaint alleged the company maintains a smartphone monopoly by preventing others from building applications that compete with Apple’s staples, like the digital wallet. The tech giant also makes other companies’ technology more difficult to pair with Apple products, as exemplified by the green bubbles that the iPhone shows when texting an Android user.

Garland said Apple was willing to “make the iPhone less secure and less private in order to maintain its monopoly power.”

Responding to the lawsuit, Apple said it threatens “the principles that set Apple products apart in fiercely competitive markets” and it would “set a dangerous precedent, empowering government to take a heavy hand in designing people’s technology.”

Read the full text here



© COPYRIGHT 2025, CONSUMER CHOICE CENTER
