

Lawsuit Against Google’s Algorithms Could End the Internet As We Know It

A lawsuit against Google seeks to hold tech giants and online media platforms liable for their algorithms’ recommendations of third-party content in the name of combating terrorism. A victory against Google wouldn’t make us safer, but it could drastically undermine the very functioning of the internet itself.

The Supreme Court case is Gonzalez v. Google. The plaintiffs are relatives of Nohemi Gonzalez, an American student killed in the 2015 ISIS terror attacks in Paris. They are suing Google, YouTube’s parent company, for not doing enough to block ISIS from using its website to host recruitment videos while recommending such content to users via automated algorithms. They rely on antiterrorism laws allowing damages to be claimed from “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”

If this seems like a stretch, that’s because it is. It’s unclear whether videos hosted on YouTube directly led to any terror attack or whether other influences were primarily responsible for radicalizing the perpetrators. Google already has policies against terrorist content and employs a moderation team to identify and remove it, although the process isn’t always immediate. Automated recommendations typically work by suggesting content similar to what users have viewed, since that content is most likely to be interesting and relevant to them on a platform that hosts millions of videos.
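The similarity-based recommendation logic described above can be sketched in a few lines. This is a hypothetical toy, not YouTube’s actual system: the video names, topic tags, and the overlap score used here are all invented for illustration.

```python
# Toy sketch of similarity-based recommendation (hypothetical, not YouTube's
# actual algorithm). Each video is described by a set of topic tags; unseen
# videos are ranked by how much their tags overlap with the user's history.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(catalog: dict, watch_history: list, top_n: int = 2) -> list:
    """Rank unseen videos by average similarity to the watch history."""
    watched_tags = [catalog[v] for v in watch_history]
    scores = {}
    for video, tags in catalog.items():
        if video in watch_history:
            continue  # never re-recommend something already watched
        scores[video] = sum(jaccard(tags, w) for w in watched_tags) / len(watched_tags)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Invented example catalog: video -> topic tags
catalog = {
    "cooking-101": {"cooking", "howto"},
    "knife-skills": {"cooking", "howto", "knives"},
    "guitar-intro": {"music", "howto"},
    "cat-compilation": {"cats", "funny"},
}

# A user who watched a cooking video is shown the most similar unseen videos.
print(recommend(catalog, watch_history=["cooking-101"]))
```

The point of the sketch is that the ranking is purely mechanical similarity, with no judgment about the content itself, which is why recommendations track whatever the user has already viewed.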

Platforms are also shielded from liability for what their users post and are even permitted to engage in good-faith moderation, curation and filtration of third-party content without being branded publishers of it. This is thanks to Section 230, the law that has allowed for the rapid expansion of a free and open internet where millions of people a second can express themselves and interact in real time without tech giants having to monitor and vet everything they say. A lawsuit victory against Google would narrow the scope of Section 230 and the functionality of algorithms while forcing platforms to censor or police more.

Section 230 ensures that Google won’t be held liable for merely hosting user-submitted terrorist propaganda before it was identified and taken down. The proposition that these protections extend to algorithms that recommend such content remains untested in court, but there’s no reason why they shouldn’t. The sheer volume of content hosted on platforms like YouTube means that automated algorithms for sorting, ranking and highlighting content in ways helpful to users are essential to the platforms’ functionality. They’re as important to user experience as hosting the content itself.

If platforms are held liable for their algorithms’ recommendations, they’d effectively be liable for third-party content all the time and may need to stop using algorithmic recommendations altogether to avoid litigation. This would mean an inferior consumer experience that makes it harder for us to find information and content relevant to us as individuals.

It would also mean more “shadow-banning” and censorship of controversial content, especially when it comes to human rights activists in countries with abusive governments, peaceful albeit fiery preachers of all faiths, or violent filmmakers whose videos have nothing to do with terrorism. Since it’s impossible to vet each submitted video for terrorism links even with a large moderation staff, tooling algorithms to block content that could merely be terrorist propaganda may become necessary. 

Conservative free speech advocates who oppose big-tech censorship should be worried. When YouTube cracked down on violent content in 2007, it led to activists exposing human rights abuse by Middle Eastern governments being de-platformed. Things will get even worse if platforms are pressured to take things further.

Holding platforms liable like this is unnecessary, even if taking down more extremist content would reduce radicalization. Laws like the Digital Millennium Copyright Act provide a notice-and-takedown process for specific illegal content, such as copyright infringement. This approach is limited to user-submitted content already identified as illegal and would reduce pressure on platforms to remove more content in general.

Combating terrorism and holding big tech accountable for genuine wrongdoing shouldn’t involve precedents or radical laws that make the internet less free and useful for us all.

Originally published here

The Great Danger of CBDCs


There have been numerous announcements of central banks starting to explore the idea of introducing Central Bank Digital Currencies (CBDCs).

Examples range from the eNaira, a CBDC issued by the central bank of Nigeria, to the digital yuan in China, to the European Central Bank exploring the idea of a digital euro. In fact, according to Bank for International Settlements research, 90% of the 81 central banks surveyed have been investigating, in some shape or form, the idea of introducing a central bank digital currency.

According to the same survey, an increasing number of countries are adjusting the legal authority of their central banks, adding provisions that allow for the launch of digital currencies.

These central banks argue that CBDCs will help with financial inclusion by providing more access to financial services for the underbanked and unbanked, that they will lead to a significant reduction in fraud and money laundering, and that they will improve efficiency and ultimately allow for better monetary policy through more control over the money supply.

CBDCs are often thought of as governments’ response to crypto, the way central banks are trying to get with the times and digitize money. However, beyond utilizing similar technologies, they are fundamentally different from Bitcoin and many other cryptocurrencies.

The most significant difference between CBDCs and Bitcoin lies in the level of centralization and control. While Bitcoin is a fully decentralized currency operating on a distributed ledger that no one person or organization can control, CBDCs are issued and fully controlled by a central bank, which controls their supply, issuance, and use.

Bitcoin was created as a decentralized alternative to traditional fiat currencies and as a response to the monetary policies of central banks, which create uncertainty and are responsible for devaluing money, with ripple effects throughout the economy. CBDCs would equip governments with tools providing fast and easy total control over monetary policy, to the point of targeting specific businesses, organizations, and individuals.

The level of control that a government would have over every transaction and the ability to apply transaction censorship over anyone would give leaders a level of control unprecedented in history, a tool that any totalitarian leader from a few decades ago could have only dreamed of. 

One could argue that most money already is digital, an endless collection of 0s and 1s. However, the crucial distinction is that no single database can track and oversee every transaction that exists. There are a number of laws and regulations in place that allow law enforcement to request access to records of interest, and courts are required to approve such requests.

Forgoing these checks and balances and allowing one-click access to citizens’ accounts would not only grant unprecedented power in terms of privacy violations but also create an opportunity to monitor or deactivate undesirable accounts based on any perceived or real violation.

Taking away someone’s ability to sustain themselves by locking their accounts is tantamount to jailing them. Giving officials the option to freeze or ban certain accounts without due process could seriously damage the principles of the rule of law on which our society rests.

The potential for elected or appointed officials to affect a citizen’s livelihood in such a way could have serious consequences, such as citizens no longer daring to exercise their right to free expression for fear of their lives being ruined in a single click. It is not hard to imagine the many ways a malicious actor could abuse this centralized power, and many other unintended consequences are possible, some of which could create immense levels of social distrust.

Then there is privacy. Transactions made using CBDCs may be recorded on a public blockchain, making it possible for others to track and analyze financial data. Having citizens use a tool that could affect their privacy on a scale unimaginable thus far in human history would be a grave violation of the right to privacy and would, without a doubt, lead to additional problems.

You thought your browsing history could be turned against you? Anyone having access to every monetary transaction you have ever made would be far worse, and it is easy to imagine dozens of ways that bad actors could exploit access to that kind of information.

Another often overlooked potential consequence of introducing Central Bank Digital Currencies is digital monetary competition. If we see a rise in digital currencies issued by central banks, they will likely enter a race with other countries’ currencies as well as private or decentralized ones, such as Bitcoin. This sort of competition could expose unknowing citizens to unforeseeable currency fluctuations and create even greater instability for some national currencies. The ways this could affect purchasing power and lead to potential civil unrest are evident.

These are only a few of the ways that adoption of Central Bank Digital Currencies could affect life as we know it. It is easy to see how an extremely centralized, highly controlled and surveilled currency would spell the end of many of the freedoms our societies enjoy, and why, in contrast, Bitcoin, a highly decentralized, secure and censorship-resistant currency, is immensely important and represents one of the most potent tools humanity has today.

Aleksandar Kokotović is the crypto fellow at the Consumer Choice Center.

An Illustration of Why Section 230 Should Be Preserved, Not Scrapped

Removing Section 230 would stifle engagement and interaction in the online realm.

Section 230 of the 1996 Communications Decency Act is currently being called into question by lawmakers, and this raises red flags for both producers and consumers within the online realm.

Section 230 states “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

If the granting of this protection were to be removed, any online sharing site (ranging from food blogs to Facebook pages to search engines to simulation games) could be held accountable for the online activity of users and affiliates.

For example, the recent contested image from Jamie Lee Curtis’ account would make Instagram liable for the allegedly offensive or artistic picture featured in the post.

And while Big Tech may be able to bulk up capacity to counter claims for such cases, social media startups and casual content creators better beware.

It is not a matter of whether online activity should be moderated, but rather who does the moderating. If Section 230 protections were to be removed, this would discourage the creation of new social networking sites and create a mandate for an online surveillance state.

So, since some political officials believe online service providers should be held liable for suggestions, search results, and social feeds, and given that hearings on the Hill have exposed an ineptitude for most things tech related, here is an illustration of how Section 230 plays out in an offline scenario.

Read the full text here

Will TikTok Be Banned in Europe?

In a position statement addressed to EU decision-makers on Tuesday, the Consumer Choice Center writes that it is time for the EU to step up its measures concerning TikTok, “before it is too late.”

In recent days, it is as if the air has begun to run out for TikTok in the EU.

On January 19, Thierry Breton, the European Commissioner for the Internal Market, held a video call with Shou Zi Chew, CEO of the Chinese short-video platform. Referring to TikTok’s primarily teenage audience, the Commissioner said:

“As soon as possible”

Breton added that, as a platform reaching millions of young Europeans, TikTok must fully comply with EU legislation, particularly the Digital Services Act. In language unusually sharp for the EU, he added that he had asked TikTok’s CEO to present “as soon as possible” “not only the efforts but also their results.”

The January 19 talks were preceded by an in-person meeting in Brussels on January 10, where Shou Zi Chew exchanged views with several commissioners about the platform’s European future. Vera Jourova, Vice-President of the European Commission, stated that there must be no doubt that European users’ data is safe and not exposed to illegal access by third-country authorities. According to French President Emmanuel Macron, the Chinese platform “looks deceptively innocent” but is addictive and spreads Russian disinformation.

Read the full text here

The best answer to TikTok is a forced divestiture 

As consumer advocates, we pride ourselves on standing for policies that promote growth, lifestyle freedom, and tech innovation.

In usual regulatory circumstances, that means protecting consumers’ platform and tech choices  from the zealous hands of regulators and government officials who would otherwise seek to shred basic Internet protections and freedom of speech, as well as break up innovative tech companies. Think Section 230, government jawboning, and consequences of deplatforming.

As such, the antitrust crusades by select politicians and agency heads in the United States and Europe are of primary concern for consumer choice. We have written extensively about this, and better ways forward. Many of these platforms make mistakes and severe errors on content moderation, often in response to regulatory concerns. But that does not invite trust-busting politicians and regulators to meddle with companies that consumers value.

In the background of each of these legislative battles and proposals, however, there is a special example found in the Chinese-owned firm TikTok, today one of the most popular social apps on the planet. 

RELATED: Forcing TikTok’s divestiture from the CCP is both reasonable and necessary

The Special Case of TikTok

Now owned by ByteDance, TikTok offers a similar user experience to Instagram Reels, Snapchat, or Twitter, but is supercharged by an algorithm that serves up short videos that entice users with constant content that autoloads and scrolls by. Many social phenomena, dances, and memes propagate via TikTok.

In terms of tech innovation and its proprietary algorithm, TikTok is anything but a dime a dozen. There is a reason it is one of the most downloaded apps on mobile devices in virtually every market and language.

Researchers have already revealed that China’s own domestic version of TikTok, Douyin, restricts content for younger users. Instead of dances and memes, Douyin features science experiments, educational material, and time limits for underage users. TikTok, on the other hand, seems to have a souped-up algorithm with an ability to better attract, and hook, younger children.

What makes it special for consumer concern beyond the content, however, is its ownership, privacy policies, and far-too-cozy relationship with the leadership of the Chinese Communist Party, the same party that oversees concentration camps for its Muslim minority and repeatedly quashes human rights across its territories.

It has already been revealed that European users of TikTok can have, and have had, their data accessed by company officials in Beijing. The same goes for US users. Considering the ownership location and structure, there isn’t much that can be done about this.

Unlike tech companies in liberal democracies, Chinese firms require direct corporate oversight and governance by Chinese Communist Party officials – often military personnel. In the context of a construction company or domestic news publisher, this doesn’t seemingly put consumers in liberal democracies at risk. But a popular tech app downloaded on the phones of hundreds of millions of users? That is a different story.

How best to address TikTok in a way that upholds liberal democratic values

Among liberal democracies, there are a myriad of opinions about how to approach the TikTok beast.

US FCC Commissioner Brendan Carr wants a total ban, much in line with Sen. Josh Hawley’s proposed ban in the U.S. Senate and U.S. Rep. Ken Buck’s similar ban in the House. But there are other ways that would be more in line with liberal democratic values.

One solution we would propose, much in line with the last US administration’s stance, would be a forced divestiture to a U.S.-based entity on national security grounds. This would mean a sale of US assets (or assets in liberal democracies) to an entity based in those countries that would be completely independent of any CCP influence.

In 2019-2020, when President Donald Trump floated this idea, a proposed buyer of TikTok’s U.S. assets would have been Microsoft, and later Oracle. But the deal fell through.

But this solution is not unique.

We have already seen such actions play out with vital companies in the healthcare space, including PatientsLikeMe, which uses sensitive medical data and real-time data to connect patients about their conditions and proposed treatments. 

When the firm was flooded with investments from Chinese partners, the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) ruled that a forced divestiture would have to take place. The same has been applied to a Chinese ownership stake in Holu Hou Energy, a U.S.-subsidiary energy storage company.

In vital matters of energy and popular consumer technology controlled by elements of the Chinese Communist Party, a forced divestiture to a company regulated and overseen by regulators in liberal democratic nations seems to be the most prudent measure.

This has not yet been attempted for a wholly-owned foreign entity active in the US, but we can see why the same concerns apply.

An outright ban or restriction of an app would not pass constitutional muster in the US, and would have chilling effects for future innovation that would reverberate beyond consumer technology.

This is a controversial topic, and one that will require nuanced solutions. Whatever the outcome, we hope consumers will be better off, and that liberal democracies can agree on a common solution that continues to uphold our liberties and choices as consumers.

Yaël Ossowski is the deputy director of the Consumer Choice Center.

Infantilizing teens won’t protect them online, but it could threaten tech freedom

It’s for the children, they say.

A new California law promises to protect minors from harms posed by online platforms like Instagram, YouTube and TikTok. Instead, though, it threatens to increase censorship of controversial and politically sensitive speech, while slamming start-ups with immense costs and compromising the privacy of those it’s meant to protect.

Set to take effect in 2024, the California Age-Appropriate Design Code Act doesn’t specify tangible harms it’s meant to shield minors from. Nor does it empower parents with oversight over what their kids see online. Instead, it will use the threat of exorbitant fines to force big and small firms alike to identify and “mitigate harmful or potentially harmful” speech to minors, while requiring them to tool their algorithms to “prioritize” content that’s in their “best interests” and supports their “well-being.”

The inherently subjective nature of these terms means that companies will be forced to censor content based on what Big Brother or Big Bureaucracy thinks or says is harmful, while promoting content and speech they approve of. Companies also face lawsuits if the attorney general isn’t happy with how they enforce their own moderation standards. This could easily be weaponized by partisan AGs from either party to score political points by signaling the kinds of content they deem inappropriate for minors. In this respect, the law could encourage the kind of collusion between tech giants and government to suppress or promote viewpoints or agendas that violates the First Amendment.

While the law’s intention of protecting minors from age-inappropriate content is commendable, it has a critical flaw. It classifies everyone under 18 as a child, even minors who are nearly old enough to vote, get conscripted or serve on juries. This overbroad definition and the threat of billions in fines means that regardless of what politicians or regulators choose to take action on, companies are still likely to err on the side of censorship when it comes to age-appropriate content. That will likely mean shielding minors from important resources, including research on controversial subjects they might find necessary for school or college projects.

It’s also hard to see how several of the bill’s features, including a ban on enabling auto-play for all videos shown to minors, have anything to do with protecting kids rather than merely undermining the functionality of online entertainment platforms.

But perhaps the Act’s worst features are those around privacy. On one hand, it requires extensive paperwork, including privacy impact assessments and subjective “harm” assessments around new website features and how they could impact minors. This will lead to increased costs for start-ups and delays in bringing new innovations to the market for all users.

The law also imposes stricter identity and age verification requirements for minors, which would likely involve collecting and storing sensitive identity information and documentation. Cyberattacks have compromised the servers of even the world’s top tech giants and governments, exposing millions of users’ sensitive personal data to hackers. Forcing businesses, regardless of size and resources, to collect and store such data is a massive privacy risk for the very people the law claims to protect. These businesses, which differ in data protection standards and capabilities, would become lucrative targets for hackers.

News stories, like Balenciaga’s recent advertising campaigns, apparently showing children with teddy bears in bondage gear, and internal studies linking Instagram use to self-harm and self-image issues for teens, rightly raise concerns about protecting minors online.

But targeted laws around these concrete problems and harms accompanied by better education to empower minors in navigating the online world would be far preferable and beneficial for them than radical legislation that infantilizes teenagers, suppresses speech, compromises privacy, and risks making the internet less functional for everyone.

Originally published here

Is the FTC kneecapping VR before it even gets off the ground?

In a courtroom in San Jose, California today, the US government squared off against a social media company and grilled that company’s CEO about its investments in another technology company, and its general business strategy for the new field of wearable virtual reality.

The app in question, the fitness VR app Within, is poised to be acquired by social media giant Meta (formerly Facebook) for use on its virtual reality headsets and ecosystem.

The deal itself has not yet been finalized, but that hasn’t stopped the nation’s antitrust agency from flexing its muscles in Silicon Valley.

When Meta CEO Mark Zuckerberg took the stand today, lawyers from the Federal Trade Commission peppered him with questions on the overall business strategy of Meta’s well-known pivot to the metaverse, or virtual reality space, and on whether his plans were about… business success.

If the FTC succeeds, it will halt Meta’s purchase of the workout app Within, developed by Los Angeles developers beginning in 2014. While that may put smiles on the faces of some regulators and populist politicians in Washington, D.C., it will do nothing for consumers. And it may even harm the future development of this entire sector.

At last estimate, the entire “metaverse economy” is projected to be worth anywhere from $800 billion to several trillion dollars by 2030. Meta itself has poured in an ungodly $10 billion in the last year alone, and its own products are still rather limited in terms of user adoption.

The fact that the FTC and other regulators are trying to kneecap virtual reality, before it really even begins, is more startling than anything else.

If the last two decades of economic growth and innovation from Silicon Valley have taught us anything, it is that capital, talent, and business acumen are crucial ingredients for success and user satisfaction, but it isn’t everything. A supportive infrastructure, an investment-friendly climate, and a high demand for developers and skilled employees are also necessary and bring with them exponential benefits.

The companies and firms that have spun off from talent formerly of giants like Google and PayPal — not to speak of Elon Musk, Peter Thiel, and the rest of the PayPal Mafia — have undoubtedly made consumers’ lives better, and helped our economy grow by leaps and bounds.

Among those successes, there have been thousands more failures, but those have been at the hands of consumers and users rather than government agencies and federal lawsuits by regulators. And if the media coverage surrounding this case gives any indication, it seems much of this action stems not from antitrust law or precedent, but rather as a kind of payback.

The Associated Press ran a bizarre “analysis” last week, framing the FTC v. Meta/Within case as some kind of retribution for Facebook’s acquisition of Instagram in 2012. Back then, that decision was largely panned by technology journalists and never received a peep from regulators. Since then, Instagram has grown to become one of the most popular apps found in app stores.

Citing Instagram’s success over the last decade, achieved thanks to Meta’s investments and entrepreneurial prowess, as some kind of evidence for halting all future mergers and acquisitions by a company that over a billion global consumers use is not only wrong, it raises the question of why the FTC is even involved in the first place.

Consumers benefit when competitors compete, when innovators innovate, and when laws provide regulatory clarity and guidance to protect consumers and police bad actors.

But this case seems more like a hunt for ghosts of Christmas past rather than protecting us from any real harm. And it may do more damage than regulators estimate.

My colleague Satya Marar summed this up in RealClear last month:

Start-ups depend on millions in investment to develop and deploy their products. Investors value these firms based not only on the viability of their products, but on the firm’s potential resale value. Larger firms also often acquire smaller ones to apply their resources, existing expertise and economies of scale to further develop their ideas or to expand them to more users.

Making mergers and acquisitions more expensive, without strong evidence they’ll hurt consumers, makes it tougher for start-ups to attract the capital they need and will only deter innovators from striking out on their own or developing ideas that could improve our lives in an environment where 90% of start-ups eventually fail and 58% expect to be acquired.

The job of the FTC is not to protect consumers from innovations that have not yet happened. That should be the furthest thing from its mission. Rather, it should be focused on consumer welfare, punishing bad actors that take advantage of consumers, break laws, and cause real consumer harm.

Mergers and acquisitions provide value for consumers because they match great ideas and technology with the funding and support to scale them for public benefit. Especially considering the metaverse is so new, it is frankly bewildering that we would be wasting millions in taxpayer dollars to chase down an investment before it even bears fruit — just because a company was too successful last time.

When it comes to our regulatory agencies, we have to ask who they are looking out for: the consumers who wish to benefit from future innovations? Or incumbent players who want to slay the largest dragon in the room?

In this case, it seems the FTC has stretched a bit too far, and consumers may be worse off for it.

Reputation Works Better Than Regulation: Why Demand Should Determine Prices

Dynamic pricing is receiving a lot of attention, given the media storm surrounding the recent sale of Taylor Swift’s concert tickets. Problems with pre-sale pricing and ticket availability for Swift’s “Eras” tour frustrated fans and prompted politicians to cry foul concerning Ticketmaster’s sales strategy. 

Alexandria Ocasio-Cortez was among the first to assert that Ticketmaster’s supposed monopoly status should be “reined in,” while other members of Congress such as Amy Klobuchar, Ilhan Omar, Richard Blumenthal, David Cicilline, and Bill Pascrell also felt it necessary to denounce the dominant status of Ticketmaster. 

This isn’t the first concert that had fans demonizing Ticketmaster for its dynamic pricing policy, and it isn’t the first time government officials have vowed to intervene in the live entertainment sector. In light of recent events, let’s clarify what dynamic pricing is and why it is a worthwhile strategy for firms to pursue. Politicians should refrain from playing referee, particularly since a firm’s reputation, rather than regulation, plays a greater role in remedying consumer concerns. In fact, in under a month’s time, Ticketmaster has not only apologized to fans for the debacle but has already started the process of making amends by announcing that Verified Fans will have a second chance to purchase tickets for the coveted concert. That response rate is unheard of in the halls of Congress.

Why Demand Levels Should Determine Prices

Dynamic pricing has been around in some form or other for centuries. It is a pricing policy that allows for variations rather than one that is fixed. During the 1950s, however, price adjustments began to be harnessed strategically in relation to demand conditions. Nobel Laureate William Vickrey proposed that prices for public transport systems should increase at peak periods to lessen congestion. His discovery that a change in price could influence use and consumption patterns by either stimulating or suppressing demand appealed to the interests of the private sector. 

Under a dynamic pricing policy, prices shift according to market conditions, consumer interest, and competitive pressures. Thanks to technological advancements that can assess changes in these factors, companies can better determine demand levels, and pivot their prices, in near-real time.
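The mechanics of adjusting a price to market conditions in near-real time can be sketched as a toy model. The scarcity multiplier, demand index, and price floor below are invented assumptions for illustration, not any ticketer’s actual formula.

```python
# Toy dynamic-pricing sketch: the price moves with remaining supply and an
# observed demand signal. All numbers here are invented for illustration --
# this is not Ticketmaster's (or anyone's) actual algorithm.

def dynamic_price(base_price: float, seats_left: int, capacity: int,
                  demand_index: float) -> float:
    """Scale a base price by scarcity and a demand signal.

    demand_index: 1.0 = typical interest; >1 = surging, <1 = weak.
    """
    scarcity = 1 + (1 - seats_left / capacity)     # 1.0 (all seats) .. 2.0 (sold out)
    price = base_price * scarcity * demand_index
    return round(max(price, 0.5 * base_price), 2)  # floor helps clear unsold seats

# Early on-sale: plenty of seats, normal demand.
print(dynamic_price(100, seats_left=900, capacity=1000, demand_index=1.0))  # 110.0
# Last minute: few seats, surging demand -- a small fortune.
print(dynamic_price(100, seats_left=20, capacity=1000, demand_index=1.8))   # 356.4
# Last minute: many unsold seats, weak demand -- a steal.
print(dynamic_price(100, seats_left=700, capacity=1000, demand_index=0.4))  # 52.0
```

The three calls mirror the article’s point: the same seat can be cheap or expensive depending purely on when demand is measured.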

With dynamic pricing, last-minute tickets to a show can either be a steal if there are unsold seats, or can cost a small fortune if those seats remain in high demand. A TikTok influencer demonstrated this by spending $10,000 on two tickets to a Harry Styles concert. 

Dynamic pricing happens all around us, and anyone who has rushed to a restaurant to take advantage of happy hour deals knows all too well how crucial it is that their server input the order before the hour is up. Those who prefer to dine late must forgo the happy-hour price perks. This illustrates an important benefit of dynamic pricing: it furthers opportunities for price discrimination. Despite the negative connotation, price discrimination can be a strategic move. Different markets are charged different prices for the same offering, as in the classic example of granting student or senior discounts for movie tickets when other ticket buyers (seeing the same film in the same theater at the same time) pay full-price. 

Why Both Companies and Customers Leverage Dynamic Pricing

Another important aspect of a flexible pricing approach is that it can create opportunities for cross-subsidization within a firm. Charging a higher price to a market that is willing and able to pay allows a firm to offer the product at a lower price to a market with limited purchasing power. Price differentials and price adjustments should be harnessed in a dynamic and interconnected marketplace, and indeed this is common practice.

Price adjustments occur not only across the globe, but also across the street. Retailers like Target will adjust their in-store and online prices in relation to local economic factors, and its savviest shoppers know to adjust their preferred store zip codes and clear their caches to take advantage of price match policies when they are in their favor. Just as technology has enabled firms to track trends and pivot prices, it has also enabled consumers to compare prices in real time, submit requests for returns, and vocalize concerns.

Prices can rise or fall under a dynamic pricing policy, and such an approach works best if the perception of consumer surplus is kept intact, meaning consumers believe they are receiving something of greater value as compared to the price. 
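The notion of consumer surplus above can be made concrete: it is simply the buyer’s willingness to pay minus the price actually charged. The function name and figures below are illustrative assumptions, not data from the Ticketmaster episode.

```python
def consumer_surplus(willingness_to_pay, price):
    """Consumer surplus: the buyer's perceived value minus the price paid.

    A positive result means the buyer feels they got a deal; a negative
    result means they would rationally walk away. (Hypothetical example.)
    """
    return willingness_to_pay - price

# A fan who values the show at $250 and pays $180 keeps $70 of surplus.
print(consumer_surplus(250, 180))  # 70
# Priced above the fan's valuation, surplus turns negative and no sale occurs.
print(consumer_surplus(250, 400))  # -150
```

This is why the article stresses perception: dynamic pricing holds up only while buyers believe the value delivered still exceeds the price asked.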

Why The Consumer Remains King In A Free Market

When done well, dynamic pricing adapts to consumers; when done poorly, it is seen as taking advantage of them. Yet it is important to bear in mind that the consumer is never truly captive. If a price is too high, because demand is too great or supply is too scarce, consumers are not forced to buy. For this reason, businesses should take care regarding consumer interests rather than simply charging what they can when they can.

Ticketmaster has the right to charge what it wants, as it has assumed the rights to the seats at the venue where Taylor Swift performs. And Swifties have the right to refuse purchasing those seats if the show is not worth it to them. Moreover, Taylor Swift has the right to establish her own ticket distribution system if she is unhappy with Ticketmaster’s functionality as an intermediary between her shows and her fanbase.

Over 14 million users flocked to Ticketmaster’s website to make a purchase during the pre-sale release and, according to Ticketmaster, to meet that level of demand “Taylor would need to perform over 900 stadium shows (almost 20x the number of shows she is doing)…that’s a stadium show every single night for the next 2.5 years.” 

It seems it is not Ticketmaster pushing up prices, but rather fans’ demand. 

As consumers, we must remember that in a market-based system, consumers determine what is of value, what is demanded, and what is consumed. To maintain such authority, we would be wise to use our wallets, rather than Washington cronies, to curtail costs.

Originally published here

Biden Administration’s abandonment of Section 230 undermines tech innovation and will harm and disadvantage consumers

Washington, D.C. – Yesterday, lawyers from the Biden Administration filed an amicus brief in a Supreme Court case, taking a position that would undermine future American tech innovation and inevitably harm and disadvantage online consumers.

In Gonzalez v. Google, the Supreme Court is asked to decide whether YouTube can be held liable for content on its platform, and more specifically for its algorithms. The plaintiffs argue that the algorithm that recommends content based on user preference is not covered by Section 230 of the Communications Decency Act and other legislation, and that Google (YouTube’s parent company) can therefore be held liable.

Such a ruling would have a sweeping impact on Internet freedom of speech and tech innovation based here in the U.S.

Yaël Ossowski, deputy director of the consumer advocacy group Consumer Choice Center, responds:

“In a global race to defend freedom and innovation online, it’s beyond disappointing to see the Biden Administration take a position that undermines Section 230, American digital entrepreneurship, and freedom of speech online,” said Ossowski.

“China and the EU are promoting and subsidizing their tech companies and future start-ups massively while our own officials are trying to kneecap them, whether by antitrust litigation by the Federal Trade Commission, Senate bills to break up tech firms, or general hostility to the growth and innovation that Section 230 has afforded to the benefit of consumers,” he said.

“The Biden Administration’s abandonment of Section 230 is concerning and puts much at risk for consumers online.

“The ability of digital entrepreneurs to offer unique and tailored services to consumers who enjoy them would be severely constrained if a Supreme Court ruling upends our modern understanding of the legal system’s protection of platforms online. Added to that, it threatens free speech on the Internet if platforms have an undue obligation to perform content moderation so as to avoid any and all legal liabilities posed by user-generated content.

“For the sake of consumers and American innovation, we hope that an eventual ruling protects the core of our freedom of speech and association online, and protects citizens’ choices to use the services they want. Thus far, the Biden Administration’s views leave us concerned that this is in peril,” he concluded.

Learn more about the Consumer Choice Center’s campaigns for smart policies on tech innovation.

Our Well-Timed Warning on FTX, Bankman-Fried and Future Cryptocurrency Regulations

This letter was sent to Senators, Congressmen of relevant committees, and regulators in the Consumer Financial Protection Bureau, Securities and Exchange Commission, and Commodity Futures Trading Commission in the aftermath of the FTX collapse. The previous letter can be viewed here.

Referring to the previous letter we sent to lawmakers and regulators on October 26, 2022, warning of the influence and inherent financial risks posed by then-FTX CEO Sam Bankman-Fried and his related companies, here we offer our thoughts on what you should consider for future regulation of digital assets, cryptocurrencies, and the platforms that use them.

As you will have read by now, the alleged criminal actions of Mr. Bankman-Fried and his affiliated companies (FTX International, FTX Europe, Alameda Research, etc.), have led to several bankruptcy filings, will likely lead to expensive lawsuits, and, without a doubt, will invite investigations and questions from your colleagues and committees in Congress. All of these are necessary and prudent.

The halting of withdrawals for billions of dollars of customer funds, the intermingling of company and customer assets, the collateralization of new crypto tokens backed by nothing, and the unsustainable leverage conspired to create one of the most calamitous events in recent financial history. It is a stain on the reputation of creative entrepreneurs and builders providing value in the cryptocurrency space. This is made all the more troubling by the influence of this company and its leaders in our nation’s capital.

The significant influence of Mr. Bankman-Fried and his companies among Congressional members and staff, donations to political campaigns, and the close relationship with regulators present a damning case of what happens when politically connected firms aim to control and shape legislation without input from consumers and citizens.

While decision-makers were eager to meet with Mr. Bankman-Fried and mirror his biased suggestions on cryptocurrency policy in legislation and enforcement actions, consumer groups like ours sounded the alarm about the conflicts of interest detrimental to sound and principled policy for the millions of Americans who use and invest in cryptocurrencies like Bitcoin.

The Consumer Choice Center began writing publicly about the conflicts of interest and risky financial dealings of these companies and Mr. Bankman-Fried in September 2022, and how they would pose a considerable risk both to the legitimate cryptocurrency industry and to the savings and investments of millions of consumers. We remain steadfast in our conviction.

That said, as consumer advocates, we remain optimistic about the promises of Bitcoin, its cryptocurrency offspring, and the innovative blockchains, decentralized technologies, and crypto services that have evolved around them.

Users of decentralized technologies, however, do not need an industry-led approach to regulation. Regulations exist to set the rules of the game, not to pick the game’s winners. The previous approach gave cover to FTX and its affiliated companies and has led to the disaster we see today.

The main caution we offer, therefore, is that many proposed regulations aim to entrench existing industry players and lock out innovative upstarts, while imposing the same restrictive rules that led many people to explore cryptocurrencies in the first place.

As we have stated, if rules on crypto and its customers help solidify the financial portfolios, positions, and stock prices of only a select few companies, this will drive innovation away from our shores.

The bad actions of this particular company, while shocking and injurious to many, reflect the mistakes and alleged crimes of those involved. They do not, in any certain terms, condemn the wonderful possibilities of a crypto future nor the millions of consumers who responsibly use these technologies.

The frauds allegedly perpetrated are not too far removed from those of regulated financial firms that have deservedly reaped the consequences of misbehavior, either by the market or law enforcement. That the end product was cryptocurrencies instead of credit default swaps or mortgages makes no difference.

Fraud is fraud and remains illegal no matter what product a company is selling.

This stands in stark contrast to the system of fractional-reserve banking that now underlies much of the American financial system and creates incentives for malfeasance, aided by loose monetary policy.

We should not mistake the ills of the current system for those of cryptographically secure digital assets.

With that in mind, rather than adopting the approaches of several self-interested industry leaders, consumers deserve regulation of cryptocurrencies and digital firms that enforces existing rules against fraud (such as “rug pulls”), remains technologically neutral, offers reasonable and minimal taxation, and provides legal transparency. Punishing fraud, abuse, insider trading, and self-dealing should remain the focus.

As consumer advocates, we promote the principle of “self-custody” for crypto consumers, meaning that consumers hold the private keys to their own digital assets. This is a cryptographically secure method of controlling cryptocurrencies as originally intended, and one that should be an industry standard. It is also the strongest method by which exchanges, brokerages, and those who regulate them can protect consumers.

The aim of cryptographic digital assets and decentralized digital cash, since the founding of Bitcoin in 2008 by Satoshi Nakamoto, has centered on creating permissionless, peer-to-peer transactions offering a final settlement in a decentralized manner. That should be the guiding principle rather than temporary self-interest.

The whims of a select few industry players, however successful they may be, cannot be the guiding light for the future of decentralized digital money, as the saga of FTX has proven.

The Consumer Choice Center created a policy primer on Principles for Smart Cryptocurrency Regulations in September 2021 to highlight these concerns and we hope you will apply them.

We remain at your disposal for any further exploration of how best to craft rules, guidance, and regulation on the future of cryptocurrencies in our country, so that all society may benefit.

Sincerely yours,

Yaël Ossowski

Deputy Director

Consumer Choice Center

Aleksandar Kokotovic

Crypto Fellow

Consumer Choice Center
