
Author: Satya Marar

Lawsuit Against Google’s Algorithms Could End the Internet As We Know It

A lawsuit against Google seeks to hold tech giants and online media platforms liable for their algorithms’ recommendations of third-party content in the name of combating terrorism. A victory against Google wouldn’t make us safer, but it could drastically undermine the very functioning of the internet itself.

The Supreme Court case is Gonzalez v. Google. The plaintiffs are relatives of Nohemi Gonzalez, an American tragically killed in an ISIS terror attack. They are suing Google, YouTube’s parent company, for not doing enough to block ISIS from using the site to host recruitment videos while its automated algorithms recommended such content to users. They rely on antiterrorism laws allowing damages to be claimed from “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”

If this seems like a stretch, that’s because it is. It’s unclear whether videos hosted on YouTube directly led to any terror attack or whether any other influences were primarily responsible for radicalizing the perpetrators. Google already has policies against terrorist content and employs a moderation team to identify and remove it, although the process isn’t always immediate. Automated recommendations typically work by suggesting content similar to what users have viewed since it’s most likely to be interesting and relevant to them on a website that hosts millions of videos. 
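To make the mechanism concrete, here is a toy sketch of similarity-based recommendation — purely illustrative, not YouTube’s actual system; all titles, tags and function names are invented. It ranks candidate videos by how many tags they share with what a user has already watched:

```python
# Hypothetical tag-overlap recommender; data and names are invented for illustration.

def recommend(watched_tags, catalog, top_n=3):
    """Rank catalog videos by tag overlap with the user's viewing history."""
    def overlap(video_tags):
        return len(set(watched_tags) & set(video_tags))
    # Sort all candidates by overlap, highest first, and drop zero-overlap items.
    ranked = sorted(catalog.items(), key=lambda kv: overlap(kv[1]), reverse=True)
    return [title for title, tags in ranked[:top_n] if overlap(tags) > 0]

catalog = {
    "Guitar basics": ["music", "guitar", "tutorial"],
    "Jazz history": ["music", "history"],
    "Bread baking": ["cooking", "tutorial"],
}
print(recommend(["music", "guitar"], catalog))  # → ['Guitar basics', 'Jazz history']
```

The point of the sketch is that the logic is content-neutral: it surfaces whatever resembles a user’s prior viewing, which is exactly why holding the platform liable for the output of such a ranking amounts to holding it liable for the content itself.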

Platforms are also shielded from liability for what their users post and are even permitted to engage in good-faith moderation, curation and filtration of third-party content without being branded publishers of it. This is thanks to Section 230, the law that has allowed for the rapid expansion of a free and open internet where millions of people a second can express themselves and interact in real time without tech giants having to monitor and vet everything they say. A lawsuit victory against Google would narrow the scope of Section 230 and the functionality of algorithms while forcing platforms to censor or police more.

Section 230 ensures that Google won’t be held liable for merely hosting user-submitted terrorist propaganda before it was identified and taken down. However, the proposition that these protections extend to algorithms that recommend terrorist content remains untested in court. But there’s no reason why they shouldn’t. The sheer volume of content hosted on platforms like YouTube means that automated algorithms for sorting, ranking and highlighting content in ways helpful to users are essential to the platforms’ functionality. They’re as important to user experience as hosting the content itself. 

If platforms are held liable for their algorithms’ recommendations, they’d effectively be liable for third-party content all the time and may need to stop using algorithmic recommendations altogether to avoid litigation. This would mean an inferior consumer experience that makes it harder for us to find information and content relevant to us as individuals.

It would also mean more “shadow-banning” and censorship of controversial content, especially when it comes to human rights activists in countries with abusive governments, peaceful albeit fiery preachers of all faiths, or violent filmmakers whose videos have nothing to do with terrorism. Since it’s impossible to vet each submitted video for terrorism links even with a large moderation staff, tooling algorithms to block content that could merely be terrorist propaganda may become necessary. 
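The over-blocking risk can be illustrated with a toy keyword filter — entirely hypothetical, not any platform’s real moderation logic, with invented titles and terms. Broadening a blocklist to catch more propaganda inevitably sweeps in the documentary and innocuous content described above:

```python
# Toy keyword filter; all titles and blocklist terms are invented for illustration.

BLOCKLIST = {"attack", "bomb", "militant"}  # broad terms, not real propaganda markers

def is_blocked(title):
    """Flag a video if any blocklisted word appears in its title."""
    return any(word in title.lower().split() for word in BLOCKLIST)

videos = [
    "Documentary: militant crackdown on protesters",  # activist journalism
    "How to bomb a job interview (comedy)",           # harmless slang
    "Cat compilation",
]
flagged = [v for v in videos if is_blocked(v)]
print(flagged)  # both flagged items are false positives
```

Real moderation systems are far more sophisticated, but the trade-off is the same: any automated rule aggressive enough to catch most terrorist propaganda will also suppress lawful speech that merely resembles it.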

Conservative free speech advocates who oppose big-tech censorship should be worried. When YouTube cracked down on violent content in 2007, it led to activists exposing human rights abuse by Middle Eastern governments being de-platformed. Things will get even worse if platforms are pressured to take things further.

Holding platforms liable like this is unnecessary, even if taking down more extremist content would reduce radicalization. Laws like the Digital Millennium Copyright Act provide a notice-and-takedown process for specific illegal content, such as copyright infringement. This approach is limited to user-submitted content already identified as illegal and would reduce pressure on platforms to remove more content in general.

Combating terrorism and holding big tech accountable for genuine wrongdoing shouldn’t involve precedents or radical laws that make the internet less free and useful for us all.

Originally published here

Infantilizing teens won’t protect them online, but it could threaten tech freedom

It’s for the children, they say.

A new Californian law promises to protect minors from harms posed by online platforms like Instagram, YouTube and TikTok. Instead, though, it threatens to increase censorship of controversial and politically sensitive speech, while slamming start-ups with immense costs and compromising the privacy of those it’s meant to protect.

Set to take effect in 2024, the California Age-Appropriate Design Code Act doesn’t specify tangible harms it’s meant to shield minors from. Nor does it empower parents with oversight over what their kids see online. Instead, it will use the threat of exorbitant fines to force big and small firms alike to identify and “mitigate harmful or potentially harmful” speech to minors, while requiring them to tool their algorithms to “prioritize” content that’s in their “best interests” and supports their “well-being.”

The inherently subjective nature of these terms means that companies will be forced to censor content based on what Big Brother or Big Bureaucracy thinks or says is harmful, while promoting content and speech they approve of. Companies also face lawsuits if the attorney general isn’t happy with how they enforce their own moderation standards. This could easily be weaponized by partisan AGs from either party to score political points by signaling the kinds of content they deem to be inappropriate for minors. In this respect, the law could encourage the kind of collusion between tech giants and the government to suppress or promote viewpoints or agendas that violates the First Amendment.

While the law’s intention of protecting minors from age-inappropriate content is commendable, it has a critical flaw. It classifies everyone under 18 as a child, even minors who are nearly old enough to vote, get conscripted or serve on juries. This overbroad definition and the threat of billions in fines means that regardless of what politicians or regulators choose to take action on, companies are still likely to err on the side of censorship when it comes to age-appropriate content. That will likely mean shielding minors from important resources, including research on controversial subjects they might find necessary for school or college projects.

It’s also hard to see how several of the bill’s features, including a ban on enabling auto-play for all videos shown to minors, have anything to do with protecting kids rather than merely undermining the functionality of online entertainment platforms.

But perhaps the Act’s worst features are those around privacy. On one hand, it requires extensive paperwork, including privacy impact assessments and subjective “harm” assessments around new website features and how they could impact minors. This will lead to increased costs for start-ups and delays in bringing new innovations to the market for all users.

The law also requires stricter identity and age verification requirements for minors. This would likely involve collecting and storing sensitive identity information and documentation. Cyberattacks have compromised the servers of even the world’s top tech giants and governments, exposing millions of users’ sensitive personal data to hackers. Against that backdrop, forcing businesses, regardless of size and resources, to collect and store such data poses a massive privacy risk to the very people the law claims to protect. These businesses, which differ in data protection standards and capabilities, would become lucrative targets for hackers.

News stories, like Balenciaga’s recent advertising campaigns, apparently showing children with teddy bears in bondage gear, and internal studies linking Instagram use to self-harm and self-image issues for teens, rightly raise concerns about protecting minors online.

But targeted laws around these concrete problems and harms accompanied by better education to empower minors in navigating the online world would be far preferable and beneficial for them than radical legislation that infantilizes teenagers, suppresses speech, compromises privacy, and risks making the internet less functional for everyone.

Originally published here

An Overzealous FTC Isn’t Good for Consumers or Startups

Last month, Facebook’s parent Meta Platforms asked an American judge to dismiss the Federal Trade Commission (FTC)’s lawsuit attempting to block Meta’s proposed acquisition of virtual content producer Within Unlimited, maker of the Supernatural virtual reality fitness app. The lawsuit makes the tenuous, speculative claim that since Meta already owns many VR apps, including movement-based ones like Beat Saber that compete with Supernatural for users, a “monopoly” will “tend to be created” and competition and consumers will be worse off if the deal proceeds. Never mind that Supernatural faces competition from more squarely fitness-focused VR apps that Meta doesn’t own, like Liteboxer and FitXR, as well as non-VR fitness apps like those offered by Apple and Peloton.

It’s the latest in the FTC’s many efforts, under current chairperson Lina Khan, to more aggressively contest tech acquisitions on the basis that tech giants have too much power and influence, even where harm to consumers is spurious or non-existent. Although large tech giants like Meta, Google and Amazon may indeed be guilty of wrongdoings that warrant legal sanction, the stifling of legitimate business deals by unelected bureaucrats will only harm consumers and the viability of start-ups by deterring competition and innovation in the cutthroat, investment-intensive tech world.

Since the 1970s, antitrust enforcement has focused on whether a business practice actually hurts consumers, rather than harming their competitors or some other stakeholder. After all, elected officials are capable of passing laws that target concrete harms corporations inflict on workers and the public. And private businesses shouldn’t expect protection from cutthroat competition since it’s a consequence of doing business. Consumers benefit from companies having to deliver new, better or cheaper products to attract and retain customers. So long as a firm doesn’t use its position to harm consumers by restricting output relative to prices, there’s no reason why antitrust regulators like the FTC should stifle its expansion. Especially when that expansion benefits consumers.

This is especially true for tech. Start-ups depend on millions in investment to develop and deploy their products. Investors value these firms based not only on the viability of their products, but on the firm’s potential resale value. Larger firms also often acquire smaller ones to apply their resources, existing expertise and economies of scale to further develop their ideas or to expand them to more users.

Making mergers and acquisitions more expensive, without strong evidence they’ll hurt consumers, makes it tougher for start-ups to attract the capital they need and will only deter innovators from striking out on their own or developing ideas that could improve our lives in an environment where 90% of start-ups eventually fail and 58% expect to be acquired.

It doesn’t matter that the FTC’s merger challenges may fail in court or even before their own internal administrative judges, including recently under chair Khan. The risk and cost of lawsuits themselves deter investment and beneficial deals. Especially given the uncertainty posed by incorporating vague, amorphous concepts like “fairness” into antitrust analysis that could lead to arbitrary decisions inconsistent with the rule of law. As noted by the late Supreme Court Justice Stewart, the only consistency in antitrust cases when there’s no clear guiding principle like the consumer welfare standard is that “the government always wins.”

Conversely, opponents of the “consumer welfare” standard, including Khan, argue that it fails to prevent the concentration of economic and political power. However, this prioritizes the speculative harm of a firm growing too big over the real harm of giving governments and regulators the power to act for political ends, or for the ends of those lobbying them.

Former presidents Johnson and Nixon both used threats of antitrust enforcement to coerce media outlets into favorably covering their governments. And it’s no secret or surprise that the FTC is frequently approached by firms urging it to deploy taxpayer resources towards antitrust suits against their competitors. More recently, Mark Zuckerberg, who has openly asked for politicians to tell him what content to censor, admitted that Facebook suppressed the Hunter Biden laptop story after government agency pressure. Conservatives should be especially conscious about encouraging agencies to target companies on vague or speculative grounds.

The FTC has the resources it needs to go after malicious actors that definitively harm consumers, as evinced by its multimillion-dollar settlement with extramarital affair website Ashley Madison over poor cybersecurity and data privacy practices and consumer deception, and other successful cases including chair Khan’s commendable pursuit of businesses that illegally collect and misuse children’s data. These are a far better use of the agency’s time and taxpayer funding than a zealous approach to blocking acquisitions and other legitimate business practices that could benefit consumers and that the innovative start-up ecosystem depends on.

Originally published here

Consumers Stand to Lose From Swipe Card Regulations

Politicians and a coalition of powerful retail giants are pushing bills intended to limit the fees that businesses pay when a customer buys things with a credit or debit card. 

Bipartisan Senate Amendment 6201 would require cards to allow businesses to route payments through networks unaffiliated with Visa or Mastercard, the nation’s two biggest card networks. It would also force issuers to make all payment networks available to retailers for routing transactions, regardless of which one the customer wants.

The amendment’s proponents argue that it will undermine Visa and Mastercard’s hold on the card sector, where they collectively hold 80 percent of the market share while providing some inflation relief to consumers by lowering transaction costs that businesses typically pass on to them. 

But the reality is murkier. The amendment doesn’t mention consumers, and there’s no guarantee we’ll face lower prices at the store or online. Instead, consumers stand to lose from fewer choices, less credit access, less secure transactions, and the evaporation of reward programs and other benefits.

Card interchange fees typically account for just 1 percent to 3 percent of the final price, even when passed on to consumers. Previous restrictions, like the 2010 debit card interchange fee cap, didn’t even lead to cost savings for most businesses. Smaller businesses often saw their costs increase. Only a small number of large retailers experienced lower costs. And 22 percent of retailers increased prices charged to the consumers, while 1 percent lowered prices. 

A lack of significant perceived benefits for most retailers could partly explain why Australia, where financial institutions have allowed merchants to choose the lowest-cost payment networks for routing customer transactions since 2018, has seen low take-up rates for this functionality.

Moreover, interchange fees help pay for various services, including rewards programs, interest-free periods, payment guarantees that spare merchants from worrying about a customer’s credit history, security protocols, and other banking services. Forcing card issuers to reduce the fees they can levy means cuts to these benefits and programs — reducing consumer choice while deterring fraud protection and cybersecurity innovation.

It’s not just the wealthy who rely on these benefits. Eighty-six percent of credit cardholders have active rewards cards, including 77 percent with a household income lower than $50,000.

Australia’s 2003 interchange fee restrictions resulted in fewer services, fewer benefits and higher annual fees. Americans could soon feel similar pain.

Cardholders are also likely to bear at least some of the estimated $5 billion cost of the technical infrastructure needed for issuers to comply with the amendment. Banks have also responded to previous interchange fee restrictions by hiking the fees that Americans are charged for opening and using checking accounts, with fewer banks offering no-fee accounts.

Lower-income Americans could be harshly affected by reduced access to credit. Credit unions that serve underbanked communities are already expressing concerns about the policy. Credit unions and community-owned banks also rely more on interchange fees to stay afloat than larger banks, which depend more on interest rates. Lower interchange fees could force these institutions to raise interest rates on credit cards, even though they serve a higher proportion of cardholders who don’t carry a balance or don’t pay penalty fees.

Congress can provide long-term inflation and cost-of-living relief by repealing costly, counterproductive regulations that benefit moneyed special interests at ordinary Americans’ expense. 

This makes more sense than a misguided payment system regulation that will lower choice, benefits and payment security for cardholders while putting pressure on banks and credit unions to hike interest rates and fees.

Originally published here

Regulators and Politicians Are Coming for the App Store

New legislation and an antitrust lawsuit threaten Apple’s monopoly over its App Store. The Department of Justice recently joined Fortnite developer Epic Games in appealing the latter’s failed 2020 lawsuit against Apple. Epic alleges that the tech giant’s exorbitant 30 percent commission on in-app transactions, which users are forced to conduct through the App Store, violates competition laws and harms consumers. 

Meanwhile, Congress could soon pass the Open App Markets Act (OAMA), a bipartisan bill that would stop app platforms from monopolizing payment systems for in-app transactions, restrict them from preferencing their own apps over competitors’ in-store, and require them to permit “sideloading” — the installation of unverified third-party apps outside of official app marketplaces.

This could give smartphone users access to more apps while increasing competition between developers. Lower entry barriers into the lucrative iPhone app market of more than 118 million Americans could spur innovation in apps that may not have been viable before. It would also encourage investment in developer start-ups and could lower prices for in-app purchases, including for emerging technologies like NFTs, by allowing developers to circumvent Apple’s commissions through alternative digital payment methods.

But is there more to the story?

Once locked in, users aren’t likely to abandon their iPhones for competitors over costly in-app fees or a sideloading ban. Conversely, they may see these as a trade-off for the better app vetting, data security and privacy controls that Apple promises. Android phones don’t levy 30 percent commissions on in-app transactions, but Google collects and monetizes user data for targeted advertising to a greater degree and with fewer controls.

Analysts note, however, that Apple’s own data collection and monetization also fuels its growing ad business, which is expected to reach $20 billion a year in revenue by 2025. Sideloading outside the App Store certainly threatens this segment of Apple’s business.

As for security, discerning adults can trust themselves in navigating less restrictive app marketplaces or in taking precautions if they sideload unverified apps. But the same can’t be said for vulnerable demographics like children or the elderly.

Though the OAMA permits smartphone operating systems to restrict or remove apps over legitimate security and privacy concerns, this may be difficult to implement regarding sideloading. A 2020 Nokia cybersecurity report blamed sideloading, which is already possible on Android devices, for 15 to 47 times higher rates of malware infection on those devices relative to iPhones.

In any case, Google and Apple’s alternative business models have resulted in a split smartphone market. Apple holds 59 percent of the American market, while the global market is dominated by Android, whose share is 72.2 percent. Both companies face competition from alternative smartphone manufacturers like Huawei and non-smartphone app marketplaces, including gaming consoles like the Xbox, which are exempt from the OAMA.

In a competitive market where users already choose what they value, is a legislative or court mandate limiting companies’ abilities to tailor platforms to their user base necessary or desirable? The ability to monetize the app marketplace funds capital-intensive investment in platform and app ecosystem development. Stymying this ability could harm consumers by discouraging innovation and competition between platforms.

And if Target or Walmart’s ability to “self-preference” by placing home brand products in prime locations relative to competing alternatives is an accepted business practice that isn’t seen as “anti-competitive,” then how is self-preferencing on digital platforms different? Consumers already discern between brands and often choose alternatives for reasons other than cost or product placement — whether online or at brick-and-mortar stores. Placing limitations on self-preferencing may result in stores or platforms levying higher prices from consumers elsewhere or offering fewer choices.

The OAMA is likely to yield greater choices in apps for Apple customers and greater opportunities for developers. But there could still be some adverse long-term consequences. At the very least, provisions that restrict self-preferencing should be reconsidered as they won’t meaningfully increase choices consumers already face.

Originally published here
