
Open-source is for everyone, even your adversaries

Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.

The report asserts that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American AI innovations, which could someday be used to harm the United States.

In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what they call “ChatBIT”.

The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool that gathers and processes intelligence and offers accurate, reliable information for operational decision-making.

While I’m doubtful that today’s chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon”), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.

Added to that, should it matter that an adversary – whether China, Russia, North Korea, or Iran – is having a poke around these models and may ultimately use them for some purpose we may not like?

The number of open-source LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOM, Falcon, and Mistral available for download. In fact, there are over one million open-source models available as of this writing. With some decent hardware, every global citizen can download these codebases and run them on their own computer.

With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta, which created the LLaMA model, intended to harm Meta’s reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.

Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers, who have already been caught attacking American infrastructure, data, and, yes, your credit history.

As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.

At the same time, however, we should not surrender our own critical thinking and principles just because it suits a convenient narrative.

Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.

Open-source matters

When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.

The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.

While much software is fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers create software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes that improve it for public benefit.

Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific needs. The code is intended to be shared and built upon not out of some altruistic belief, but to make it accessible to everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.

Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, ensuring interoperability and setting standards for all of our devices and technologies to talk to each other.

I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and Docker Hub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even leave comments or submit edits if I’ve found a better way for them to run.

Whether you know it or not, many of you run operating systems built on open-source foundations – Linux on countless servers and devices, or the Unix-derived underpinnings of macOS on your MacBook – and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).

This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency, Bitcoin, is also open-source, which has enabled thousands of copycat protocols that have revolutionized how we view money.

You know what else is open-source and available for everyone to use, modify, and build upon?

PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All languages, tools, and applications that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.

Open-source code is speech, and it is knowledge.

We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.

Open-source is for your friends, and your enemies

In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.

At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
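To make that concrete, here is a minimal sketch (in Python, using the Hugging Face transformers library) of what downloading and running an open model locally can look like. The specific model name is only an illustrative, openly licensed checkpoint chosen for this example; any open model on the Hub could be swapped in, hardware permitting.

from transformers import pipeline

# First run downloads the model weights from Hugging Face; after that,
# generation happens entirely on your own machine.
# "TinyLlama/TinyLlama-1.1B-Chat-v1.0" is just an illustrative open checkpoint.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

result = generator("Open-source software matters because", max_new_tokens=60)
print(result[0]["generated_text"])

That is essentially the whole barrier to entry: a pip install, a download, and enough compute to run inference.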

Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.

We know this can be useful to friends, but what about enemies?

As the AI race heats up among liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.

The fear that China will use open-source US models to create some kind of military application is seen as a clear and present danger by many political and national security researchers, as well as politicians.

A bipartisan group of US House lawmakers wants to impose export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.

If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.

Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.

Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than trying to outlaw or restrict the building of free and open software in the first place.

Limiting open-source threatens our own advancement

If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.

If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.

The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.

The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.

But these are separate issues from the ability of free people to create and share open-source code that we can all benefit from. In fact, if we want to continue our way of life and keep adding to global productivity and growth, we must defend open-source.

If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.

The EU’s AI Act will stifle innovation and won’t become a global standard

February 5, 2024 – On February 2, the European Union’s ambassadors greenlit the Artificial Intelligence Act (AI Act). Next week, the Internal Market and Civil Liberties committees will decide its fate, while the European Parliament is expected to cast its vote in plenary session in either March or April.

The European Commission addressed a plethora of criticism of the AI Act’s potential to stifle innovation in the EU by presenting an AI Innovation package for startups and SMEs. It includes EU investment in supercomputers, commitments for the Horizon Europe and Digital Europe programs to invest up to €4 billion through 2027, and the establishment of a new coordination body – the AI Office – within the European Commission.

Egle Markeviciute, Head of Digital and Innovation Policies at the Consumer Choice Center, responds:

“Innovation requires not only good science, cooperation between business and science, talent, regulatory predictability, and access to finance, but also one of the most motivating and special elements – room and tolerance for experimentation and risk. The AI Act is likely to stifle the private sector’s ability to innovate by shifting its focus to extensive compliance lists and allowing only ‘controlled innovation’ via regulatory sandboxes, which permit experimentation in a vacuum for up to six months,” said Markeviciute.

“Controlled innovation produces controlled results – or lack thereof. It seems that instead of leaving regulatory space for innovation, the EU once again focuses on compensating this loss in monetary form. There will never be enough money to compensate for freedom to act and freedom to innovate,” she added.

“The European Union’s AI Act will be considered a success only if it becomes a global standard. So far, it does not seem the world is planning on following in the EU’s footsteps.”

Yaël Ossowski, deputy director of the Consumer Choice Center, adds additional context:

“Despite optimistic belief in the ‘Brussels effect’, the AI Act has not yet resonated with the world. South Korea will focus on the G7 Hiroshima process instead of the AI Act. Singapore, the Philippines, and the United Kingdom have openly expressed concern that imperative AI regulations at this stage can stifle innovation. US President Biden issued an Executive Order on the use of AI back in October 2023, yet the US approach seems to be less restrictive and relies upon federal agency rules,” said Ossowski.

“Even China – a champion of state involvement in both individual and business practices – has yet to finalize its AI Law in 2024 and is unlikely to be strict about AI companies’ compliance, given its ambitions in the global AI race. In this context, we have to acknowledge that the EU has to adhere to already existing frameworks for AI regulation, not the other way around,” concluded Ossowski.

The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Ottawa, Washington, Brussels, Geneva, Lima, Brasilia, and other hotspots of regulation and inform and activate consumers to fight for #ConsumerChoice. Learn more at consumerchoicecenter.org.
