The global race to develop artificial intelligence is the most consequential contest since “the space race” between the United States and the Soviet Union. The development of these tools and this industry will have untold effects on future innovation and our way of life.

The White House will soon unveil its anticipated executive order on AI, which may include a commission to develop an "AI Bill of Rights" or even form a new federal regulatory agency. In either case, the government is playing catch-up with AI innovators and ethicists.

AI in a democratic society does not mean spinning up federal AI agencies staffed by whoever won the most recent election; it means a wide range of policies and rules made for the people, by the people, and responsive to the people.

AI has an almost unlimited potential to change the world. Understandably, this makes many people nervous, but we must resist handing over its future to the government at this early stage. After all, this is the same institution that has not cracked 30% in overall trust to “do the right thing most or all of the time” since 2007. The rules of the road can evolve from the people themselves, from innovators to consumers of AI and its byproducts.

Besides, does anyone really believe a government that is trying to wrap its regulatory mind around the business model and existence of Amazon Prime is prepared to govern artificial intelligence?

For an example of the rigor required to develop rules for AI in a free society, consider the recent research published by Anthropic, an Amazon-backed AI startup known for its Claude generative AI chatbot. Anthropic is developing what's known as "Constitutional AI," which treats the question of bias as a matter of transparency: the model is governed by a published list of moral commitments and ethical considerations.

If a user is puzzled by one of Claude’s outputs or limitations, he or she can look to the AI’s constitution for an explanation. It’s a self-contained experiment in liberalism.
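For readers curious how a written constitution can actually steer a chatbot, here is a minimal sketch in Python of the critique-and-revise loop Anthropic has described in its research. Everything named here, the CONSTITUTION list and the generate() stub, is an illustrative placeholder rather than Anthropic's actual principles or API, and in Anthropic's published work the loop is used to generate training data rather than run at answer time.

```python
# A minimal sketch of a constitution-driven critique-and-revise loop.
# The principles below and the generate() stub are illustrative placeholders,
# not Anthropic's actual constitution or API.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most honest and transparent.",
    "Choose the response least likely to be interpreted as legal advice.",
]

def generate(prompt: str) -> str:
    """Stand-in for a call to any language model."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revise(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to address this critique:\n{critique}\n\n"
            f"Original response:\n{draft}"
        )
    # The paper trail of principles is published, so a user can audit
    # why the final answer looks the way it does.
    return draft

if __name__ == "__main__":
    print(constitutional_revise("Should I represent myself in court?"))
```

The point of the design is exactly the transparency described above: the rules shaping the output are written down where anyone can read them, rather than buried in an unexplained training process.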

As any American knows, living in a functional constitutional democracy is as clarifying as it is frustrating. You have specific rights and implied rights under American law, and when they’re violated, you can take the matter to court. The rights we have are as frustrating to some as the ones we don’t: the right to keep and bear arms, for example, along with the absence of a clear constitutional right to healthcare.

Anthropic surveyed 1,094 people and broke them up into two response groups based on discernible patterns in their way of thinking about a handful of topics. There were many unifying beliefs about what AI should aim to do.

Most people (90% or more) agree that AI should not say racist or sexist things, AI shouldn’t cause harm to the user or anyone else, and AI should not be threatening or aggressive. There was also broad agreement (60%) that AI should not be programmed as an ordained minister — though with 23% in favor and 15% undecided, that leaves quite the opening in the AI space for someone to develop a fully functional priest chatbot. Just saying.

But even agreement can be deceiving. The yearslong national debate over critical race theory; diversity, equity, and inclusion; and "wokeness" stands as evidence that people don't really agree on what "racism" means. AI developers such as Anthropic will have to choose or create a definition that encompasses a broad view of "racism" and "sexism." We also know that the public does not even agree on what constitutes threatening speech.

The single most divisive statement, "the AI shouldn't be censored at all," shows how cautious consumers are about AI having any kind of programmed bias or set of prerogatives. With a close to 50/50 split on the question, we're a long way from a Congress that could be trusted to develop guardrails protecting consumers' speech and access to accurate information, much less a White House that could.

Anthropic categorizes the individual responses as the basis for its "public principles" and goes to great lengths to show how public preferences overlap with and diverge from its own. The White House and would-be regulators are not showing anywhere near this kind of commitment to public input.

When you go directly to the people, rather than through elected legislatures, you find out interesting things to inform policy. The public tends to focus on maximized results for AI queries, such as saying a response should be the "most" honest or the "most" balanced. Anthropic tends to value the opposite, asking the AI to avoid undesirable outcomes by requesting the "least" dishonest response or the one "least" likely to be interpreted as legal advice.
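To see the difference in phrasing concretely, here is an invented pair of principles in the two styles; neither line quotes Anthropic's constitution or the public's submissions:

```python
# Invented illustrations of the two phrasings; not quotations from either source.
public_style = "Choose the response that is the MOST honest and balanced."
anthropic_style = ("Choose the response that is the LEAST dishonest and the "
                   "least likely to be interpreted as legal advice.")
print(public_style)
print(anthropic_style)
```

The first phrasing asks the model to chase an ideal; the second asks it to steer away from a specific failure. That small difference in drafting produces very different guardrails.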

We all want AI to work for us, not against us. But what America needs to realize is that the natural discomfort with this emerging technology does not necessitate government action. The innovation is unfurling before our very eyes, and there will be natural checks on its evolution from competitors and consumers alike. Rather than rushing to impose a flawed regulatory model at the federal level, we should seek to enforce our existing laws where necessary and allow regulatory competition to follow the innovation rather than attempt to direct it.
