Can AI be 'democratic'? Race is on for who will define the technology's future.

An American AI company is working to promote "democratic" artificial intelligence. It's not yet clear what that means or how it might influence how the emerging technology is used.

By Erika Page, Staff writer

In January, OpenAI launched Stargate, a $500 billion investment in artificial intelligence infrastructure for the United States. On Wednesday, it announced a plan to bring this type of investment to other countries, too.

OpenAI, the company behind ChatGPT, says that by partnering with interested governments, it can spread what it calls "democratic AI" around the world. It views this as an antidote to the development of AI in authoritarian countries that might use it for surveillance or cyberattacks.

Yet the meaning of "democratic AI" is elusive. Artificial intelligence is being used for everything from personal assistants to national security, and experts say the models behind it are neither democratic nor authoritarian by nature. They merely reflect the data they are trained on. How AI affects politics worldwide will depend on who has a say in controlling the data and rules behind these tools, and how they are used.

OpenAI wants as many people as possible using AI, says Scott Timcke, senior research associate at Research ICT Africa and the University of Johannesburg's Centre for Social Change. "I don't necessarily get the sense [they are] thinking about mass participation at the level of design, or coding, or decision-making."

Those sorts of decisions are shaping how AI permeates society, from the social media algorithms that can influence political races to the chatbots transforming how students learn.

He says people should consider, "What is our collective control over how these big scientific instruments are used in everyday life?"

"A challenge ... about exporting values"

In a blog post announcing the new initiative, OpenAI for Countries, democratic AI is defined as artificial intelligence that "protects and incorporates long-standing democratic principles," such as freedom for people to choose how they use AI, limits on government control, and the free market.

Working with the U.S. government, OpenAI is now offering to invest in the AI infrastructure of countries wishing to partner. That means building new data centers, providing locally customized versions of ChatGPT, and launching national startup funds, while promising security controls in line with democratic values.

The Trump administration has been adamant about winning the AI race against China, which already has some of the leading firms in the field. Through the expansion of "friendly" AI in allied nations, OpenAI is becoming a major player in U.S. efforts to beat China and Russia in the technological race.

"The challenge of who will lead on AI is not just about exporting technology, it's about exporting the values that the technology upholds," wrote OpenAI CEO Sam Altman in a Washington Post op-ed last year.

While the project may prove attractive to some governments, it also raises concerns about building an AI ecosystem whose infrastructure is controlled by American interests.

Others wonder whether the technology can be as democratic as companies like OpenAI hope. One foundation of democracy is transparency, where people have access to information to help them understand the processes behind decision-making.

Many AI models are opaque, operating as "black boxes" whose inner workings are a mystery to most users. In some cases, these processes are concealed to protect intellectual property. And some algorithms are so complex that even developers don't understand exactly how the machines arrive at their results.

That can make it difficult to trust the output of these models or to hold anyone accountable when things go wrong.

How transparent should AI technology be?

Companies could choose to make the code behind the systems available to everyone.

While OpenAI, Google, and Anthropic do not share their models, other companies have chosen the open-source path. The Chinese DeepSeek-R1 model, released this January, has enabled many developers around the world to build small-scale, cheap AI models. Some see this as a way of democratizing the development of AI technology, making it more accessible to more people.

These tools can also bolster democratic participation. During Kenya's mass protests last year, demonstrators created chatbots to explain complex legislation in plain language to help their peers understand its impact.

Others worry that without the right regulations, making AI widely accessible may do more harm than good. They point to AI-generated disinformation campaigns sowing division and confusion. And given how quickly the technology advances, private companies are setting their own rules of conduct and doing so faster than regulators can keep up, much as happened with the development of the internet.

Just a handful of companies, such as OpenAI, Microsoft, Google, and Nvidia, control much of the critical hardware and software for AI鈥檚 current expansion. That has led to a push from researchers, nonprofits, and others for more public input and oversight.

The Collective Intelligence Project, which describes itself as a lab that designs "new governance models for transformative technology," is partnering with leading AI companies and governments wanting to "democratize" AI by bringing a broader range of voices into the conversation. They worked with Anthropic, maker of the chatbot Claude, to create a model trained on rules and values offered by 1,000 Americans from all walks of life, not just a group of software engineers.

Analysts also point out that many AI tools can be used to strengthen democracy, from digital identity documents to government service delivery.

"I don't think we need to be scared of AI," says Dr. Timcke, the research associate. "I think we need to be scared when there's just so much power concentration in AI. ... Who's controlling it? And do they have anyone overseeing them?"

Editor's note: This article, originally published May 10, was corrected on May 12 to give $500 billion as the planned size of the Stargate artificial intelligence project.