In race to dominate AI, US researchers debate collaboration with China

More oversight is needed of cross-border research on artificial intelligence, experts say. But collaboration on beneficial AI is essential too.

By Ann Scott Tyson, Staff writer
Seattle

From a corner office on the top floor of the University of Washington's Physics and Astronomy Tower, data scientist Bernease Herman looks out on Seattle's Portage Bay as it flows toward the city's high-tech hub and Amazon headquarters.

A former Amazon employee, Ms. Herman decided to join dozens of prominent artificial intelligence (AI) researchers last month in urging Amazon to stop selling a facial recognition technology to law enforcement. She was troubled by a study by a Massachusetts Institute of Technology researcher that found Amazon Rekognition is biased, being less accurate in identifying women and people of color, and risks being misused by police to infringe on civil liberties.

After Amazon disputed the study's findings, Ms. Herman felt compelled to join other AI researchers in speaking out. Raising concerns about possible risks from AI "is a primary part of all of our roles," says Ms. Herman, who researches bias in the AI field of machine learning at the UW's eScience Institute.

"We are the first line of defense," she says, underscoring AI researchers' sense of responsibility to weigh AI advances in light of the public good.

Across the Pacific Ocean in China, in contrast, debate over topics such as Chinese security forces' use of facial recognition technology to target minority groups remains "completely off limits" for AI scientists, says Jeffrey Ding, the lead China researcher at the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute.

That's not to say there aren't signs of defiance. In March, for example, China's overworked software developers made use of GitHub, an encrypted code-sharing platform owned by Microsoft, to demand relief from their grueling work regimen known as "996": 9 a.m. to 9 p.m., six days a week. The protest went viral. Dozens of Microsoft employees rallied in support, issuing a petition urging their company to keep the platform open even if it came under pressure from Beijing. (China's government is reluctant to censor GitHub, however, because of the critical code-sharing service the platform provides.)

"There is more dissent than we can see on the surface level, and sometimes it bubbles up," Mr. Ding says. Still, unlike in the United States, in China "there is no robust civil society pushing very strongly on this," he says.

As the U.S. and China forge ahead as world powerhouses in the development and application of AI, the cautionary voices of researchers, and their choices about collaboration, could hold the key to promoting beneficial cooperation while preventing malicious or dangerous uses of the revolutionary and often unwieldy new technology, AI experts say.

In turn, their ability to discern between constructive and harmful AI sharing could help prevent a widening technological schism between the U.S. and China that, if allowed to grow, could spread globally as nations are forced to decide whether to align with the world's leading democracy or its most populous authoritarian state.

High-tech battleground

AI collaboration between the two tech leaders is coming under scrutiny in Congress and elsewhere, as Washington and Beijing view one another increasingly as strategic competitors, casting the race for AI as a critical battleground. Both countries enjoy unique strengths. The U.S. leads in research talent, critical hardware, and AI companies. China has accumulated far more of the data needed to fuel AI and seeks to lead the world in AI by 2030.

China's military buildup and intensifying political repression under President Xi Jinping have increased concern in Washington. "Artificial intelligence as a technology presents enormous economic benefits but also potentially enables military capabilities and ... surveillance," says Elsa Kania, an expert on Chinese military technology at the Center for a New American Security, a D.C. think tank. "It's clear there are a number of ethical and security concerns that come into play when we are talking about research collaborations."

U.S. universities and companies have collaborated, sometimes unknowingly, with Chinese scholars who are actually military officers or affiliated with Chinese military universities. "There is a shockingly small amount of due diligence and oversight," says Alex Joske, a researcher with the International Cyber Policy Center of the Australian Strategic Policy Institute. China has sent approximately 500 military scientists to U.S. universities since 2007, an estimate based on analysis of peer-reviewed publications co-authored by China's People's Liberation Army (PLA) scientists and overseas scientists, Mr. Joske says.

Last month, reports surfaced that academics at Microsoft Research Asia in Beijing had co-written papers with researchers affiliated with China's National University of Defense Technology on AI methods that can be used for surveillance. Over two decades, Microsoft Research Asia has trained hundreds of top Chinese IT professionals and academics and counts 7,000 alumni, many in the field of AI.

While analysts say most research collaboration doesn't have specialized military applicability, they agree that U.S. researchers should avoid working with PLA scientists.

"Anything involving the Chinese military should be a bright red line," Ms. Kania says.

Risks for citizens

Another area of concern is China's use of artificial intelligence in surveillance, targeted propaganda, and enhanced censorship. For example, China's nearly ubiquitous WeChat app last year began using an AI technology called optical character recognition to filter and delete images containing sensitive words, stifling a method that Chinese netizens used to evade censorship.

"Without artificial intelligence, you need a big army of human censors to identify and delete," says Sarah Cook, a senior research analyst for East Asia at Freedom House, an independent watchdog organization that promotes democracy. "This is a way to refine censorship and cut off ways netizens have been able to circumvent censorship of keywords, and this is much cheaper, too."

Given AI's potential for military, surveillance, and censorship use, "U.S. companies and researchers need to be having uncomfortable conversations internally: What is the nature of the technology and the field? How do we have these conversations without cutting off collaboration?" says Samm Sacks, an expert on communication technology policy in China at New America, a nonpartisan think tank in Washington.

Indeed, Ms. Sacks and other AI experts note that amid concerns over malicious AI uses, many examples of benign and positive AI cooperation are overlooked.

"What doesn't get reported are AI researchers in the U.S. and China working on collaborative projects that are beneficial to society, for example on the health care front," says Baobao Zhang, a research affiliate at the Center for the Governance of AI and a doctoral candidate in political science at Yale University. One joint project used AI to help diagnose illness in children, including toddlers, according to a study published this year.

Food security is another area of fruitful AI cooperation. For example, Microsoft, Intel, and the Chinese tech giant Tencent last year took part in a cucumber-growing contest, exploring how AI could raise greenhouse productivity and advance indoor farming.

Oversight, and a welcome mat

As U.S. policymakers look for tools to prevent the leakage of sensitive AI know-how, they face unique challenges in regulating AI and other related emerging technologies.

Congress has called for the modernization of U.S. export controls that originated during the Cold War and the strengthening of oversight of technology transfer to China. Yet this is challenging due to the two countries' deep economic connections and the transnational and commercial nature of most AI research and innovation. Given this "technological entanglement," Ms. Kania says that blunt tools that risk cutting the two-way flow of expertise are counterproductive. "Applying export controls to algorithms is antithetical."

More effective tools to prevent unwanted AI transfer include improving cybersecurity protections, using expanded foreign investment rules to block risky foreign acquisitions of critical technology, and taking legal action against technology theft. Universities, moreover, should enforce regulations on foreign nationals conducting research in sensitive areas, AI experts recommend.

But stark responses, such as denying U.S. visas to Chinese scholars and researchers, are likely to backfire, they warn, especially given that by some measures more than half of the top AI talent pool in the U.S. is made up of foreign nationals. "News about Chinese students and researchers whose visas are denied has a chilling effect on the U.S. ability to draw top AI talent," says Ms. Zhang.

Instead, the U.S. should leverage its strengths by boosting investments in AI research while expanding innovation and openness as well as diversity and inclusion. This includes taking steps to ensure talent from China stays in the U.S. by welcoming students and scholars and pushing back against organizations such as Beijing-backed student associations that experts say surveil and coerce Chinese academics in the U.S. Indeed, the overworked Chinese tech workers who turned to GitHub to evade China's censorship "firewall" would clearly appreciate both more freedom of expression and fewer working hours.

"At a time when China is becoming more repressive under Xi Jinping, there is an opportunity for welcoming Chinese students, scientists, and entrepreneurs who don't feel they can pursue research or build companies with as much freedom," says Ms. Kania. "It's a tragedy for China but can be framed as an opportunity for the U.S. to welcome some of the more free-thinking entrepreneurs."

Working on machine-learning models at her UW office tower, Ms. Herman emphasizes that broad discretion in research and knowledge of who the end users will be lie at the heart of protecting U.S. AI technology and preventing misuse.

She points to a recent example: When the San Francisco nonprofit OpenAI developed an AI system capable of writing articles on any subject based on a brief prompt, it broke with its usual practice and withheld the full model so as not to unleash a capability that could flood the internet with fake news.

"It's certainly something that academics think about a lot: what's the use of their technology," says Ms. Herman. "It's a pretty painstaking decision process."