The Danger of Exaggerating China’s Technological Prowess


The U.S.-China relationship will be the great geopolitical rivalry of the early 21st century, and every facet of the competition will involve the two big powers’ capabilities in science and technology. Figures from across the political spectrum worry about a technology race with China, and many Americans fear that China has already surpassed us in such frontier technologies as artificial intelligence and 5G broadband communications. “China has stolen a march and is now leading in 5G,” then-Attorney General William Barr declared in a recent keynote speech at a Justice Department conference on China. Graham Allison of Harvard University warns that China “is currently on a trajectory to overtake the United States in the decade ahead” in artificial intelligence.

The conventional wisdom about China’s supposed advantages in AI and 5G shows how easily an incomplete understanding of these technologies can lead to misjudgments and policy mistakes. Balancing economic and security considerations requires considerable knowledge of specific technologies—not just a current snapshot but also a sense of how the fundamentals will shape their evolution. We believe that the most effective U.S. policies will pair openness to China with scrupulous efforts to manage the risks posed by specific technologies.

Let’s start with AI, where outdated analogies have led to wrongheaded policies. Prof. Allison has dubbed China “the Saudi Arabia of the twenty-first century’s most valuable commodity”: data. But this fashionable metaphor implies that China’s larger supply of data—garnered from its more than one billion people, with very limited privacy protection—gives it a big advantage. Chinese machine-learning algorithms can be trained on far larger data sets, this line of thinking contends, and can thus advance more quickly and powerfully than their American counterparts.

This assessment makes two fundamental errors. For one, data aren’t interchangeable: machine learning depends on specialized data sets tailored to a particular task, not mountains of undifferentiated data points. For another, the argument ignores the law of diminishing returns. Ever-larger supplies of an input like data don’t yield proportionally better results; past a point, each additional data point adds little, and noisy or redundant data may actually reduce performance. For many AI tasks, machine-generated simulations are more productive than ever-bigger troves of real-world data.
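The diminishing-returns point can be made concrete with a toy statistical example (an illustration of the general principle, not a claim about any specific AI system): when estimating a quantity from noisy samples, going from 1,000 to 10,000 data points buys a far larger improvement in accuracy than going from 10,000 to 100,000, even though the second jump adds nine times as much data.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimation_error(n_samples: int, trials: int = 100) -> float:
    """Average absolute error when estimating a true mean of 0.0
    from n_samples noisy observations (standard deviation 1.0).
    Error shrinks roughly like 1/sqrt(n_samples)."""
    errs = [abs(rng.normal(0.0, 1.0, n_samples).mean()) for _ in range(trials)]
    return float(np.mean(errs))

errors = {n: estimation_error(n) for n in (1_000, 10_000, 100_000)}

# The first tenfold increase in data removes most of the error;
# the second tenfold increase removes far less in absolute terms.
gain_first = errors[1_000] - errors[10_000]
gain_second = errors[10_000] - errors[100_000]
```

Each tenfold increase in data cuts the error by only about a factor of three (the square root of ten), so the marginal value of every additional data point keeps falling—the statistical intuition behind why a bigger raw data pile is not automatically a decisive advantage.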

When people think of AI functions that must be sequestered from China, they often imagine AI as a specific device or program, like HAL, the omniscient computer in the movie “2001: A Space Odyssey.” But AI is not a single technology; it is a family of techniques applied to many different tasks. Almost all AI research is public and conducted by a global community of researchers. Only a very few applications for specialized security tasks need to be classified and subject to export controls.

