Technology and Economics: A High-Powered Partnership

Thomas Cao scrutinizes technology policy as an economist

Thomas Cao is an interdisciplinary political economist. He uses his training as a social scientist to bring greater scrutiny to technology policy.

As an assistant professor of technology policy, Cao works at the intersection of technology and political behavior in democratic and authoritarian contexts. He examines how artificial intelligence shapes political opinions, investigates how information more broadly affects people, and studies the kinds of regulations governments impose on the information landscape.

Information and Technology Reshape Society 

Cao’s interest in these social and political issues is, in part, personal. 

“Growing up in China, I was quite affected when I first saw that information on Wikipedia was blocked,” said Cao. “It was the first moment that generated my interest in social inquiry, and it ignited my interest in how information and related technology can affect how we think about our society.”

Cao was a Rhodes Scholar at Oxford University, where he earned his master’s at the Oxford Internet Institute, and trained as a political economist at the Stanford Graduate School of Business. During his undergraduate years at Stanford, Cao began to see how contemporary social research is often data-driven and how such a methodology can allow scholars to identify important causal impacts. 

“That was really the moment when I realized I could pursue this path professionally, because it combines my interest in social science topics with this rigorous, scientific approach of inquiry,” he said. 

Cao paid close attention to the deep learning revolution of the early 2010s as it unfolded in Silicon Valley. This led him to identify artificial intelligence as a particular area of research focus. 

“I saw the first demonstration of the power of artificial intelligence based on the big data and computational power that we had access to,” said Cao. “That was unprecedented. I was really amazed by the progress with predictive power that a lot of those models could achieve.” 

“This was going to shape how we think about a lot of the social, political issues,” he added.

U.S. and China: Case Studies in Content Moderation

In his research, Cao dissects the power of technology and its relationship to policy and public perceptions. He has applied his inquiry to both China and the United States. In both contexts, Cao seeks to understand how people respond to censorship and the circumstances in which they agitate in favor of it. 

In China, he studied how censorship, combined with propaganda, sways people’s beliefs.

“We looked at what happens when people are exposed to social media comment sections that were moderated by state-sponsored propaganda accounts,” said Cao.

“If you can only see positive comments supporting the regime’s opinion, while your own opinions might not change, you inflate your second-order belief: how many other people you believe support these policies and the regime. This has implications for how we understand authoritarian politics,” he added.

In the U.S., Cao seeks to understand the conditions under which users ask for increased content moderation. Juxtaposing these two cases, Cao sees that people’s relationship to content moderation is far more complex than commonly believed.

“Conventional wisdom is that many people in the U.S. want content moderation policies on social media to censor fake news or false information,” said Cao. “My research focuses on the fact that it is not just about the veracity of information, but also about the perception of the negative externalities the information could generate, regardless of whether it’s true or false.”

An Interdisciplinary Approach to Technology Policy 

In his course, “AI: Algorithms, Ethics, Policy,” this fall, he’ll bring this level of nuance and scrutiny to bear on artificial intelligence. He’ll lead students through an investigation of the ethical and political impacts of AI by diving into both its technical aspects and its social ramifications. 

“I want to strike a balance between the two by providing an accessible technical foundation, so that students understand what these models are doing and some of the potential weaknesses that merit ethical and policy discussions,” he said.

As he looks to the semester ahead, he’s particularly eager to join an interdisciplinary academic community where he can apply that nuance to technology.

“I really appreciate the interdisciplinary nature of the school,” he said. “Technology policy is interdisciplinary, and here I can work with economists, political scientists, computer scientists, and legal scholars on these issues.”

Read more about Fletcher’s MALD: International & Development Economics degree program.