On March 18, 2024, the Fudan Institute for Global Public Policy and the Fudan-Arab Research Centre for Global Development and Governance organized the fourth lecture of the Fudan-Arab Lecture Series. The lecture, "Is a Global Policy Regime for AI Possible?", was delivered by Professor Dwayne Woods from Purdue University. Professor Yijia Jing, Dean of the Institute for Global Public Policy, chaired the lecture, and Assistant Professor Ziteng Fan served as the commentator.
Professor Woods analyzed potential competitive and cooperative practices among countries in global AI governance using the stag hunt game from game theory. Through the game's two pure-strategy Nash equilibria, Both Collaborate (CC) and Both Defect (DD), he identified the core dilemma of a global AI policy regime: while collective action and cooperation generate the best overall outcome, mistrust and fear of being exploited may drive countries toward self-interested strategies, ultimately locking in a universally suboptimal state. He then used a Q-learning model to simulate the strategy choices of China and the United States in global AI governance, finding that risk factors such as technological competition, economic decoupling, ideological differences, and geopolitical tensions push the system toward the DD equilibrium. When the model was extended to multiple agents, the fitted results showed that the probability of reaching the CC equilibrium rose as agents' perceived threat levels and cooperative rewards increased. Mechanisms that shape national risk perceptions, strengthen intergovernmental communication, and build consensus are therefore crucial to constructing a global artificial intelligence policy regime.
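To illustrate the kind of simulation Professor Woods described, the following Python sketch has two independent Q-learners repeatedly play a stag hunt and settle into either the CC or DD equilibrium. The payoff values, learning rate, and exploration rate here are illustrative assumptions for exposition, not figures from the lecture.

```python
import random

# Illustrative stag hunt payoffs (assumed values, not from the lecture):
# mutual cooperation pays best, defection is safe, and a lone
# cooperator is exploited and gets nothing.
PAYOFF = {
    ("C", "C"): (4, 4),   # CC: payoff-dominant equilibrium
    ("C", "D"): (0, 3),   # the cooperator is left empty-handed
    ("D", "C"): (3, 0),
    ("D", "D"): (3, 3),   # DD: risk-dominant equilibrium
}

ACTIONS = ["C", "D"]
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000  # assumed hyperparameters

def choose(q):
    """Epsilon-greedy action selection over a stateless Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=q.get)

q1 = {a: 0.0 for a in ACTIONS}  # Q-values for agent 1
q2 = {a: 0.0 for a in ACTIONS}  # Q-values for agent 2

for _ in range(EPISODES):
    a1, a2 = choose(q1), choose(q2)
    r1, r2 = PAYOFF[(a1, a2)]
    # Stateless Q-update: nudge each estimate toward the realized payoff.
    q1[a1] += ALPHA * (r1 - q1[a1])
    q2[a2] += ALPHA * (r2 - q2[a2])

print("Agent 1 Q-values:", q1)
print("Agent 2 Q-values:", q2)
```

With these payoffs the learners typically converge on the risk-dominant DD outcome; raising the reward for mutual cooperation or lowering the cost of failed cooperation shifts convergence toward CC, mirroring the lecture's point that risk perceptions and cooperative rewards shape which equilibrium emerges.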
In his comments, Assistant Professor Fan raised questions about the role of NGOs in promoting cooperation on global AI governance. Students and faculty then discussed with Professor Woods topics such as the European Union's practices in AI governance and the intertwining of AI risks with other risks.