Academic Writing, Submission, and Publishing in the Era of AI Disruption
Time: 2025-06-17

On May 20, 2025, the Fudan Institute for Global Public Policy (IGPP) held a roundtable discussion titled “Academic Writing, Submission, and Publishing in the Era of AI Disruption.” The session was moderated by Assistant Professor Meijun Liu of IGPP.

In his opening remarks, Dean Yijia Jing of IGPP emphasized the growing importance of AI and data-driven approaches. He noted that while AI technologies bring significant conveniences to research and academia, they also pose new challenges. He briefly introduced the five keynote speakers and outlined three core themes.

Senior scientist Robin Haunschild of the Max Planck Society delivered an in-depth analysis of the ethical challenges posed by AI-assisted writing and of potential innovations in peer review. He emphasized the need to distinguish between AI-assisted and AI-generated writing, and identified several “red flags” that help detect AI-generated academic content. Dr. Haunschild recommended that authors include a clear AI usage statement upon submission. For the review process, he supported basic checks by AI tools, supplemented by human verification. He insisted that the “final judgment must remain human,” arguing that AI tools can only provide leads and must not replace editorial discernment, lest authors face false accusations of academic misconduct.

Professor Li Tang of Fudan University discussed the urgent need for academic journals to establish policy frameworks in response to AI advancements. Using Quantitative Science Studies and Technological Forecasting and Social Change as case studies, she highlighted how concerns about AI differ across fields. Professor Tang pointed out that many journals still lack explicit disclosure policies on AI usage, and advocated encouraging authors to proactively report details of the AI tools employed in their research in order to uphold academic integrity. She also emphasized the need for clear classification standards to regulate AI use and called for rethinking the peer review process to address the “double-edged” effect of AI: enhanced efficiency paired with risks of misconduct.

Professor Xi Lin of Fudan University systematically explained the potential applications, challenges, and practical pathways of AI in the peer review process. He outlined several stages where AI could streamline workflows. Professor Lin emphasized challenges specific to the humanities and social sciences: algorithmic bias, difficulty in understanding deep contextual meaning, intellectual property issues, data security, personal privacy protection, and the “black box” problem in AI decision-making. To address these, he suggested that journals establish transparent AI usage policies, provide targeted training for editorial teams, and pilot AI use in small-scale projects before broader application.

Professor Haozhi Pan of Shanghai Jiao Tong University warned that over-reliance on AI in academic publishing could trigger a vicious cycle and a crisis in the scholarly ecosystem. He criticized the “fully AI-driven” model, in which AI dominates every stage from writing to reviewing to proofreading, producing a flood of repetitive and meaningless studies. Professor Pan stressed the need to avoid such a closed-loop system. He advocated clearly defining AI’s role as strictly auxiliary in writing, preventing it from replacing humans in critical research phases, so that researchers retain control over key judgments and decisions in knowledge production. He called on the academic community to stay vigilant against technological alienation in order to safeguard the originality and diversity of the scholarly ecosystem.

Associate Professor Zhiteng Fan of IGPP analyzed the current challenges and potential improvements in AI policies in academic publishing, using the journal Global Public Policy and Governance as a case study. He identified two major pain points in existing AI use policies: authors generally lack motivation to proactively disclose AI usage, and the academic community lacks consensus on the appropriate role of AI in research and writing. To address these issues, he introduced innovative efforts by Public Administration Review, including a clear “positive and negative list” specifying permitted and prohibited AI uses, a “self-report checklist” requiring authors to detail their AI usage in footnotes, and plans to develop or adopt AI detection tools to enhance the reliability of the review process.

During the discussion session, the speakers and participating faculty and students exchanged views in depth on topics such as “AI and human agency,” “training and regulation for AI usage,” and “differences between AI-generated texts in Chinese and English.” The seminar invigorated the conversation on standards for AI usage and offered valuable insights into balancing technological innovation with academic integrity.