Artificial intelligence (AI) is now established as an essential tool in scholarly publishing in the humanities and social sciences (HSS). Its applications are numerous and directly affect writing, revision, and plagiarism-detection practices. In writing, AI software can assist researchers by proposing reformulations, improving stylistic clarity, or suggesting relevant references. These tools save time and improve the linguistic quality of texts, particularly in a bilingual context where automatic translation and terminological adaptation play a crucial role. However, their use must remain regulated to preserve the originality and responsibility of the author.
In revision, AI offers innovative solutions for analyzing the coherence of arguments, detecting repetitions, or identifying methodological flaws. Some software is capable of comparing a manuscript with existing corpora to spot inconsistencies or omissions. This technical assistance can strengthen scientific rigor, but it does not replace the critical judgment of peers, which remains essential in HSS.
Plagiarism detection constitutes another major field of application. Specialized software, powered by massive databases, enables rapid identification of unattributed borrowings or suspicious similarities. These tools have become standard in universities and publishing houses, contributing to a culture of transparency and accountability. AI improves the accuracy of these detections by refining comparison algorithms and taking reformulations into account.
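To make the comparison step concrete, the following is a minimal sketch of similarity-based screening, not the method of any particular commercial tool: a manuscript is compared against a small corpus using Jaccard similarity over word trigrams. The function names (`ngrams`, `jaccard`, `screen`), the threshold value, and the toy corpus are all illustrative assumptions; production systems rely on far larger databases and richer models that can also catch the reformulations mentioned above.

```python
# Illustrative sketch: flag corpus documents whose word-trigram overlap
# with a manuscript exceeds a threshold. Near-verbatim borrowing only;
# paraphrase detection requires more sophisticated models.

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Set of lowercase word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Intersection over union; 0.0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen(manuscript: str, corpus: dict[str, str],
           threshold: float = 0.3) -> list[tuple[str, float]]:
    """Return (source, similarity) pairs above the threshold, highest first."""
    m = ngrams(manuscript)
    hits = [(name, jaccard(m, ngrams(doc))) for name, doc in corpus.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: -h[1])
```

Even this toy version shows why thresholds matter ethically: set too low, legitimate common phrasing is flagged; set too high, disguised borrowing slips through.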
AI tools offer considerable opportunities to improve the quality and efficiency of publishing processes. They facilitate writing, support revision, and strengthen the fight against plagiarism. But their use must be thoughtful: AI must remain a support and not a substitute for the creativity, responsibility, and critical judgment of researchers.
While AI opens new perspectives for scholarly publishing, it also raises major risks and challenges. The first concerns bias. AI algorithms are trained on existing corpora that reflect social, cultural, or linguistic inequalities. In HSS, this can lead to reproducing stereotypes or marginalizing certain voices. For example, a writing tool may favor formulations from Anglophone traditions, to the detriment of linguistic and cultural diversity.
Algorithm opacity constitutes another challenge. Researchers and publishers use software whose internal functioning often remains inaccessible. This "black box" makes it difficult to evaluate the reliability of results and raises questions of accountability. How can transparency be guaranteed if one cannot explain why a text was reformulated or why a passage was flagged as plagiarism? This opacity undermines trust and complicates the integration of AI into academic practices.
Text overproduction is also a worrying consequence. AI enables rapid content generation, which can encourage an inflation of publications without real scientific value. In HSS, where critical reflection and contextualization are essential, this trend risks diluting quality and saturating editorial spaces. Journals may find themselves overwhelmed by artificially produced manuscripts, making selection and evaluation work more difficult.
These challenges require increased vigilance. Researchers must be aware of the limits of the tools they use, and institutions must establish regulatory mechanisms. AI must not become a factor that undermines academic integrity but rather a lever to strengthen quality and transparency. This requires collective reflection on bias, opacity, and overproduction, in order to preserve the value and credibility of research in HSS.
In response to the rise of AI in scholarly publishing, standards and recommendations have been developed at international and national levels to regulate its use. International guidelines, such as those proposed by the Committee on Publication Ethics (COPE) or by UNESCO, emphasize transparency, accountability, and fairness. They recommend that the use of AI tools be clearly mentioned in publications, to ensure traceability and preserve reader trust.
In Canada, funding agencies such as the Social Sciences and Humanities Research Council (SSHRC) and universities are beginning to integrate specific directives on AI use. These recommendations aim to ensure that researchers use these tools responsibly, respecting ethical standards and avoiding excessive dependence. They emphasize training: students and researchers must be made aware of the advantages and limitations of AI, in order to integrate it critically into their practices.
Canadian scholarly associations also play an important role. They publish practical guides and organize workshops to discuss the implications of AI in research and publishing. These initiatives contribute to harmonizing practices and strengthening the culture of academic integrity.
Ultimately, standards and recommendations do not seek to prohibit AI use but to regulate it. They remind us that AI must remain a tool in service of research, not a substitute for critical thinking and creativity. They insist on the need to declare its use, respect principles of transparency, and guarantee equity among researchers. In a bilingual and multicultural context like Canada's, these recommendations take on a particular dimension: they must ensure that AI does not reproduce linguistic or cultural biases but contributes to enriching the diversity of knowledge.
Contemporary debates around AI in scholarly publishing in HSS are numerous and reflect the complexity of ethical issues. Some researchers see AI as an opportunity to strengthen the quality and efficiency of editorial processes, while others worry about possible abuses.
A first line of reflection concerns AI's place in scientific creativity. Can a text generated or heavily assisted by AI be considered an authentic piece of academic work? This question raises issues of responsibility and recognition. Researchers must remain the primary authors of their work, and AI can only be an assistive tool.
A second debate concerns equity. Access to AI tools is not uniform: some institutions have advanced resources, while others struggle to keep up. This inequality can accentuate disparities among researchers and undermine the diversity of voices in HSS.
The question of ethics is also central. AI relies on algorithms whose biases and opacity can compromise the reliability of results. Researchers must question the legitimacy of using tools whose functioning remains partially unknown. Moreover, the overproduction of AI-generated texts poses a problem of saturation and dilution of scientific quality.
Finally, contemporary debates emphasize the need for interdisciplinary dialogue. HSS has a particular role to play: it can analyze the social, cultural, and political impacts of AI and propose critical frameworks for its use. AI is not only a technical question: it involves values, responsibilities, and visions of knowledge.
Ultimately, critical reflections on AI and research ethics remind us that innovation must always be accompanied by ethical vigilance. AI can enrich scholarly publishing, but it must be used with discernment, transparency, and responsibility, in order to preserve the integrity and credibility of research in HSS.