Archimag: The specialized reference in digital information management
According to the journal Nature, nearly 10,000 articles were retracted across multiple publications in 2023 alone due to fraud, a figure that experts in scientific publishing widely consider an underestimate, seeing it as only the visible part of a much larger problem.
The issue has become serious enough for Science to announce, in 2024, the adoption of two tools designed to detect scientific misconduct: Proofig AI, for image manipulation, and iThenticate, for plagiarism detection. These tools are now embedded in editorial processes to screen manuscripts prior to publication.
Detecting manipulation before publication
Proofig AI relies on machine learning algorithms to analyse millions of data points within an image. It can identify alterations, data removal or suspicious duplications in scientific figures. At the end of its analysis, the software generates a report highlighting anomalies and potentially manipulated areas. This report is then reviewed by editors, research integrity committees or peer reviewers, who carry out a human assessment of the findings.
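The duplication check described above can be illustrated with a deliberately simplified sketch. This is not Proofig AI's algorithm (which is proprietary and far more sophisticated); it merely shows the underlying idea of flagging identical regions that recur within a figure, using a toy 2-D grid of pixel intensities as stand-in image data:

```python
from collections import defaultdict

def find_duplicate_tiles(image, tile=2):
    """Flag identical tile-sized regions that appear more than once.
    `image` is a 2-D list of pixel intensities -- a toy stand-in for
    real image data; actual integrity tools analyse far richer features."""
    h, w = len(image), len(image[0])
    seen = defaultdict(list)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            # Hash the tile's pixel values and record where it occurs.
            key = tuple(image[y + dy][x + dx]
                        for dy in range(tile) for dx in range(tile))
            seen[key].append((y, x))
    # Keep only tiles seen at two or more locations: candidate duplications.
    return {k: v for k, v in seen.items() if len(v) > 1}

img = [
    [10, 10, 50, 60],
    [10, 10, 70, 80],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
dups = find_duplicate_tiles(img)  # one tile pattern found at three positions
```

A real detector would work on perceptual features robust to rotation, rescaling and contrast changes, which is precisely why its output still requires the human review the article describes.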
Developed by Turnitin, iThenticate is capable of detecting plagiarism at sentence or paragraph level using advanced algorithms. Its use is straightforward: editors upload a document (PDF or text file), and the system produces a similarity report, including an overall match percentage, links to original sources and comparison tools.
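The kind of report described above (sentence-level matches plus an overall percentage) can be sketched with a minimal toy matcher. This is purely illustrative and bears no relation to iThenticate's actual algorithms or index of sources:

```python
from difflib import SequenceMatcher

def similarity_report(manuscript, source, threshold=0.8):
    """Toy sentence-level matcher: flags manuscript sentences whose best
    match in `source` meets `threshold`, and computes an overall match
    percentage. Illustrative only -- not iThenticate's method."""
    m_sents = [s.strip() for s in manuscript.split('.') if s.strip()]
    s_sents = [s.strip() for s in source.split('.') if s.strip()]
    flagged = []
    for ms in m_sents:
        # Best similarity of this sentence against any source sentence.
        best = max((SequenceMatcher(None, ms.lower(), ss.lower()).ratio()
                    for ss in s_sents), default=0.0)
        if best >= threshold:
            flagged.append((ms, round(best, 2)))
    overall = 100 * len(flagged) / len(m_sents) if m_sents else 0.0
    return overall, flagged

manuscript = ("The cell cultures were incubated at 37 degrees. "
              "Our novel method differs entirely.")
source = ("The cell cultures were incubated at 37 degrees. "
          "Unrelated text here.")
overall, flagged = similarity_report(manuscript, source)
# One of two sentences matches, so the overall figure is 50.0
```

Production systems compare against billions of indexed documents and handle paraphrase, which a character-level ratio like this cannot; the sketch only conveys the shape of the report editors receive.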
AI cannot be an author
For Holden Thorp, Editor-in-Chief of Science, these tools reinforce a non-negotiable principle: "an AI system cannot be considered as an author."
The journal has updated its editorial policy accordingly, sending a clear message to researchers: any breach of this rule is considered scientific misconduct, on the same level as image manipulation or plagiarism.
Fabricated references and misleading outputs
For its part, the French Office for Research Integrity (Ofis) has outlined a list of acceptable uses of AI in scientific publishing. These include summarising articles, defining research questions, developing and structuring arguments, compiling bibliographies, selecting relevant literature, and writing or optimising code. This list, however, may appear permissive to some and restrictive to others.
Ofis, established in 2017, also issues a warning: generative AI systems can produce convincing but incorrect, or even entirely fabricated, information. This has been observed, for example, in literature reviews, bibliographic references, and responses to scientific questions.
Such limitations expose researchers to the risk of disseminating inaccurate information, or even engaging in fabrication or falsification.
Towards evolving guidelines
The position of the Ofis aligns with the code of conduct published by the European Federation of Academies of Sciences and Humanities. The document clearly states what is prohibited, including "concealing the use of AI or automated tools in content creation or in the drafting of publications." It also encourages researchers to disclose the AI tools they have used.
These guidelines will likely need regular updates, as artificial intelligence continues to evolve and, according to many observers, is still only at an early stage of development.