1. Preamble and Scope

This Policy establishes InScriptum’s editorial guidance on the use of Artificial Intelligence (AI) and AI-assisted technologies in scholarly submissions and peer review. The Policy applies to all authors, reviewers and editorial team members involved in the publication workflow.

For the purposes of this Policy, the terms “AI tools” or “generative AI” refer to Large Language Models (LLMs) and other computational technologies capable of generating, analysing, synthesising, or substantially modifying text, images, data or other content. Examples include ChatGPT, Claude, Gemini and similar systems.

This Policy is designed to promote transparency, maintain research integrity, protect intellectual property and facilitate responsible advancement of scholarly communications in an AI-enabled academic landscape.

 

2. Core Principles

InScriptum operates according to the following foundational principles regarding AI:

2.1 Human Accountability

Authorship remains fundamentally a human responsibility. Authors bear full responsibility and accountability for all content in their manuscripts, irrespective of whether AI assistance was utilised in its creation or refinement. This responsibility extends to the accuracy, originality and comprehensiveness of all cited sources, data interpretations and claims, including those generated or enhanced with AI assistance.

2.2 Transparency and Disclosure

All use of AI tools during manuscript preparation, research design or image generation must be clearly and comprehensively disclosed.

2.3 Research Integrity

AI may enhance, but must not substitute for, rigorous research methodology and critical thinking. Substantive research decisions must result from human expertise and evaluation.

2.4 Confidentiality and Data Protection

Manuscripts and peer review materials constitute confidential information. Submission of these materials to external AI systems without explicit authorisation breaches author and reviewer confidentiality and may violate intellectual property rights.

 

3. Policy for Authors

3.1 Permitted Uses of AI Tools

InScriptum recognises legitimate applications of AI tools in scholarly work:

  • Brainstorming and ideation: generating research ideas, identifying knowledge gaps and exploring research directions;
  • Search and discovery: using AI-enhanced search engines and literature databases to locate relevant sources;
  • Literature synthesis: organising and categorising published literature to identify patterns and trends;
  • Research design support: identifying relevant methodological approaches or analytical frameworks, subject to author expertise and validation;
  • Language and writing quality: grammar checking, style improvement, clarity enhancement and sentence reorganisation (provided the scientific content and the author’s original contribution remain unchanged);
  • Reference and citation management: organising and formatting references (subject to the author(s)’ verification of compliance with the InScriptum stylesheet);
  • Data visualisation: creating figures and diagrams that accurately represent research findings.

3.2 Prohibited Uses

The following uses of AI are prohibited:

  • Fabricating content: creating references, citations, quotations or factual claims without verification;
  • Generating abstracts or methods: producing these critical sections without human authorial review and revision;
  • Wholesale text generation: replacing human-authored analysis, discussion or conclusions with AI-generated content without substantial human revision and intellectual contribution.

3.3 Mandatory Disclosure

Authors must disclose all instances in which AI tools were employed in creating or substantially modifying:

  • Substantial portions of the manuscript text;
  • Research data or methodology;
  • Figures, images or graphical content;
  • Supplementary materials.

The disclosure statement is to be submitted together with the manuscript, as a separate file, and must specify:

  • The author’s personal information, including given and family name(s), affiliation(s), ORCID iD and email address;
  • The title of the manuscript;
  • AI tool(s) used: full name of the tool, version number and developer/provider;
  • Purpose and extent of use: specific tasks performed and scope of application (e.g., “assisted with organising the literature review section”, “used for grammar and clarity improvements”, and the like);
  • Human oversight: a description of how the author(s) reviewed, edited and validated the AI output;
  • Responsibility statement: an explicit assertion that the author(s) take full responsibility for all content.

Recommended format:

Declaration of AI Tool Use: The author(s) used [TOOL NAME, VERSION] to assist with [SPECIFIC TASKS]. The AI-generated content was reviewed, edited, and substantially revised by the author(s) to ensure accuracy, originality, and alignment with research findings. The author(s) retain full responsibility for the integrity and accuracy of this work.

No disclosure is required for minor grammar and spelling corrections, basic punctuation adjustments and synonym suggestions that do not substantively alter meaning or require verification.

3.4 Data Security and Intellectual Property

Authors must:

  • Review the terms of service for any AI tool before use, ensuring confidentiality and data protection standards;
  • Verify that the AI platform does not claim ownership or licensing rights to submitted materials;
  • Confirm that input data will not be used to train the AI system without explicit consent;
  • Exercise particular caution with personally identifiable information, which should not be uploaded to external AI systems.

 

4. AI Policy for Peer Reviewers

4.1 Prohibited Uses

Peer reviewers must not use AI tools to:

  • Analyse, summarise, or critique submitted manuscripts or portions thereof;
  • Generate or substantially draft peer review reports;
  • Substitute AI analysis for human scholarly judgment;
  • Upload or input manuscript content into external AI systems.

Rationale: Peer review constitutes a confidential scholarly process. Manuscript content, even when anonymised, is privileged information. Use of external AI systems to process manuscript materials risks breaching author confidentiality and intellectual property protections, and may constitute violations of data privacy regulations.

4.2 Limited Permitted Uses

Reviewers may use AI tools for:

  • Language improvement only: using AI-assisted grammar and clarity checking to improve the expression of their completed review (provided no manuscript content is submitted to the AI system);
  • Literature searching: identifying relevant published work to contextualise their review comments.

4.3 Reviewer Accountability

If a reviewer uses any AI assistance in preparing their review, they must:

  • Disclose this use to the journal editors;
  • Confirm that no manuscript content was submitted to AI systems;
  • Verify that all factual claims in their review remain accurate and are supported by their own expertise;
  • Bear full responsibility for the content and accuracy of their review report.

4.4 Decision-Making Integrity

Final editorial decisions regarding manuscript acceptance, rejection, or revision must reflect human judgment and expertise. AI tools may not be used to make or substantially influence editorial decisions regarding individual manuscripts.

 

5. Authorship and Accountability

5.1 AI Cannot Be an Author

Artificial intelligence systems, models, or agents cannot be listed as authors or co-authors of manuscripts submitted to InScriptum.

Rationale: Authorship implies legal, ethical, and professional responsibility for the work’s content, integrity, and claims. Authorship requires the ability to:

  • Defend the work against criticism;
  • Correct errors or falsifications;
  • Accept legal liability for claims made;
  • Approve the final version for publication;
  • Manage copyright and licensing agreements.

These capacities are uniquely human and cannot be undertaken by AI systems.

5.2 Author Responsibility

All authors remain fully accountable for:

  • The originality and authenticity of their contribution;
  • The accuracy of all data, analyses, and citations;
  • Verification of AI-generated content for factual correctness;
  • Identification and correction of errors, hallucinations, or fabrications in AI output;
  • Compliance with this policy and all other publication ethics standards.

 

6. Plagiarism, Fabrication, and Research Misconduct

6.1 AI and Plagiarism

Submitting AI-generated text as one’s own original work constitutes plagiarism, regardless of disclosure. Submitting AI output with only minimal modification likewise violates publication ethics.

6.2 AI Hallucination and Fabrication

AI systems are known to generate plausible but false information, including fabricated citations, fictional research findings and inaccurate quotations. Authors remain responsible for verifying the accuracy of all content. Submission of manuscripts containing unverified or false AI-generated claims will be treated as research misconduct and grounds for rejection.

6.3 Investigation

InScriptum reserves the right to investigate suspected violations of this policy. Investigations may include:

  • Comparison of submitted manuscripts with AI tool capabilities;
  • Verification of all citations and data claims;
  • Author interviews regarding the creation and revision process.

Confirmed violations may result in:

  • Desk rejection;
  • Retraction of published work;
  • Notification to the author’s institution;
  • Report to research integrity authorities.

 

7. Transparency in Practice: Sample Scenario

An author used ChatGPT to brainstorm research questions, organise a literature review and improve sentence clarity in the introduction and results discussion.

Required disclosure statement:

Declaration of AI Tool Use: In preparing this manuscript, the author(s) used OpenAI’s ChatGPT-4 (accessed November 2025) to support: (1) brainstorming and organising research questions; (2) categorising and summarising literature for the introduction section; and (3) improving clarity and sentence structure in the introduction and results discussion. All AI-generated text was substantially revised and integrated into the author’s own analysis and interpretation. The literature summary was verified against the cited sources, and research conclusions were independently derived by the author(s). The author(s) take full responsibility for the accuracy and originality of this work.

 

8. Policy Evolution and Exceptions

8.1 Policy Evolution

This Policy will be reviewed annually and updated to reflect:

  • Developments in AI technology;
  • Emerging best practices in scholarly publishing;
  • Research evidence regarding AI in academic contexts;
  • Feedback from authors, reviewers, and editors.

8.2 Case-Specific Exceptions

Requests for exceptions to this policy may be submitted to the Editor(s)-in-Chief for consideration. Such requests must:

  • Explain the specific circumstances warranting an exception;
  • Justify why the proposed AI use aligns with InScriptum’s core principles;
  • Demonstrate that the exception will not compromise research integrity or author confidentiality;
  • Receive written approval before manuscript submission.

 

9. References and Related Resources

Key sources informing this Policy include but are not limited to:

  • Elsevier Generative AI Policies for Journals (2025);
  • APA Journals Policy on Generative AI (2025);
  • Resnik, D.B., & Hosseini, M. “Disclosing Artificial Intelligence Use in Scientific Research”. Accountability in Research (2025);
  • Taylor & Francis AI Policy (2024);
  • Committee on Publication Ethics (COPE) Position Statement on Artificial Intelligence in Research Publications (2023);
  • International Committee of Medical Journal Editors (ICMJE) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (2023);
  • World Association of Medical Editors (WAME) Recommendations on Chatbots and Generative AI in Relation to Scholarly Publications (2023).

 

10. Contact and Questions

For questions or clarifications regarding this Policy, authors and reviewers should contact the Editorial Team. Section Editors seeking policy guidance should consult the Editor(s)-in-Chief.

Appendix: Submission Checklist for Authors Using AI

  •  I have used AI tools as described in Section 3.1 and not in prohibited ways (Section 3.2).
  •  I have substantially revised all AI-generated content to reflect my own analysis and interpretation.
  •  I have verified the accuracy of all citations, data and factual claims.
  •  I have reviewed all images and figures for accuracy and integrity.
  •  I have reviewed my AI tool provider’s terms of service to ensure data protection.
  •  I have disclosed my AI tool use in a separate statement.
  •  I understand that I bear full responsibility for all content in this manuscript.

Declaration of AI Tool Use for the development of InScriptum AI Policy:

In preparing this Policy, the Editor(s) used the Perplexity AI Platform (operated by Perplexity AI, Inc.; accessed November 2025) to (1) identify current trends in the use of AI in scholarly publishing; and (2) locate existing AI policies. These policies were reviewed and analysed, and the final AI Policy was drafted on that basis.