AI Policy
Introduction
HUMANIOLA recognizes the potential of Artificial Intelligence (AI) tools to enhance the research and writing process. Our policy aims to guide authors, reviewers, and editors on the responsible and ethical use of these technologies while upholding the highest standards of scholarly integrity and protecting intellectual property. This policy will evolve as the AI field progresses.
The Editors will evaluate each case individually to prevent any misuse of AI.
General Principles
Transparency: The use of AI tools must be transparent, so that readers are aware when AI has contributed to a work.
Integrity: The policy upholds the integrity of scholarly research and publication, placing accountability squarely on the human authors, reviewers, and editors.
Data Security and Copyright Protection: Authors and editors must ensure that any AI tool used meets high standards of data security, confidentiality, and copyright protection.
Adaptability: This policy will be updated as AI technology and research ethics guidelines evolve.
This policy seeks to enable the beneficial use of AI while protecting the scholarly record and the rights of authors and contributors. By adhering to these guidelines, HUMANIOLA will preserve the trust and integrity of its publications.
For Authors
Permitted Uses of AI: Authors may use generative AI tools for idea generation, language enhancement, literature classification, and coding assistance. AI-assisted editing that improves language, grammar, or structure is permitted without disclosure, but authors remain responsible for the accuracy of their work.
Required Disclosure: Any use of generative AI tools to generate content, whether text, images, or other material, must be clearly disclosed. The disclosure should name the tool used (with version number), explain how it was used, and give the reason for its use. For article submissions, the disclosure must appear in the Methods or Acknowledgments section.
Accountability and Responsibility: Authors are responsible for the originality, validity, and integrity of their submissions. They are accountable for the content, including ensuring accuracy, correcting any errors or biases introduced by AI, and checking for plagiarism. The use of AI tools must not replace the core responsibilities of researchers and authors.
Prohibited Uses of AI: AI tools must not be listed as authors because they cannot assume responsibility for the submitted content, manage copyright, or enter into publishing agreements. Authors must not use generative AI to create or alter images, figures, or original research data unless the AI is part of the research design, in which case its use must be clearly described. Prohibited alterations include enhancing, obscuring, moving, removing, or introducing any feature within an image.
Content Integrity: Authors must ensure that any AI-generated content is accurate, factual, and free of bias and falsehoods, and must verify all claims and citations.
For Reviewers
Confidentiality: Reviewers must not upload unpublished manuscripts, or any part of them, into generative AI tools, as doing so risks breaching confidentiality and proprietary rights. This includes all associated files, images, and information. Review reports must likewise not be uploaded into generative AI tools.
Original Assessment: Generative AI tools should not be used to analyze or summarize submitted articles. Peer review requires human critical thinking and original assessment.
Language Assistance: AI may be used to improve the language of the review, but the reviewer remains responsible for the accuracy and integrity of the review.
Consequences of Violation: Reviewers found to have used generative AI inappropriately to produce review reports will no longer be invited to review for the journal.
For Editors
Confidentiality: Editors must not upload unpublished manuscripts or any associated materials into generative AI tools. All communication about the manuscript must remain confidential.
Human Oversight: Editors maintain overall responsibility for the content published in the journal. Generative AI should not be used to evaluate or make decisions about manuscripts, as these tasks require critical human judgment.
Permitted Uses: Editors may use AI tools to identify suitable peer reviewers.
Prohibited Uses and Reporting: Editors must not use generative AI to generate decision letters or summaries of unpublished research. If an editor suspects that an author or reviewer has violated this AI policy, they should inform the publisher.
Editorial Investigation: The journal and the publisher will jointly investigate concerns raised about the inappropriate or undisclosed use of generative AI in published articles, following COPE guidance and internal policies.