Artificial Intelligence Policy
The journal Systems and Computing (SyCom) recognizes that Artificial Intelligence (AI) technologies are increasingly used in research and academic writing. While these tools may support certain tasks, their use must remain transparent, responsible, and consistent with the principles of academic integrity and research ethics.
This policy defines the acceptable and unacceptable use of AI tools during the writing, submission, review, and editorial processes.
1. AI Use by Authors
1.1 Permitted Uses
Authors may use AI tools for limited technical assistance such as:
- Language editing and grammar correction
- Improving readability or clarity of the manuscript
- Formatting assistance
Such use must not affect the originality or scientific content of the work.
1.2 Prohibited Uses
AI tools must not be used to:
- Generate scientific results, data, figures, or tables without proper verification
- Produce the core intellectual content of the manuscript
- Fabricate references, citations, or experimental results
- Replace the authors’ scientific reasoning or interpretation
Authors remain fully responsible for the accuracy, originality, and integrity of their manuscript.
1.3 Disclosure Requirement
If AI tools are used during manuscript preparation, authors must clearly disclose this use in a dedicated statement within the manuscript.
Example disclosure:
“The authors used an AI-based language editing tool to improve grammar and readability. The authors reviewed and approved all content and remain fully responsible for the manuscript.”
AI tools cannot be listed as authors because they cannot take responsibility for the work.
2. AI Use by Reviewers
2.1 Confidentiality Requirement
Manuscripts under peer review are confidential documents. Reviewers must not upload, share, or otherwise disclose them to any AI system, automated tool, or external service.
This includes:
- AI chatbots
- Automated summarization tools
- AI-based reviewing systems
The use of such systems may violate author confidentiality and copyright.
2.2 Human-Generated Peer Review
Peer-review reports must be written entirely by the reviewer. AI tools must not be used to:
- Generate review reports
- Analyze or summarize manuscripts
- Produce comments or recommendations for editorial decisions
3. AI Use by Editors
Editors must ensure that editorial decisions are based on:
- Independent human evaluation
- Expert peer review
- Scientific merit and ethical standards
AI tools may be used for technical support, such as:
- Plagiarism detection
- Language checking
- Administrative assistance
However, AI systems must not replace editorial judgment or decision-making.
4. Policy Violations
If a violation of this AI policy is suspected, the journal may take appropriate actions, including:
- Requesting clarification from authors or reviewers
- Rejecting the manuscript
- Retracting a published article
- Removing reviewers from the reviewer database
The journal reserves the right to investigate potential breaches in accordance with international publication ethics standards.