Vol. XXXV Issue 2
December 2024
ISSN online version: 1852-6233
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Note from the General Editor
In recent years, Artificial Intelligence (AI) has transformed the methodologies of scientific research and the writing of scientific papers, as well as the editorial management of scientific journals and academic books. This technology, which allows computers to simulate human intelligence and problem-solving capacities, involves the development of algorithms modeled on the decision-making processes of the human brain, which can "learn" from available data to make classifications and predictions.
The benefits of using AI are clear, among them increased efficiency in the processes of investigation, in the analysis and interpretation of results, and in decision making. Nevertheless, the international scientific community has raised concerns about the use of Large Language Models (LLMs), such as ChatGPT, Google Bard, or Bing, in these activities. These concerns center principally on ethical aspects, and on the authenticity and integrity of scientific publications, given the possibility of fraudulent or malicious use of these tools. Indeed, challenges remain in avoiding the harms that can derive from the use of AI, such as the reinforcement of biases, loss of data privacy (particularly important in research involving human beings), the perpetuation of inaccuracies, and the potential erosion of critical thinking through over-reliance on these tools. This erosion of critical thinking, particularly among the youngest researchers, who are more inclined to use AI because of their familiarity with such technologies, could have undesirable consequences for the advancement of scientific knowledge and its applications.
For these reasons, the need has arisen to develop guides or protocols for the use of AI in scientific research, writing, and editorial management, so as to ensure its ethical and responsible application. On this point, the international community has reached a minimum consensus, namely: (a) AI used in the development of a work or the preparation of a manuscript cannot be cited as an author, because only human beings can be responsible for the content of a scientific article; (b) since they are not legal entities, AI models cannot make statements regarding conflicts of interest or manage copyrights or licenses of use; and (c) editors and reviewers remain responsible for the evaluations, opinions, and decisions concerning the manuscripts they handle. In short, the various actors in the scientific system are fully responsible for their actions, including those performed with AI, and thus for any ethical breach.
It is fundamental, then, that users of AI choose these tools according to the benefits they may provide, but with full knowledge of their limitations and of the harm that may result from improper use. Likewise, it is imperative that the international community continue working on the development of guides and protocols to ensure the responsible and ethical use of AI in the scientific field.
Elsa L. Camadro