Published October 15, 2025 | https://doi.org/10.59350/370zz-c7b08

Why AI transparency is not enough

Creators & Contributors

Generative AI (genAI) is now ubiquitous in research. The field of academic publishing is struggling with an overwhelming mass of genAI slop and where to draw the line between acceptable and unacceptable genAI use. GenAI not only offers new opportunities, as is widely touted, but also creates numerous new problems and challenges, not least for responsible research. What does it mean to use genAI responsibly? Is it even possible? And what does it take to do so? 

In a previous post, the authors of a new taxonomy, called GAIDeT (Generative Artificial Intelligence Delegation Taxonomy), argue that genAI can be used responsibly in research, namely by being transparent about its use. The taxonomy is modelled on the CRediT taxonomy, which specifies contributor roles, and is designed to document which tasks have been delegated to genAI: from idea generation to selection of research methods, data collection and analysis, and quality assessment. An automated text generator, the GAIDeT Declaration Generator, offers a checklist and produces a standardised disclosure text. This approach is intended to support responsible genAI use "without burdening researchers", as the authors emphasise. We have several concerns about this approach and will articulate some of them in this post.

Transparency is undoubtedly an important part of good research practice, but addressing transparency alone does not answer the more general question of responsibility. Transparency concerns not only the use of genAI but, more importantly, the opacity inherent to genAI itself. Proprietary, closed-source models like ChatGPT are a black box: we do not know, and cannot influence, how these models work on the inside.

When it comes to responsible AI use, there are, at best, many conflicting positions. Equating responsibility with disclosing genAI use in fact conceals several fundamental questions concerning the responsibilities that academics have to their disciplines, to each other, and to society at large. A mere declaration risks becoming an "as if" ethics, cloaking deeper ethical questions under "responsible use" while revealing nothing about the inherent opacity of these technologies, not to mention biases, exploitation, intellectual theft, and environmental and societal costs. Language plays a key role here. The way in which the GAIDeT approach is framed exemplifies how approaches like this contribute to the individualisation of problems that are in fact systemic.

Transparent opacity

How does supposed transparency create opacity? First of all, it does so by equating standardised disclosure with taking responsibility. The authors state that transparency concerning the use of genAI in academia matters to those involved in publishing processes, but they do not elaborate on what people involved in these processes can or should do with that information. The assumption, then, is that by disclosing the use of genAI at each step, researchers are "signalling" responsibility, and that editors, reviewers, and repository curators will be able to distinguish "legitimate" from "illegitimate" uses of genAI in research processes.

[Figure 1: Screenshot of the tasks covered by the GAIDeT taxonomy]

This is a problematic claim, as it conceals the question of whether genAI use in research is or can be responsible at all. It presumes a level of discussion that has not yet been reached, thereby ultimately legitimising genAI use across the board and effectively closing the discussion. According to the authors, the taxonomy allows researchers to be open about "their AI-based tools and assistants", similar to a disclosure of conflicts of interest or funding. Yet the comparison does not hold. The delegation of tasks to "tools and assistants" constitutes a methodological decision and must be addressed as such. Researchers should therefore be required to explain why they trust a black box that is neither open nor fair. Disclosing the use of genAI through a taxonomy avoids addressing pressing questions like this.

What is concealed

Calling genAI a "tool" has become the standard way of referring to what is, in fact, a political technology. Tools are seen as neutral and unaccountable. This notion rests on several false assumptions: that tools are indispensable; that tools don't have human flaws; that technology equals progress and improvement. Indeed, a calculator spits out fault-free calculations, but genAI is not "a calculator for words", as OpenAI's Sam Altman quipped. There's no consensus on what it actually is, but it's certainly more than a bit of software. Dan McQuillan has suggested calling AI an "apparatus", since it consists of several layers of technology, institutions, and ideology (McQuillan, Resisting AI, 3). Kate Crawford urges us to see AI as "embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, classifications" (Atlas of AI, 8).

Whose burden?

One of the justifications given for GAIDeT is that it does not burden researchers. However, by outsourcing brainstorming, drafting, writing, reading, translating, copyediting, etc., we deliberately deskill ourselves, harm our metacognitive abilities, and devalue essential competencies of academic research. We must ask ourselves what the incentives for doing so are. We recognise that academia is stretched for resources such as funding and time. Our tasks grow while budgets shrink. Researchers whose mother tongue is not English struggle to compete in a system where English is the lingua franca. But this only shows that researchers are in a vulnerable position and that the genAI industry can and does exploit it. GenAI is not empowering researchers but cementing their precarious roles.

While these issues are systemic and demand discussions about how we organise academia, we should be careful not to treat responsibility as yet another item on an ever-growing task list, nor to accept the individualisation of responsibility while a hypercompetitive system increases demands on us. Doing research is hard and takes time, but good research practice is not a burden – it is the foundation of research itself.

Our question is: How responsible is it to use genAI when its long-term effects on society are not yet foreseeable, but we know the most vulnerable are already being harmed? When its use consolidates the power of an industry that cares little or nothing about the well-being of society or its individual members? Genuine transparency would require admitting that one accepts a multitude of harmful consequences: data theft, large-scale rare-minerals extraction and water consumption, damage to the physical and mental health of people worldwide, especially in poor countries, exploitation of people's labour, and the amplification of cultural and social biases and discrimination.

What we need are more conversations about how genAI is being pushed everywhere, who is benefitting, who is being harmed, what the long-term consequences are, and what the use of genAI is compensating for. The work of many of us as researchers is supported by society, and we therefore want to give something back. That means acting responsibly. It is not a task that can or should be automated. This is precisely why we need to think carefully about how we do our work and what means we want – and do not want – to use for that.

Suggested reading list


Disclaimer: this post was written without the use of genAI.
Header image: A.Rahmat MN on Unsplash.
