
GPT-fabricated scientific papers on Google Scholar


Peer Reviewed


Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society's evidence base, particularly in politically divisive domains, is a growing concern.

By

Jutta Haider

Swedish School of Library and Information Science, University of Borås, Sweden

Björn Ekström

Swedish School of Library and Information Science, University of Borås, Sweden

Malte Rödl

Department of Environmental Communication, Swedish University of Agricultural Sciences, Sweden

Image by viarami on Pixabay

Research Questions

  • Where are questionable publications produced with generative pre-trained transformers (GPTs) that can be found via Google Scholar published or deposited?
  • What are the main characteristics of these publications in relation to primary subject categories?
  • How are these publications spread across the research infrastructure for scholarly communication?
  • How is the role of the scholarly communication infrastructure challenged in maintaining public trust in science and evidence through inappropriate use of generative AI?

Research Note

Summary

  • A sample of scientific papers with signs of GPT use found on Google Scholar was retrieved, downloaded, and analyzed using a combination of qualitative coding and descriptive statistics. All papers contained at least one of two common phrases returned by conversational agents that use large language models (LLMs) like OpenAI's ChatGPT. Google Search was then used to determine the extent to which copies of questionable, GPT-fabricated papers were available in various repositories, archives, citation databases, and social media platforms. (A minimal sketch of this retrieval step appears after this list.)
  • Approximately two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing) susceptible to influence operations. Most were available in multiple copies on different domains (e.g., social media, archives, and repositories).
  • Two main risks arise from the increasingly common use of GPT to (mass-)produce fake, scientific publications. First, the abundance of fabricated "studies" seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility and awareness of it risk undermining the basis for trust in scientific knowledge and pose serious societal risks.
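To make the retrieval step described above concrete, here is a minimal Python sketch that queries Google Scholar for an exact phrase and extracts result titles. This is an illustration, not the authors' actual pipeline: the example phrase, the h3.gs_rt result markup, and the assumption that unauthenticated requests succeed are all ours, since the summary does not specify them; Google Scholar offers no official API and rate-limits scrapers aggressively.

```python
# Minimal sketch: search Google Scholar for a telltale LLM phrase and list result titles.
# Assumptions (not from the paper): the example phrase, the 'gs_rt' title class,
# and that unauthenticated requests succeed (Scholar may throttle or show a CAPTCHA).
import requests
from bs4 import BeautifulSoup

# One phrase commonly associated with ChatGPT output; the study's exact
# search strings are described in the full paper, not in this summary.
PHRASE = "as of my last knowledge update"

def scholar_titles(phrase: str, start: int = 0) -> list[str]:
    """Fetch one page of Google Scholar results for an exact-phrase query."""
    resp = requests.get(
        "https://scholar.google.com/scholar",
        params={"q": f'"{phrase}"', "start": start},
        headers={"User-Agent": "Mozilla/5.0"},  # Scholar blocks default clients
        timeout=30,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Result titles sit in <h3 class="gs_rt"> elements (markup may change).
    return [h3.get_text(" ", strip=True) for h3 in soup.select("h3.gs_rt")]

if __name__ == "__main__":
    for title in scholar_titles(PHRASE):
        print(title)
```

In practice such a harvest would also need pagination over the start parameter, polite delays, and manual verification of every hit, since a telltale phrase alone does not prove undeclared GPT use.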
