Academic studies on health, the environment and computing are increasingly susceptible to artificial intelligence-driven fabrications, putting at risk the quality of scientific research, including research that underlies legal arguments, new research suggests.
A new study published in the Harvard Kennedy School Misinformation Review examined a sample of scientific papers on Google Scholar showing indications of GPT, or generative pre-trained transformer, use. GPT is a family of AI models developed by OpenAI, a tech research firm.
About 67% of the papers examined were found to have been produced through undisclosed uses of GPT, pointing to potentially deceptive practices, according to the study.
“The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations,” the study states.
Such tainted academic research could lead to a decline in public trust in science and evidence-based materials, according to the research paper.
“The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern,” the study states.
Assemblywoman Jacqui Irwin (D-Thousand Oaks), whose legislation frequently deals with technology-related issues, noted that Gov. Gavin Newsom just signed Senate Bill 942, which will require developers of generative artificial intelligence (GenAI), meaning AI that generates text, images and videos, to provide tools for detecting whether materials were created with AI.
“The rapid deployment of GenAI is requiring many fields, including the legal and scientific professions, to take precautions while understanding how these tools can be beneficial to their work,” Irwin told the Southern California Record in an email.
A bill authored by Irwin that is now on the governor’s desk, Assembly Bill 2013, would provide more transparency on AI training data to professionals who use the technology so that they can detect whether fake scientific papers are influencing AI tools. A bill authored by another lawmaker that didn’t pass the Legislature this year would have called on attorneys to disclose their use of AI, according to Irwin.
“... Other state government bodies are working to ensure GenAI is used appropriately,” she said. “Last January the State Bar published their ‘Practical Guidance’ for using GenAI in the practice of law, and it speaks specifically to the ongoing duty of diligence attorneys have in critically reviewing and validating their work product.”
The bottom line is that attorneys have a duty to go beyond merely detecting and removing false AI-generated results; they also need to understand the professional obligations that AI use creates, according to the California State Bar’s guidelines. The State Bar has also emphasized that traditional legal research and writing remain critical to the profession.
“I strongly believe AI will be a continued focus for the Legislature in the years to come, with many opportunities to thoughtfully build on foundational efforts like AB 2013, SB 942 and the administrative actions of bodies like the State Bar,” Irwin said.
Thorough research into the health effects of pharmaceutical drugs and other products is often key to resolving plaintiff class actions against drug companies. The Harvard Kennedy School Misinformation Review study, meanwhile, calls the current scholarly publishing system “largely dysfunctional” and driven by higher education’s “publish or perish” paradigm.
The study warns that fabricated scholarly studies amounting to junk science threaten public confidence in societal norms.
“... As the rise of the so-called anti-vaxx movement during the COVID-19 pandemic and the ongoing obstruction and denial of climate change show, retracting erroneous publications often fuels conspiracies and increases the following of these movements rather than stopping them,” the research says.