The artificial-intelligence chatbot ChatGPT has written fake research-paper abstracts convincing enough to fool scientists, who could not reliably tell them apart from the real thing. A research team led by Catherine Gao at Northwestern University in Chicago used ChatGPT to generate artificial research-paper abstracts and then tested whether scientists could spot them.
Fake Research Paper Abstracts Seem Too Good to be True
According to the study, the researchers asked the chatbot to create 50 medical-research abstracts based on a selection of papers published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet, and Nature Medicine. They then asked a group of medical researchers to identify the fabricated content.
To everyone’s surprise, the generated abstracts passed the plagiarism checker, while an AI-output detector flagged only 66 percent of them. The human reviewers fared little better, correctly identifying just 68 percent of the generated abstracts and 86 percent of the genuine ones. The researchers argue that journals and medical conferences should adopt new policies, including running AI-output detectors in the editorial process and requiring clear disclosure of the use of these technologies, to safeguard scientific integrity.
The Misuse of the AI-powered Chatbot
Every technology cuts both ways, and with ChatGPT the darker side is already emerging: hackers are discussing ChatGPT in their forums and using it to create malicious code. “Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations,” warned Sergey Shykevich, Threat Intelligence Group Manager at Check Point.