London, July 2 (SocialNews.XYZ) Compared to humans, artificial intelligence (AI) language models like OpenAI's GPT-3 can produce accurate tweets that are easier to understand but also fake ones that are harder to detect, according to a recent study.
The study, conducted by researchers at the University of Zurich, delved into the capabilities of AI models, specifically focusing on GPT-3, to determine their potential risks and benefits in generating and disseminating (dis)information.
The study, which has not yet been peer-reviewed, involved 697 participants and sought to evaluate whether individuals could differentiate between disinformation and accurate information presented in the form of tweets.
The topics covered included climate change, vaccine safety, the Covid-19 pandemic, flat earth theory, and homoeopathic treatments for cancer.
“Our results show that GPT-3 is a double-edged sword, which, in comparison with humans, can produce accurate information that is easier to understand, but can also produce more compelling disinformation,” the researchers wrote in the paper’s abstract, posted on a preprint website.
On the one hand, GPT-3 demonstrated the ability to generate accurate and, compared to tweets from real Twitter users, more easily comprehensible information.
However, the researchers also discovered that the AI language model had an unsettling knack for producing highly persuasive disinformation.
In a concerning twist, participants were unable to reliably differentiate between tweets created by GPT-3 and those written by real Twitter users.
"Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems," said Federico Germani, a postdoctoral researcher at the varsity.
These findings suggest that information campaigns created by GPT-3, based on well-structured prompts and evaluated by trained humans, would prove more effective in situations such as a public health crisis, which requires fast and clear communication with the public.
However, the findings also raise significant concerns about the threat of AI perpetuating disinformation, said the researchers, who called on policymakers to respond with stringent, evidence-based and ethically informed regulations to address the potential threats.
"The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns," said Nikola Biller-Andorno, director of the varsity’s Institute of Biomedical Ethics and History of Medicine (IBME).
"Recognising the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age," Biller-Andorno said.
Source: IANS