New Delhi, June 29 (SocialNews.XYZ) People appear to find tweets written by artificial intelligence (AI) language models more convincing than those created by humans, a new study has shown.
According to the study published in Science Advances, disinformation generated by AI may be more convincing than disinformation written by humans.
For the study, the researchers asked OpenAI's GPT-3 model to write tweets containing either accurate information or disinformation on a range of topics, including vaccines, 5G technology, Covid-19 and the theory of evolution, all of which are commonly subject to disinformation and public misconception.
They also collected a set of real tweets written by Twitter users on the same topics and built a survey around both sets.
The researchers then recruited 697 people to take an online quiz in which they judged whether tweets were generated by AI or collected from Twitter, and whether they were accurate or contained disinformation.
They discovered that participants were three per cent less likely to believe human-written false tweets than AI-written ones.
According to Giovanni Spitale, a researcher at the University of Zurich in Switzerland who led the study, the researchers are unsure why people are more likely to believe tweets written by AI, but the way GPT-3 orders information could play a role.
Moreover, the study said that the content written by GPT-3 was "indistinguishable" from organic content.
The people polled could not tell the difference. One limitation of the study is that the researchers cannot be 100 per cent certain that the tweets gathered from social media were not written with the assistance of apps like ChatGPT.
Participants were most effective at identifying misinformation written by real Twitter users; however, GPT-3-generated tweets containing false information deceived them slightly more effectively, the study found.
Further, the researchers predicted that advanced AI text generators such as GPT-3 could greatly affect the dissemination of information, both positively and negatively.
"As demonstrated by our results, large language models currently available can already produce text that is indistinguishable from the organic text; therefore, the emergence of more powerful large language models and their impact should be monitored," the researchers stated.
Source: IANS