An online survey has revealed that the majority of people can no longer reliably identify AI-generated media as such. The study was first presented at the 45th IEEE Symposium on Security and Privacy in San Francisco.
The study was led by CISPA faculty members Dr. Lea Schönherr and Prof. Dr. Thorsten Holz, with researchers from Ruhr University Bochum, Leibniz University Hanover and TU Berlin also involved. Around 3,000 people from Germany, China and the USA took part.
AI-generated media as a risk factor
Artificial intelligence (AI) is developing rapidly and makes it possible to generate large quantities of images, text and audio files with just a few clicks. This capability comes with considerable risks.
“Artificially generated content can be misused in many ways. We have important elections this year, such as the elections to the EU Parliament or the presidential election in the USA: AI-generated media can be used very easily for political opinion-making. I see this as a major threat to our democracy,” warns Prof. Dr. Thorsten Holz.
This makes it all the more important to work on the automated detection of AI-generated media, a task that CISPA faculty member Dr. Lea Schönherr describes as a race against time. Because AI-generated media is becoming increasingly difficult to recognize with automated methods, it is important to find out to what extent humans can make this assessment. The study was launched for this purpose.
Text, audio and image: people don't recognize the difference
The cross-media and cross-national study produced a striking result: it is already very difficult, although not impossible, for people to distinguish AI-generated media from human-made media. This applies to all three types of media studied: text, audio and images. In all three categories, participants predominantly classified AI-generated media as created by humans.
“We were surprised that there are very few factors that can be used to explain whether humans are better at recognizing AI-generated media or not. Even across different age groups and with factors such as educational background, political attitudes or media literacy, the differences are not very significant,” explains Holz.
For the online survey, participants were randomly assigned to one of three media categories: text, audio or image. The study ran from June to September 2022, and respondents were shown a mix of 50% real and 50% AI-generated media. The researchers generated the AI text and audio files themselves and took the images from an existing study. The images were photorealistic portraits, the texts were news items and the audio files were excerpts from literature. For further categorization, socio-biographical data, knowledge of AI-generated media and factors such as media literacy, holistic thinking, general trust, cognitive reflection and political orientation were collected. After data cleaning, 2,609 data sets remained for analysis.
Further research to follow
The results of the study are of great value for cybersecurity research. Schönherr emphasizes that AI could be used in the future to personalize phishing emails and other online fraud attempts so that victims no longer recognize the traps as such. The results of the study should help in developing defense mechanisms for such scenarios.
The project also paves the way for further research. A laboratory study is already being planned in which participants will be asked to explain exactly how they recognize whether something is AI-generated.