Recent research provides evidence that artificial intelligence can generate customized images designed to evoke specific emotions in humans. The findings suggest that these computer-generated photographs work as well as traditional photographs, while offering the added benefit of being adaptable to different cultures, ages, and genders. The study was published in the journal Advances in Methods and Practices in Psychological Science.
Generative AI refers to computer systems that can create new content, such as text or images, from simple written instructions. Scientists often use curated collections of photographs to evoke emotions in research participants, a technique known as emotion induction.
By showing participants specific images, researchers can reliably evoke emotions such as fear, joy, and disgust in a laboratory setting. This allows scientists to study how emotions influence human behavior and decision-making.
However, older image collections are starting to show their age. Traditional photos are often low resolution or feature outdated fashion styles, which can distract participants from the intended emotion. Existing image databases also tend to lack cultural diversity: most feature Western backgrounds and people, which may limit their effectiveness for individuals from other parts of the world.
A large international team of 46 scientists joined forces to solve these problems. They wanted to see if generative AI could create a more flexible and up-to-date collection of emotional images.
“Two reasons actually inspired me to explore this topic. First, in affective science, we have been working with image sets for many years that are valuable, but also come with obvious limitations, such as diversity, flexibility, cultural suitability, and the ability to update or expand image sets to suit new research questions,” said corresponding author Maciej Behnke, associate professor of psychology at Adam Mickiewicz University.
“Secondly, this project was also born out of a personal step in my academic career. After securing tenure in Poland, I decided to take an intellectual step back and start pursuing a bachelor’s degree in computer science because I wanted to understand AI more deeply, rather than just following it from a distance. Over time, it became natural to combine these two paths: my background in affective science and my growing interest in artificial intelligence.”
The scientists used ChatGPT-4o to produce detailed descriptions of existing emotional photographs and fed those descriptions into image-generation tools such as Midjourney and Freepik to create new photos. They generated 847 images designed to evoke 12 emotional states: amusement, awe, anger, attachment, craving, disgust, excitement, fear, joy, neutrality, nurturing love, and sadness.
The research team didn’t rely solely on computer programs. They employed a collaborative process in which local cultural experts reviewed images and asked the AI to make specific adjustments.
This allowed the team to adapt the base images to different demographic groups. They created versions of each photo reflecting six broad cultural regions: African, Arab, Asian, Indian, Latin American, and Western contexts.
The team also wanted different types of people to appear in the images, so the scientists generated matched variants that change the gender and age of the people in a photo without changing the overall emotional scene.
To test how well the photos worked, the researchers recruited 2,470 participants from 58 countries, divided across six separate experiments. Participants viewed both traditional photos and newly generated ones, with each image displayed on screen for four seconds.
After viewing each picture, participants rated how strongly they felt different emotions on a scale of 1 to 7. They also rated whether the image made them feel positive or negative, and whether it left them feeling calm or aroused.
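The study's data are not reproduced here, but as a rough sketch, per-image ratings like these can be averaged by emotion to estimate how strongly an image evokes each state. The image IDs, scores, and helper function below are hypothetical illustrations, not the study's actual data or code:

```python
from statistics import mean

# Hypothetical ratings: (image_id, emotion, score on the 1-7 scale).
ratings = [
    ("img_awe_01", "awe", 6), ("img_awe_01", "awe", 5),
    ("img_awe_01", "fear", 2),
    ("img_fear_03", "fear", 6), ("img_fear_03", "fear", 7),
]

def mean_intensity(ratings, image_id, emotion):
    """Average how strongly viewers reported one emotion for one image."""
    scores = [r for img, emo, r in ratings if img == image_id and emo == emotion]
    return mean(scores) if scores else None

print(mean_intensity(ratings, "img_awe_01", "awe"))  # average of 6 and 5 -> 5.5
```

Comparing such per-image averages across image sets (traditional versus AI-generated) is the basic logic behind the study's effectiveness comparisons.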
The scientists found that the computer-generated images were just as effective at evoking emotional responses as traditional photographs. For positive emotions such as amusement and awe, the AI photos often evoked even stronger reactions.
When participants viewed images tailored to their own cultural background, they reported slightly stronger emotional reactions than when they viewed culturally incongruent images. This suggests that matching visual materials to a person's culture may improve their effectiveness.
“Our findings showed that culturally conditioned images produced slightly stronger emotional responses, supporting the idea that people respond more strongly to stimuli that better reflect their own context,” Behnke told PsyPost. “This also has a broader message: We need to stop the habit of presenting the same white-centric, Western-centric images in psychological research everywhere.”
The researchers also found that changing the gender or age of the people in the images did not reduce the emotional impact. This provides evidence that scientists can safely modify demographic details to suit specific research needs.
The scientists also calculated the smallest difference in emotional intensity that a person can actually notice. They found that even when participants rated two images as evoking similar emotional responses, smaller differences between the images could still be detected statistically.
Computer-generated images were slightly less effective at evoking negative emotions such as sadness and anger. The researchers noted that safety filters built into AI programs often prevent the creation of mildly graphic or offensive content.
“I think the key message is that AI can help make science better,” Behnke said. “We found that AI-generated images are sufficient to evoke emotional responses at a level comparable to traditional research image sets, and can also be adapted to different cultural contexts without losing their impact. This is important not only for researchers but also for the public, as it shows that AI-generated images are already powerful enough to influence people’s emotions.”
One potential misconception about this study is that creating emotional images is a fully automatic process. The researchers note that human oversight remains essential to ensure the photographs are psychologically useful and ethically acceptable.
“AI can help generate and refine stimuli, but human creativity is needed to come up with meaningful ideas, and human expertise is needed to determine whether an image is psychologically useful, culturally appropriate, or ethically acceptable,” Behnke said. “Our research actually supports a human-involved model rather than a human-replacement model.”
This study also has some limitations. All experiments were conducted in English, and the broad cultural categories used by the researchers do not fully capture the diversity within any particular world region. The AI program also struggled with certain visual details. Some of the generated faces looked unrealistically attractive, and the software sometimes introduced anatomical errors such as poorly rendered hands.
AI technology advances so rapidly that some models change on a monthly basis. This rapid evolution means scientists must continually update their image collections to keep pace with changing standards.
In the future, the researchers hope to explore how AI can generate dynamic videos rather than just still photos. They also plan to investigate ways to personalize emotional images for individual participants to make scientific research even more accurate.
“One of our long-term goals is to understand more broadly how AI can contribute to affective science,” Behnke explained. “In related work, we are already considering whether AI models can predict viewers’ emotional responses to images. This raises a new set of questions about how AI can support emotion research beyond stimulus creation.”
“More broadly, I think AI could help move the field toward personalization. Currently, affective science relies primarily on uniform stimuli, even though what actually makes us feel happy, sad, fearful, or angry varies greatly from person to person. In the future, AI could allow us to personalize stimuli, thereby making emotion elicitation more reliable and meaningful for each participant.”
“I also want to emphasize that this research was only possible because so many people were willing to donate their time, expertise and energy,” Behnke added. “For me, that’s part of a larger lesson: I believe that large-scale team science is the future of affective science. If we want to understand emotions across people and cultures, we need to stop thinking locally and start building more global, collaborative research efforts.”
The study, “Generating emotional images using artificial intelligence: methodology and initial library,” was authored by Maciej Behnke and an international team of 45 co-authors, including Michał Klichowski, Stanisław Saganowski, Yuki Yamada, James J. Gross, and Nicholas A. Coles.

