Using artificial intelligence for creative tasks tends to make human outcomes more uniform at the population level. Recent preprint research provides evidence that while these tools may improve individual performance, they reduce the diversity of ideas across users. Widespread reliance on the same automated assistants can narrow the range of concepts that emerges in collaborative settings.
Generative artificial intelligence refers to computer programs that can create new text, images, or other media based on a user’s instructions. The most common of these tools rely on large language models. Developers build these models by feeding them billions of sentences from the internet, allowing the software to recognize patterns and predict which words are likely to come next.
Scientists have raised concerns about how this technology will shape human thinking, since many users interact with similar systems trained on overlapping data. Researchers Alwin de Rooij, assistant professor of creativity studies at Tilburg University and associate professor at Avans University of Applied Sciences, and Michael Mose Biskjaer, associate professor of design creativity and innovation at Aarhus University, designed a new study to assess these concerns. They noted that previous research has often focused on how these tools help individuals speed up work or overcome temporary mental blocks.
They wanted to know whether this individual support came at a collective cost. “There is growing concern that the use of generative AI will lead people to similarly creative ideas,” the authors explain. “Although AI can enhance creativity at the individual level, its benefits may come at the expense of creativity at the collective and even societal level.”
The authors sought to answer whether generative AI makes people think the same way. “We sought to address this issue by conducting a systematic review and meta-analysis of 19 empirical studies,” they state. “More specifically, we wanted to examine whether and to what extent the use of generative AI is associated with convergence at the level of creative outputs, such as people’s ideas, designs, and creative writing.”
Meta-analysis is a statistical method that combines the results of multiple independent studies to find common patterns or overall trends. By pooling data from different experiments, scientists can draw more robust conclusions than from a single experiment. The authors searched academic databases for studies published between 2022 and early 2026.
This window begins with the general release of popular chatbots and captures the first wave of empirical research on the topic. The researchers selected 18 eligible papers containing 19 distinct experimental studies, which together yielded 61 separate effect sizes. An effect size is a numerical value that indicates the strength of a particular phenomenon.
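The paper’s exact statistical model is not described here, but the core of any meta-analysis can be sketched with inverse-variance weighting, in which more precise studies receive greater weight in the pooled estimate. The following fixed-effect sketch uses made-up effect sizes and variances purely for illustration:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study is weighted by the reciprocal of its sampling variance,
    so more precise studies pull the pooled estimate harder.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical studies: standardized effects with sampling variances
effects = [0.30, 0.45, 0.20]
variances = [0.02, 0.05, 0.01]
est, se = pooled_effect(effects, variances)
print(round(est, 3), round(se, 3))  # → 0.259 0.077
```

Real meta-analyses (including, presumably, this one) often use random-effects models instead, which add a between-study variance term; the weighting logic is the same.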
To be included in the analysis, each study had to compare people working with generative AI to people working without it. The included studies used several techniques to measure homogenization. Many relied on sophisticated text analysis tools that convert written responses into mathematical coordinates.
This process allows computers to measure semantic distance, essentially calculating how closely different ideas are related to each other. Other studies used human experts to judge how distinct participants’ outputs were from one another. The analysis revealed a statistically significant homogenizing effect associated with the use of artificial intelligence.
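Published studies typically compute semantic distance from high-dimensional embeddings produced by language models. As an illustration only, the same idea can be shown with simple word-count vectors, where semantic distance is one minus cosine similarity (the example “ideas” below are invented):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy responses to a "creative uses for a paperclip" task
idea1 = Counter("use a paperclip as a bookmark".split())
idea2 = Counter("use a paperclip as a zipper pull".split())
idea3 = Counter("melt paperclips into sculpture wire".split())

# More shared vocabulary -> higher similarity -> smaller semantic distance
print(cosine_similarity(idea1, idea2) > cosine_similarity(idea1, idea3))  # → True
```

Embedding-based measures capture overlap in meaning rather than literal word overlap, but the distance arithmetic is the same: a population whose ideas cluster together has a smaller average pairwise distance.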
When people created with these systems, their final products tended to resemble the work of other users more closely. “Our meta-analysis shows that using generative AI can actually make people think the same way,” the authors said. “With the use of AI, ideas, designs, and creative texts tend to become more similar across individuals.”
“This suggests that AI may be contributing to the homogenization of creative thinking at the group level,” they continued. “Importantly, this does not necessarily reflect a failure of human-AI co-creation, but rather may be an inherent feature of how these systems currently support creative work at scale.”
The scientists also assessed whether the type of task affected the degree of uniformity. They categorized their experiments into four groups: divergent thinking, idea generation, writing, and visual art. Divergent thinking tasks are highly open-ended exercises, such as asking someone to list creative uses for a paper clip.
Idea generation tasks provide more specific constraints, such as seeking solutions to improve public transportation. The analysis showed that the homogenization effect was strongest in the idea generation task. Because these exercises require specific solutions to defined problems, users are likely to rely more on predictable suggestions provided by computer algorithms.
The researchers did not find strong statistical evidence for differences among the other three categories, consistent with the idea that more constrained tasks invite greater convergence than open-ended ones. They also checked whether these patterns appear only in highly controlled laboratory settings. The authors compared traditional laboratory experiments with real-world scenarios, including an analysis of essays and visual artwork published before and after the spread of automated writing tools.
When the researchers analyzed these real-world situations, they found a small but significant reduction in the diversity of ideas. “In many ways, this finding is similar to classic fixation effects in the psychological literature, where exposure to examples limits subsequent thinking, but here those effects appear to be amplified by the scale and simultaneity of the use of generative AI models,” the researchers said. “This homogenizing effect was observed not only in controlled laboratory studies, but also in real-world quasi-experiments. This suggests that this is not simply a lab-based phenomenon, but a practical concern that affects concrete creative processes and practices.”
De Rooij and Biskjaer also investigated whether this narrowing of thinking persists after people stop using the software. They isolated a subset of studies that tested participants on new creative tasks after an initial interaction with a generative model. The results from this subset suggest that the homogenizing effect carries over to subsequent activities.
“Our findings also provide preliminary evidence that homogenizing effects may persist beyond the moment of direct use of AI,” the researchers told PsyPost. “In other words, interactions with these generative AI systems can shape how people think and generate ideas even after the interaction ends. This potential ‘fixation’ effect on creative cognition requires further research, and we would like to explore it in more depth.”
These results are in close agreement with another recent study published in the journal PNAS Nexus. Scientists Emily Wenger and Yoed N. Kenett tested how large language models affect human creativity by evaluating 22 commercial chatbots. They recruited 102 human participants to complete a series of verbal creativity tests, including the Alternative Uses Task, and asked the chatbots to complete the exact same tasks.
Wenger and Kenett found that the individual language models performed at or slightly above the average human level on most exercises. Viewed in isolation, a single chatbot’s responses appeared highly original and creative. But when the scientists compared responses across the different models, a clear pattern of similarity emerged.
Across all tasks, the chatbots produced answers that were far more similar to one another than the human participants’ answers were. Both research teams point to similar underlying mechanisms. Because big tech companies train their models on large, overlapping datasets collected from the internet, the programs naturally gravitate toward the statistically most common word associations.
When thousands of people use these tools to generate ideas, the software acts as a semantic anchor, drawing users toward a shared set of typical concepts and reducing overall idea diversity. Wenger and Kenett tried to counteract this by adjusting the chatbots’ internal settings to force more random text generation, but doing so caused the models to produce gibberish.
Readers should avoid interpreting these findings as evidence that humanity has become completely uncreative. De Rooij and Biskjaer point out that a reduction in collective diversity does not equate to a complete loss of individual capacity. “Importantly, our findings do not show that the use of AI reduces creativity,” the researchers stressed.
“Rather, they point to changes in where and how creative diversity occurs and where it is restricted,” the authors said. “Individual output could become more similar across people while improving creative quality. These effects are often subtle in single instances, but can become meaningful when considered at the scale at which generative AI is currently being used.”
The authors note that the current analysis has several limitations. This review primarily focuses on text-based tools and large-scale language models, so the findings may not apply to other types of computer systems. For example, adaptive machine learning programs and tools used for music composition were not well represented in the available data.
This limits how widely the scientific community can apply these conclusions across artistic fields. Additionally, the analyses of long-term persistence and real-world settings relied on relatively small sets of studies, so those specific conclusions are preliminary and subject to revision.
Future research should investigate different forms of human-machine collaboration over long periods of time. “An important next step is to rethink how generative AI systems are designed and used in creative contexts to reduce the effects of homogenization,” the authors note. “This includes exploring alternative workflows, interaction designs, and creative strategies that maintain diversity rather than encouraging premature convergence.”
“Steps in this direction have already been taken by mapping creative strategies for tackling generative AI and machine learning based on analysis of AI art practices,” they added, referring to a recently published article outlining this approach. “We believe these strategies can be applied to other creative areas as well.”
The preprint study “Does Generative AI Make Us Think Alike? A Systematic Review and Meta-Analysis of Homogenization Effects in Human-AI Co-Creation” was authored by Alwin de Rooij and Michael Mose Biskjaer.

