Humans Outdo AI in Divergent Thinking

Source: Nature, September 14, 2023

Curated on April 24, 2024

In a recent comparative study, human creativity was pitted against artificial intelligence chatbots, including ChatGPT 3.5, ChatGPT 4, and Copy.Ai, using the Alternate Uses Task (AUT). In a preregistered experiment, native English speakers were asked to come up with original and creative uses for common objects. On the AI side, each chatbot generated responses to the same four object prompts across multiple sessions, with each session standing in for one human participant.

The procedures differed in a few respects. Humans participated once, providing free-form answers with an emphasis on quality, while the AIs completed multiple sessions constrained in idea quantity and response length. Human responses typically ran one to three words, such as 'cat playhouse' when prompted with a box, and the AI responses, after some adjustment of the prompt guidelines, followed a similar pattern. Piloting also showed that the AI systems tended to repeat answers across sessions, so the number of sessions was capped to preserve diversity in the responses.

To quantify originality, the researchers operationalized each response's 'semantic distance' from the prompt object and computed it on the SemDis platform; subjective creativity ratings from panels of human judges further augmented the evaluation. The instructions for both humans and AIs aimed to foster creative quality rather than quantity. The results showed that in this creative, divergent-thinking task, humans maintained a distinctive edge over their AI counterparts.
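The semantic-distance idea can be sketched as follows: a response is scored as more original the further its meaning sits from the prompt object, typically measured as one minus the cosine similarity of their word embeddings. The tiny embedding vectors below are invented purely for illustration; the SemDis platform used in the study relies on real distributional semantic models rather than these hand-picked numbers.

```python
import math

# Made-up toy embeddings for illustration only; SemDis uses real
# distributional word embeddings trained on large corpora.
TOY_EMBEDDINGS = {
    "box":           [0.9, 0.1, 0.0, 0.2],
    "container":     [0.8, 0.2, 0.1, 0.1],  # semantically close to "box"
    "cat playhouse": [0.2, 0.7, 0.5, 0.4],  # semantically further away
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_distance(prompt, response):
    """Originality proxy: 1 - cosine similarity (higher = more original)."""
    return 1.0 - cosine_similarity(TOY_EMBEDDINGS[prompt],
                                   TOY_EMBEDDINGS[response])

# A mundane use scores a small distance; a creative use scores a larger one.
print(semantic_distance("box", "container"))
print(semantic_distance("box", "cat playhouse"))
```

Under this scoring, 'cat playhouse' receives a higher originality score for the prompt 'box' than a near-synonym like 'container', which is the intuition behind using semantic distance as an automated proxy for creativity.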
