
Study Shows People Trust ChatGPT Even When It’s Wrong


A new study from the University of Pennsylvania reveals that people overwhelmingly trust AI chatbots even when they provide incorrect information. In experiments, participants followed ChatGPT's wrong answers nearly 80 percent of the time, demonstrating what researchers call "cognitive surrender": users overriding their own judgment in favor of AI suggestions. The findings point to a growing dependence on AI that may erode critical thinking skills over time. Researchers warn that as AI becomes more integrated into daily life, people may gradually lose the mental habits required to question and verify information.

Read the full story on Futurism →