Turns out, if you ask an AI to play an expert, it gets less reliable
A new study from the University of California finds that asking AI models to “act like an expert” can actually reduce factual accuracy. Personas make responses sound more polished and rule‑abiding, but they push models into instruction‑following mode at the expense of knowledge retrieval. The researchers developed PRISM, a system that generates answers in both persona and default modes and selects the better one for each query. Early results show PRISM improves overall accuracy and could reshape how users prompt AI in the future.
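The dual-mode idea is easy to picture in code. Below is a minimal sketch, not the paper's actual implementation: every function name and the scoring rule are illustrative assumptions. The real PRISM presumably uses a learned or calibrated scorer rather than the toy heuristic here.

```python
# Hypothetical sketch of a PRISM-style router: answer a query in both
# "default" and "persona" modes, score each candidate, keep the better one.
# The model calls and the score() heuristic are stand-ins, not the real system.

def answer_default(query: str) -> str:
    # Stub standing in for a plain, persona-free model call.
    return f"Default-mode answer to: {query}"

def answer_with_persona(query: str, persona: str) -> str:
    # Stub standing in for a persona-prompted model call.
    return f"As a {persona}, here is an answer to: {query}"

def score(answer: str) -> float:
    # Placeholder quality score. A real system might use a verifier model
    # or a calibrated confidence estimate instead of answer length.
    return float(len(answer))

def prism_route(query: str, persona: str = "expert") -> str:
    """Return whichever mode's answer scores higher for this query."""
    candidates = [answer_default(query), answer_with_persona(query, persona)]
    return max(candidates, key=score)

print(prism_route("What year did the Apollo 11 mission land on the Moon?"))
```

The key design point the article describes is per-query selection: neither mode wins globally, so the router decides case by case rather than always trusting (or always avoiding) the persona.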