Soft Information, Hard Decisions: AI Advising
Mar 1, 2026
While large language models (LLMs) perform well on well-defined tasks, crafting
effective prompts is challenging when tasks depend on users’ soft traits and latent
preferences. We formalize this friction by introducing preference uncertainty—capturing
soft information—into a cheap talk framework (Crawford and Sobel, 1982) and model soft
information communication with AI as the investor’s optimal stopping problem with
Brownian information flow. We propose a novel empirical methodology to test theoretical
predictions via LLM simulations. Compared with human advisors, LLMs are not
subject to misaligned incentives, but soft information communication is inefficient due
to losses from digitization and LLMs’ limited memory. An investor generally prefers
LLMs trained to be more “opinionated” than her own prior, except when she is most
confused, in which case she prefers an aligned and equally confused LLM. Our empirical analysis
simulates investor profiles based on the Survey of Consumer Finances and conducts
role-structured LLM advising experiments, benchmarked against standard portfolio
questionnaires, to test model predictions.