Naturally Biased: A Look at How Our Brains Learn Confidence
Based on a research paper published on Nature.com
The sources explore the concept of confidence bias through the lens of a deep neural network model. The model, designed to mimic human decision-making, was trained on visual classification tasks using image datasets like MNIST and CIFAR-10. Surprisingly, despite being optimized for accuracy, the model exhibited common human confidence biases, suggesting these biases stem from a rational adaptation to the statistics of our experiences.
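The core setup can be pictured as a network with two readouts from a shared representation: one for the decision and one for confidence. The following is a minimal hypothetical sketch of that two-readout structure, not the paper's actual architecture (which is a deeper convolutional network trained on the image datasets above); all layer sizes and weights here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared hidden layer feeding two heads: a classification readout and a
# scalar confidence readout. Sizes are placeholders (784 = flattened MNIST).
n_in, n_hidden, n_classes = 784, 64, 10
W1 = rng.normal(0, 0.05, (n_in, n_hidden))
W_dec = rng.normal(0, 0.05, (n_hidden, n_classes))   # decision head
w_conf = rng.normal(0, 0.05, n_hidden)               # confidence head

def forward(x):
    h = np.maximum(0, x @ W1)                        # ReLU hidden layer
    logits = h @ W_dec
    p = np.exp(logits - logits.max())
    p /= p.sum()                                     # softmax over classes
    confidence = 1 / (1 + np.exp(-(h @ w_conf)))     # sigmoid confidence
    return int(p.argmax()), confidence

x = rng.normal(0, 1, n_in)                           # stand-in "image"
choice, conf = forward(x)
```

In the paper, both heads are trained end to end, which is what lets the confidence readout absorb the statistics of the training images.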
One bias the model replicated was the positive evidence bias, where confidence increases with the absolute magnitude of evidence favoring the chosen option, even when the signal-to-noise ratio (and therefore accuracy) is held constant. This was consistently observed across datasets and tasks, including a reinforcement learning scenario where the model learned to opt out of decisions with low confidence. The model also mirrored a more nuanced bias where confidence becomes less accurate at predicting decision accuracy as decision accuracy increases, a phenomenon observed in human studies.
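The positive evidence pattern can be illustrated with a toy simulation (hypothetical stimuli, not the paper's): two conditions share the same evidence difference and noise level, so accuracy is matched, but a confidence readout that tracks the magnitude of the winning evidence still rises with overall signal strength.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(signal, distractor, noise_sd=0.5, n=100_000):
    """Simulate n 2-choice trials; confidence tracks positive evidence."""
    e_target = signal + rng.normal(0, noise_sd, n)      # evidence for target
    e_other = distractor + rng.normal(0, noise_sd, n)   # evidence for foil
    accuracy = (e_target > e_other).mean()
    conf_pe = e_target.mean()   # confidence ~ magnitude of positive evidence
    return accuracy, conf_pe

# Same evidence difference (0.5) and noise in both conditions,
# but the "high positive evidence" condition has larger magnitudes.
acc_low, conf_low = trial(signal=1.0, distractor=0.5)
acc_high, conf_high = trial(signal=2.0, distractor=1.5)
```

Accuracy comes out nearly identical across the two conditions while the positive-evidence confidence readout is clearly higher in the high-magnitude condition, which is the signature of the bias.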
The key insight from the sources is that these seemingly irrational biases can be explained by examining the statistical structure of the data the model learned from. By analyzing the model’s training data, the researchers found that a simple ‘balance-of-evidence’ model, often used to explain confidence, doesn’t hold true. Instead, the model’s confidence was best explained by an ‘ideal observer’ model that considers the complex, often asymmetric, distribution of sensory information in the training data. This suggests our brains learn to optimize confidence based on the statistical regularities of our experiences, even if those don’t always align with simple decision rules.
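Why a balance-of-evidence rule can fail is easy to see with a small numerical example (a hypothetical illustration, not the paper's analysis): if the evidence distributions for the two options are asymmetric, here unequal Gaussian variances, the ideal-observer posterior probability of being correct is no longer a function of the evidence difference alone.

```python
import numpy as np

def npdf(x, mu, sd):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def ideal_confidence(e1, e2, mu=1.0, sd_signal=1.0, sd_noise=0.5):
    """Posterior that option 1 is the signal, given asymmetric distributions."""
    like1 = npdf(e1, mu, sd_signal) * npdf(e2, 0.0, sd_noise)
    like2 = npdf(e2, mu, sd_signal) * npdf(e1, 0.0, sd_noise)
    return like1 / (like1 + like2)

# Identical balance of evidence (e1 - e2 = 1.0), different magnitudes:
c_small = ideal_confidence(1.0, 0.0)
c_large = ideal_confidence(3.0, 2.0)
```

Because the two confidences differ despite an identical evidence difference, a pure balance-of-evidence model cannot reproduce the ideal observer here; confidence has to reflect the full (asymmetric) distribution of the evidence.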
Furthermore, the sources found that a key factor contributing to the model’s biases was the presence of variable contrast in the training images. When the model was trained on images with fixed contrast, the biases either disappeared or reversed. This highlights the importance of considering the specific statistical properties of the learning environment when trying to understand confidence biases.
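The two training regimes can be sketched as a simple preprocessing step (a hypothetical implementation, not the authors' code): each image is scaled by either a randomly drawn contrast factor or a single fixed one before being fed to the model.

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_contrast(images, variable=True):
    """Scale a batch of images by per-image random or fixed contrast."""
    n = images.shape[0]
    if variable:
        c = rng.uniform(0.1, 1.0, size=(n, 1, 1))  # random contrast per image
    else:
        c = np.full((n, 1, 1), 0.5)                # one fixed contrast level
    return images * c

batch = rng.random((8, 28, 28))                    # stand-in image batch
var_batch = apply_contrast(batch, variable=True)
fix_batch = apply_contrast(batch, variable=False)
```

Under the variable regime the same underlying image can arrive at very different overall signal magnitudes, which is exactly the statistical structure the paper identifies as driving the learned biases.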
The sources go beyond explaining biases; they also explore the neural underpinnings of confidence. By analyzing the model’s internal representations, the researchers found that decisions and confidence, despite showing behavioral dissociations, are ultimately based on a common internal decision variable. This finding is consistent with neurophysiological studies showing similar patterns in brain regions associated with decision-making.
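The common-decision-variable idea can be stated in two lines of hypothetical code (an illustration of the concept, not the model's actual readout): a single scalar drives both outputs, with the choice given by its sign and confidence by its magnitude.

```python
import numpy as np

def decide(dv):
    """Read both choice and confidence from one decision variable."""
    choice = int(dv > 0)                 # decision: the sign of dv
    conf = 1 / (1 + np.exp(-abs(dv)))    # confidence: the magnitude of dv
    return choice, conf

choice, conf = decide(1.7)
```

Behavioral dissociations can still emerge from such a system when the two readouts are corrupted or weighted differently downstream, which is why dissociations alone do not rule out a shared variable.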
Interestingly, the model was also able to replicate some neural dissociations observed in human studies. For instance, when the researchers simulated lesions to different layers of the network, mimicking brain damage to specific regions, they observed effects on confidence that mirrored those seen in humans. This suggests that even neural dissociations, often interpreted as evidence against a common decision variable, can arise from a system that fundamentally uses the same information for both decisions and confidence.
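A lesion simulation of this kind can be sketched as silencing a fraction of units in one hidden layer and re-reading the confidence output (a hypothetical stand-in for the paper's layer-specific lesions; sizes and weights here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

W1 = rng.normal(0, 0.1, (100, 50))   # input-to-hidden weights (placeholder)
w_conf = rng.normal(0, 0.1, 50)      # confidence readout (placeholder)

def confidence(x, lesion_frac=0.0):
    """Confidence readout, optionally with a fraction of units silenced."""
    h = np.maximum(0, x @ W1)
    if lesion_frac > 0:
        k = int(lesion_frac * h.size)
        h[rng.choice(h.size, size=k, replace=False)] = 0.0  # lesioned units
    return 1 / (1 + np.exp(-(h @ w_conf)))

x = rng.normal(0, 1, 100)
c_intact = confidence(x)
c_lesioned = confidence(x, lesion_frac=0.5)
```

Comparing intact and lesioned confidence across layers is what lets the researchers ask whether layer-specific damage selectively distorts confidence while sparing the decision, as observed in patients.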
The sources present a compelling argument for a ‘naturally biased’ view of confidence, where biases arise not from flawed heuristics but from a rational adaptation to the statistics of our sensory experiences. This perspective offers a fresh way to understand the complex relationship between our perceptions, decisions, and the confidence we have in them.