Beliefs vs AI
The oft-quoted comment from Thinking, Fast and Slow, referring to the financial services community, reads:
We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.
It’s a great quote, but it’s not clear to me how the data in the book supports it. It also has very little to do with the argument the book actually puts forward, which is that we have two modes of thinking (i.e. dual-process theory): a fast, intuitive one and a slow, deliberate one. Dual-process theory is nicely abstracted in what I would now consider a classic reinforcement learning paper, Thinking Fast and Slow with Deep Learning and Tree Search, which introduces expert iteration, essentially the training scheme that AlphaGo Zero later made famous, and uses it to build hex-playing agents.
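To make the expert iteration idea concrete, here is a minimal sketch of its training loop in Python. The interface is a placeholder I made up for illustration (new_game, run_search, and apprentice.fit are not the paper's actual code): the slow "expert" is tree search guided by the current network, and the fast "apprentice" is the network trained to imitate the search's improved decisions.

```python
# Minimal sketch of the expert iteration (ExIt) loop.
# All helper names are hypothetical placeholders, not the paper's code.

def expert_iteration(apprentice, new_game, run_search, n_iterations, n_games):
    """Alternate a slow 'expert' (tree search guided by the apprentice)
    with a fast 'apprentice' (a policy network trained by imitation)."""
    for _ in range(n_iterations):
        dataset = []
        for _ in range(n_games):
            state = new_game()
            while not state.is_terminal():
                # Slow thinking: search improves on the raw network policy.
                improved_policy = run_search(state, prior=apprentice)
                dataset.append((state, improved_policy))
                state = state.play(improved_policy.sample())
        # Fast thinking: distil the expert's search-improved choices
        # back into the network via supervised / imitation learning.
        apprentice.fit(dataset)
    return apprentice
```

The appeal of the scheme is the division of labour: search supplies high-quality targets, the network generalizes them, and each makes the other stronger across iterations.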
The hex-playing agents are ultra-rational within their little hex world, but rationality of this kind seems to me almost impossible when data is limited and the environment is ill-defined (i.e. the real world). You have to have some irrational, made-up beliefs to anchor your decisions on, a narrative of some kind that helps you make sense of your sensory input. I can easily imagine a situation where such beliefs are reinforced through communal language and ritualistic affirmations as part of some positive feedback loop. The interesting question is what it takes to create such a loop, and how to go about replacing or repairing broken ones.