New paper out for discussion, this one in E-conomics, entitled "Idealizations of Uncertainty, and Lessons from Artificial Intelligence". In it I'm trying to make a point about how the modelling of human uncertainty via probability theory isn't really descriptive science, and how the history of AI teaches us what to expect from such models.

For engineering AI systems, whether probability theory is descriptive of humans doesn't matter: who cares if it isn't how people think, as long as it does something useful. Those who use real-world AI systems have learned their limitations, relegating the idea of the "expert system" to that of the "decision support system", and realizing you gotta put a person between the AI and the real decision.

But for economic modelling of the human agents who make decisions in the economy, it's another matter, and we need to be aware of the brittleness of old AI, the intractability of scaling up knowledge bases, and the temptation of wishful mnemonics: words that wishfully call a computational construct by the name of a human characteristic, while the real similarity hasn't been established scientifically.

Uncertainty, when modelled with probabilities, is certainly a wishful mnemonic. Uncertainty isn't a phenomenon in the world, it is a phenomenon in our minds: the world isn't uncertain, we are uncertain about the world. The fact is we don't have evidence that humans reason with probabilities (in fact, much of the evidence is to the contrary). The reality is that we are as unable to build comprehensive probabilistic models of human uncertainty as we are to build comprehensive logical models of human expertise, as we discovered 30-40 years ago. Double ditto for models of economics, because economics is all about human actors making decisions under uncertainty.

Many economists want a world of crisp, well-informed, rational decision makers, partly because it leads to nice notions of equilibria, which connect nicely to models of free-market economics "optimizing value".
But that world is not our world. We need to start seeing the real behaviour of people, with all their complexities and emotions, as adaptive and filled with interesting emergent behaviours, but not optimized. This may mean that economics will need to deal less with "value", and more with human values.

The great statistician G. E. P. Box said:
"All models are wrong, some models are useful."
I think we really need to start understanding that our models of human decision making under uncertainty are wrong, and to try to understand where they are and aren't useful. I hope this paper helps with that understanding.

And, BTW, Ex Machina rocks.