I saw an astonishing book at Borders today:
If I gathered it correctly, under the rubric of AI, Pearl essentially provides a rigorous analysis of how to reduce a system of weakly held beliefs to an optimal decision. I am not sure I still have the brain cells aligned right to absorb it very quickly, but it seemed to me potentially very important, and all my plausible inference circuits were telling me the man did it right.
Of course it is heavily Bayesian. It’s all about a complex set of relationships between prior beliefs and their consequences.
If my cursory reading is correct, he claims he could systematically reduce a sufficiently formally stated set of beliefs into the most plausible mutually consistent subset, and use them formally to offer likelihood ranges on a decision. Exclamation point.
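The core move, at its simplest, is just Bayes' rule: prior beliefs plus evidence likelihoods yield posterior plausibilities. Here is a minimal toy sketch of that update (this is my illustration, not Pearl's actual machinery, and the hypothesis names and numbers are invented for the example):

```python
# A minimal sketch of Bayesian updating (not Pearl's algorithm):
# combine a prior belief with evidence likelihoods and renormalize.

def posterior(prior, likelihood):
    """Return P(h | data) given prior P(h) and likelihood P(data | h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical toy numbers: two competing beliefs, equally held at first.
prior = {"severe": 0.5, "mild": 0.5}
evidence = {"severe": 0.8, "mild": 0.2}  # P(observed data | hypothesis)

post = posterior(prior, evidence)
print(post)  # the evidence shifts belief toward "severe": 0.8 vs. 0.2
```

Chained over a whole network of interrelated beliefs, with consistency enforced among them, something like this is what lets the formalism attach likelihood ranges to a decision.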
In the end, I left it there. Maybe if it had been $20 instead of $85 I’d have picked it up on the spot. I hope someone with their math neurons fully engaged who’s interested in policy will give it a thorough reading. It’s in a sense quite miscast as an AI book.
I don’t think the perfectly reasoned response to uncertain information matters as much as it should, but I also don’t think the methodology is practical for very large problems like climate policy. It’s the estimation of prior belief that is the problem. I am sure the monkeywrenchers, corporate and anticorporate, will be doing their best to prevent us from coming to sensible conclusions anyway.
I often see my civilized, calm and safe European colleagues (Annan and Gerhauser in particular come to mind) talking about optimal paths and controllable risks. This all stuns me. Of course the optimum policy exists. It appears there are better developed tools for obtaining that optimum than I knew about. Nevertheless, people will not concede their sovereignty to a formula, no matter how cleverly construed, certainly not in all countries and certainly not at all times, on any known precedent. They will cling to their illusions, and some of those illusions will be dangerous, as we ought to have learned from the Easter Islanders.
What good is plausible inference when based on demonstrably inconsistent and yet strongly held views? How can a democracy take account of such an optimization when easily half the population believes things that are impossible?
This is why we will be very lucky to avoid a great global population crash sometime in the relatively near future. It’s not that we can’t see things coming, it’s that we don’t always find ourselves in societies with the capacity to react to the balance of evidence.