“No idea is really bad, unless we are uncritical. What is really bad is to have no idea at all”

I’ve just finished the first volume of George Polya’s books on how to solve problems, *Mathematics and Plausible Reasoning, Volume 1: Induction and Analogy in Mathematics*. I picked up these books, published in 1954, after Jaynes pointed them out as foundational in his book *Probability Theory*. Jaynes has a relatively short, straightforward introduction to these concepts, but reading Polya is a delight because these books were aimed at helping math teachers guide their students into understanding how to do mathematics. As Polya points out through mathematical examples and problem sets, we solve problems by coming up with reasonable conjectures about the answer, exploring the consequences of the conjecture, collecting supportive or contradictory evidence, and sometimes coming to a certainty, a proof of the conjecture.
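Polya’s cycle of conjecture and evidence is, in essence, what Jaynes formalized as Bayesian updating. As a rough illustration (all the numbers here are invented), a repeated application of Bayes’ rule shows how each verified consequence of a conjecture raises its plausibility without ever quite reaching proof:

```python
# Toy illustration of Polya's "plausible reasoning" as Bayesian updating.
# All probabilities are invented for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior plausibility of a conjecture after seeing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

plausibility = 0.5  # start undecided about the conjecture
# Each verified consequence is more likely if the conjecture is true (0.9)
# than if it is false (0.5), so plausibility climbs with every check.
for _ in range(3):
    plausibility = update(plausibility, 0.9, 0.5)

print(round(plausibility, 3))  # → 0.854
```

Supportive evidence moves plausibility toward, but never to, certainty; a single contradictory consequence (probability zero if the conjecture were true) would drop it to zero at once, which is the asymmetry Polya keeps returning to.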

I’m still, in parallel, working my way through the Jaynes book. This was all triggered by my editing of the On Deciding . . . Better manuscript and realizing I needed to understand the basis of probabilistic reasoning. While Polya was interested in teaching mathematics and Jaynes was interested in correcting some of the errors of reasoning that had been introduced into the practice of statistics, I’m interested in how the brain performs its remarkable feats of inference, taking raw sensory input and creating an internal model that reflects the real physical world and the semantic world that we uniquely occupy as humans.

I’ll admit I’m in no hurry to complete these digressions into probability and plausible reasoning, but I do want to get on to another area that I don’t feel I fully grasp: the concepts of cybernetics and system control that were being expressed during the same period in which Jaynes and Polya were working. Sadly, I think the lessons they learned were forgotten during periods of great advances in technology and biology, but they have emerged again as relevant now that our technology has revealed to us that reductionism reaches explanatory limits when we deal with complexity and emergent phenomena.

These issues are at the root of my long interest in deciding better. Decisions would be easy if it were not for the uncertainty we must deal with when the future is not predictable. Decisions would be easy if all values really were denominated in dollars. But our real world is complex and unpredictable. And our values are ill-defined and full of conflicting principles. What is truly remarkable to me is that the brain deals with this uncertainty seamlessly and, for the most part, chooses appropriate action that we understand only on reflection.
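If all values really were denominated in dollars and every probability were known, choosing would collapse into a one-line expected-value calculation. A toy sketch (the options and numbers are invented) of that idealized, easy case:

```python
# Toy sketch: when outcomes have known probabilities and dollar values,
# deciding reduces to picking the option with the highest expected value.
# All options and numbers are invented for illustration.

options = {
    "safe":  [(1.0, 100)],                # list of (probability, dollar payoff)
    "risky": [(0.5, 300), (0.5, -50)],
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

best = max(options, key=lambda name: expected_value(options[name]))
print(best, expected_value(options[best]))  # → risky 125.0
```

The interesting part of real decisions is everything this sketch assumes away: the probabilities are not known, the payoffs are not commensurable in dollars, and the options themselves must be discovered.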