Under the Bayesian setting, probabilities represent our beliefs about the state of the world, which we update incrementally after each experiment. This naturally leads to sequential processes.
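As a minimal sketch of what this looks like in practice, consider a coin-flip (Bernoulli) experiment with a conjugate Beta prior; the sequence of flips and the uniform Beta(1, 1) starting prior are illustrative assumptions. After each observation, the posterior becomes the prior for the next experiment:

```python
# Sequential Bayesian updating with a Beta-Bernoulli conjugate pair.
# Beta(1, 1) is a uniform prior over the coin's unknown bias.
alpha, beta = 1.0, 1.0

for outcome in [1, 0, 1, 1, 0, 1]:  # hypothetical sequence of flips (1 = heads)
    alpha += outcome        # one more observed head
    beta += 1 - outcome     # one more observed tail
    posterior_mean = alpha / (alpha + beta)
    print(f"after flip {outcome}: posterior mean of bias = {posterior_mean:.3f}")
```

Because the Beta posterior after each flip is again a Beta distribution, the update reduces to incrementing two counts; that is what makes the sequential view so natural here.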
How to choose priors?
If the Bayesian framework is accepted, choosing the prior well becomes an important step: it is how domain knowledge is captured. The pitfall to avoid is inadvertently encoding information you don't actually have. The maximum entropy principle is aimed at avoiding exactly that. Chapter eleven of E.T. Jaynes' book "Probability Theory: The Logic of Science" covers the maximum entropy principle for choosing priors and gives it an axiomatic treatment. See my summary slide.
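To make the principle concrete, here is a sketch of Jaynes' classic Brandeis dice problem: among all distributions over the faces 1..6 with a given mean, the maximum entropy distribution has the exponential form p_i ∝ exp(λ·i), so only the multiplier λ needs to be solved for. The target mean of 4.5 is the problem's illustrative constraint, the only thing we claim to know besides normalization:

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def mean_given_lam(lam):
    """Mean of the exponential-family distribution p_i ∝ exp(lam * i)."""
    w = np.exp(lam * faces)
    p = w / w.sum()
    return p @ faces

target_mean = 4.5  # the single constraint in Jaynes' dice example
lam = brentq(lambda l: mean_given_lam(l) - target_mean, -5.0, 5.0)

w = np.exp(lam * faces)
p = w / w.sum()
print(np.round(p, 4))  # approx. [0.0544 0.0788 0.1142 0.1654 0.2398 0.3475]
```

Note that the result is not uniform: the constraint tilts the distribution toward the higher faces, but no further structure is imposed beyond what the mean forces. That is the sense in which maximum entropy avoids capturing things you don't know.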
Additional resources include an MIT lecture (roughly from the 8th minute to the 35th): "It's the humble way of assigning prior probabilities." It's a fun lecture! If you like it, see the full course information here.
See a presentation on different ways to choose priors that covers, among other things, the principle of maximum entropy under constraints. See another presentation here.