Statistics Colloquium Series: Imprecise probability in statistical inference: why it's needed, where it comes from, and how it's beneficial.
About this Event
The Statistics Department hosts a weekly colloquium series in which noted researchers and scholars in statistics present work from academia, industry, and government agencies.
Abstract: My basic claim is that the likelihood (data + model) alone can't support reliable probabilistic inference. I'll justify this claim, first, with Fisher's help and, second, via the false confidence theorem. So, to achieve a sort of middle-road -- Basu's via media -- we need to relax either the "reliability" or the "probabilistic" parts. With today's performance-driven methods focus, reliability is non-negotiable, so the only option is to relax probability, i.e., to allow for the right amount of imprecision. This is the *why*. I'll explain the *where* by first eliminating the surprise: imprecision is already a part of what we do day-to-day. Then I'll draw connections to a so-called probability-to-possibility transform, describe my possibilistic inferential model (IM) framework, and show how this corrects Fisher's fiducial argument. An important point is that imprecision isn't a sacrifice -- it's beneficial in many ways. *How* it's beneficial includes giving users the ability to incorporate any available incomplete or partial prior information into the possibilistic IM while maintaining the aforementioned reliability properties. Examples will be used to illustrate all of these points, and I'll also highlight some open problems.
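For readers unfamiliar with the probability-to-possibility transform mentioned in the abstract, here is a minimal sketch of one common version: for a density p, set pi(x) = P(p(X) <= p(x)), the probability of the region where the density is no larger than at x. The normal-distribution case below is a hypothetical illustration of that formula, not code from the talk.

```python
from math import erf, sqrt

def std_normal_cdf(z: float) -> float:
    """CDF of the standard normal distribution via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def possibility_contour_normal(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Probability-to-possibility transform pi(x) = P(p(X) <= p(x))
    for X ~ N(mu, sigma^2).

    The normal density decreases in |x - mu|, so the region
    {y : p(y) <= p(x)} is {y : |y - mu| >= |x - mu|}, giving
    pi(x) = 2 * (1 - Phi(|x - mu| / sigma)).
    """
    z = abs(x - mu) / sigma
    return 2.0 * (1.0 - std_normal_cdf(z))
```

The resulting contour equals 1 at the mode and decays toward 0 in the tails, so it is consonant: every level set {x : pi(x) >= alpha} is a central interval, which is the kind of plausibility contour a possibilistic inferential model works with.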