17 April 2012
11am - 12 noon
Venue: ECE Briefing Room 257
Contact email: firstname.lastname@example.org
Department of Statistics seminar by Associate Professor Ken Rice, University of Washington.
Statistical testing has a long history of controversy; the Fisher and Neyman-Pearson approaches have fundamental differences, and neither agrees with standard Bayesian procedures. In this talk, we set out an approach to testing that dissipates some of this controversy. Using decision theory, we develop tests as trade-offs, where the user balances the potential inaccuracy of a point estimate against the 'embarrassment' of making no scientific conclusion at all.
The resulting Bayesian tests are simple, and their repeated-sampling properties can be determined straightforwardly. The same motivation also provides straightforward interpretations of two-sided p-values, calibrating them directly through scientifically relevant quantities rather than via statistical evaluation of Type I error rates. Time permitting, extensions to set-valued decisions, model-robust inference and shrinkage estimates may also be considered.
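As a rough illustration of the trade-off described in the abstract, the sketch below works through one plausible concrete instance: a conjugate Normal model with known variance, quadratic loss for the inaccuracy of a reported point estimate, and an 'embarrassment' loss proportional to theta^2 for making no conclusion. The model, both loss functions, and the constant k are illustrative assumptions for this sketch, not necessarily the talk's actual formulation.

```python
import numpy as np

def trade_off_test(y, sigma=1.0, prior_var=100.0, k=0.25):
    """Decision-theoretic two-sided test: a sketch under assumed losses.

    Assumed model: theta ~ N(0, prior_var), y_i ~ N(theta, sigma^2), sigma known.
    Assumed losses (illustrative only):
      - report estimate d:  (d - theta)^2          inaccuracy of the point estimate
      - no conclusion:      k * theta^2            'embarrassment' growing with how
                                                   far theta really is from 0
    """
    n = len(y)
    # Conjugate Normal posterior for theta given y
    post_var = 1.0 / (n / sigma**2 + 1.0 / prior_var)
    post_mean = post_var * np.sum(y) / sigma**2

    # Expected posterior loss of each action:
    loss_report = post_var                       # best d is the posterior mean
    loss_silent = k * (post_mean**2 + post_var)  # k * E[theta^2 | y]

    if loss_report < loss_silent:
        return "report", post_mean
    return "no conclusion", None

rng = np.random.default_rng(1)
print(trade_off_test(rng.normal(0.4, 1.0, size=50)))
```

Under these assumed losses the rule reduces to reporting exactly when |post_mean| / sqrt(post_var) exceeds sqrt((1 - k) / k), i.e. a z-test whose cutoff is set by the loss trade-off rather than by a Type I error rate, which mirrors the calibration idea described in the abstract.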