H.B. Keller Colloquium
Given a prediction problem with real-world tradeoffs, which cost function should the machine learning model be trained to optimize? Unfortunately, typical default metrics in machine learning, such as accuracy for binary classifiers, may fail to capture the tradeoffs relevant to the problem at hand. This talk proposes metric elicitation as a formal strategy for addressing the metric selection problem, specifically by automatically discovering the implicit preferences of an expert or an expert panel via interactive feedback. I will primarily focus on algorithms for eliciting classification metrics, showing that simple algorithms elicit such metrics efficiently under broad assumptions. Finally, I will briefly outline early work on metric selection for measuring group fairness in classification problems with sensitive groups.
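To make the elicitation idea concrete, here is a minimal illustrative sketch (not the specific algorithms from the talk): suppose the expert's implicit metric is an unknown linear tradeoff phi(TPR, TNR) = a*TPR + (1-a)*TNR, and the only feedback available is pairwise comparisons between classifiers. A binary search over operating points on a stylized ROC frontier recovers the hidden weight; the frontier TPR = sin(t), TNR = cos(t) and all names here are assumptions made for the demonstration.

```python
import math

# Assumed for the demo: the expert's hidden metric is linear in (TPR, TNR)
# with unknown weight HIDDEN_A; the algorithm never reads this constant
# directly, only the oracle's pairwise answers.
HIDDEN_A = 0.7

def frontier(t):
    """Operating point (TPR, TNR) at parameter t on a stylized ROC curve."""
    return math.sin(t), math.cos(t)

def oracle_prefers_second(t1, t2):
    """Pairwise feedback: the expert says which classifier is better,
    judging (implicitly) by the hidden metric."""
    def score(t):
        tpr, tnr = frontier(t)
        return HIDDEN_A * tpr + (1 - HIDDEN_A) * tnr
    return score(t2) > score(t1)

def elicit_weight(queries=40, eps=1e-6):
    """Binary-search for the frontier point the hidden metric prefers.
    On this frontier the optimum t* satisfies tan(t*) = a / (1 - a),
    which inverts to the weight a."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(queries):
        mid = (lo + hi) / 2
        # Ask the expert to compare two nearby operating points.
        if oracle_prefers_second(mid - eps, mid + eps):
            lo = mid  # the preferred point lies to the right of mid
        else:
            hi = mid
    t_star = (lo + hi) / 2
    return math.tan(t_star) / (1 + math.tan(t_star))

a_hat = elicit_weight()
print(round(a_hat, 3))  # recovers a value close to HIDDEN_A
```

Each query asks only "which of these two classifiers do you prefer?", so the expert never has to write down a cost function; roughly 40 comparisons pin down the tradeoff weight to high precision in this one-dimensional case.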