IST Lunch Bunch
Ranking systems like search engines, recommender systems, and e-commerce sites are intelligent agents that intervene in the world through the rankings they present. This creates the potential for misleading and unfair bias among the ranked items in at least two ways. First, the choice of ranking influences which items are likely to receive implicit feedback (e.g. clicks), thus biasing the data that is collected. Second, the rankings could be biased between groups of items (e.g. by political orientation) in how they allocate exposure (i.e. where an item is ranked) relative to merit (i.e. relevance).
To overcome the first type of bias, this talk explores how techniques from causal inference and missing-data analysis can account for selection biases in Learning-to-Rank (LTR). I will derive debiased training objectives based on inverse-propensity-score (IPS) weighting estimators, as well as their implementation in practical LTR methods. Furthermore, I will propose new propensity estimation approaches that do not require disruptive randomized interventions. To address the second type of bias, I will discuss how the long-standing foundation of LTR, namely the Probability Ranking Principle, can lead to ranking systems that are unfair, and will propose an alternative ranking principle.
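To give a flavor of the IPS idea mentioned above: clicks are only observed at positions the user actually examined, so a naive click-based loss under-counts items shown low in the ranking. Reweighting each click by the inverse of its estimated examination propensity yields an unbiased estimate. The sketch below is illustrative only and not the speaker's implementation; the function name, the toy propensity values, and the choice of the clicked item's rank as the per-item loss are all assumptions made for this example.

```python
import numpy as np

def ips_weighted_rank_loss(ranks, clicks, propensities):
    """Inverse-propensity-scored estimate of a rank-based loss (illustrative).

    ranks        : position at which each item was shown (1-indexed)
    clicks       : 1 if the item was clicked, else 0
    propensities : estimated probability that each position was examined
    """
    # Each clicked item contributes its rank, reweighted by 1/propensity,
    # so clicks at rarely examined (low) positions count proportionally more.
    weights = clicks / propensities
    return np.sum(weights * ranks) / len(ranks)

# Toy example: three impressions, examination propensity decays with position.
ranks = np.array([1, 2, 3])
clicks = np.array([1, 0, 1])
propensities = np.array([1.0, 0.5, 0.25])
loss = ips_weighted_rank_loss(ranks, clicks, propensities)  # (1*1 + 4*3) / 3
```

The key property is that, in expectation over which positions the user examines, the reweighted sum matches the loss that would be computed if every item's relevance were fully observed.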