Causally Motivated Shortcut Removal Using Auxiliary Labels
A paper on shortcut removal, published at AISTATS 2022.
Abstract: We present a flexible, causally motivated approach to training robust predictors by discouraging the use of specific shortcuts. We focus on a common setting where a robust predictor could achieve optimal iid generalization in principle, but is overshadowed by a shortcut predictor in practice.
Problem setup: We consider a supervised learning setup where the task is to construct a predictor $f(X)$, parameterized by weights $w$, that predicts a label $Y$ (e.g., the foreground object) from an input $X$ (e.g., an image). In addition, we have an auxiliary label $V$ (e.g., a background label), available only at training time, that marks a factor of variation along which we hope the model will exhibit some invariance (e.g., background type). We take $V$ to be binary.
We assume that there is a sufficient statistic $X^∗$ such that $Y$ affects $X$ only through $X^∗$, and that $X^∗$ can be fully recovered from $X$ via a function $X^∗ := e(X)$.
Method:
- Reweight the training data to recover $P^◦$, a distribution under which $Y$ and $V$ are independent
- Under $P^◦$, use an MMD penalty between the representation distributions given ${V = 1}$ and ${V = 0}$ as regularization
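The two steps above can be sketched numerically. This is a minimal illustration, not the paper's implementation: it assumes per-example weights of the form $u(y,v) = P(y)P(v)/P(y,v)$ (which makes the reweighted empirical distribution factorize over $Y$ and $V$), an RBF kernel, and a plug-in weighted estimator of the squared MMD between the representations of the two $V$ groups.

```python
import numpy as np

def balancing_weights(y, v):
    """Per-example weights u(y, v) = P(y) * P(v) / P(y, v).

    Reweighting the training data by u makes Y and V empirically
    independent, i.e. it emulates sampling from P° (an assumption of
    this sketch about the form of the weights).
    """
    y, v = np.asarray(y), np.asarray(v)
    w = np.zeros(len(y))
    for yy in np.unique(y):
        for vv in np.unique(v):
            mask = (y == yy) & (v == vv)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (y == yy).mean() * (v == vv).mean() / p_joint
    return w

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between rows of a and rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_mmd2(phi, v, w, sigma=1.0):
    """Plug-in estimate of the squared MMD between the reweighted
    representation distributions phi | V=1 and phi | V=0.

    With normalized weights this is a squared RKHS norm, so it is
    always nonnegative.
    """
    m1, m0 = v == 1, v == 0
    a, wa = phi[m1], w[m1] / w[m1].sum()
    b, wb = phi[m0], w[m0] / w[m0].sum()
    return (wa @ rbf_kernel(a, a, sigma) @ wa
            - 2.0 * wa @ rbf_kernel(a, b, sigma) @ wb
            + wb @ rbf_kernel(b, b, sigma) @ wb)
```

In training, this penalty would be added to a weighted task loss, e.g. `loss = (w * ce_loss).mean() + alpha * weighted_mmd2(phi, v, w)`, where `phi` is the representation layer and `alpha` is a penalty strength (both names are illustrative).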
Theory:
- Proposition 1. Under $P^◦$, the Bayes-optimal predictor is (i) a function of $X^∗$ only, and (ii) an optimal risk-invariant predictor with respect to $P$.
- The MMD penalty bounds both the structural risk gap and the learning gap.
- Enforcing the invariance penalty without reweighting yields a model that is inconsistent with the causal DAG, and is hence biased.