NIPS 2017
Mon Dec 4th through Sat the 9th, 2017 at Long Beach Convention Center
Paper ID: 2135 Optimized Pre-Processing for Discrimination Prevention

### Reviewer 1

This paper introduces a new framework that transforms a dataset so that the transformed dataset satisfies both group fairness and individual fairness. The framework is formulated as a convex optimization problem under certain conditions. The authors prove a generalization error bound in terms of utility and group fairness, and they confirm the performance of the proposed framework experimentally.

First of all, the contributions of this paper are not clearly explained. The contributions should be described more clearly and precisely in the introduction.

Details:

From a theoretical viewpoint, the convexity in Proposition 1 is actually obvious. The authors should instead discuss how the convexity of the proposed framework affects the tradeoff between utility, group fairness, and individual fairness. Proposition 2 is confusing: its proof hides the most important term, $m$, inside the big-O notation. The bound should be $O(\sqrt{m \log(1 + n/m)/n + \log(1/\beta)/n})$. The first term in the square root is a complexity term, since $m$ denotes the parameter size of the learned transformation. For example, let the non-discrimination variable form a $d$-dimensional binary vector. Then $m \ge 2^d$, and achieving a constant-order bound requires a number of samples exponential in $d$. In this sense of sample complexity, the present framework is not learnable.

The empirical comparison between the present method and LFR is not fair. LFR has tunable parameters that adjust the trade-off between utility, group fairness, and individual fairness, and these parameters should be tuned appropriately. Moreover, the authors should conduct experiments over a finer grid of $\epsilon$ values. The experiments presented in the manuscript cannot support the claim that the proposed framework can control the tradeoff between utility and fairness.
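The sample-complexity concern above can be illustrated numerically. The sketch below (my own illustration, not from the paper: constant factors are dropped and the choice $m = 2^d$ is the illustrative lower bound) evaluates the rate of the corrected bound as the dimension $d$ of the binary non-discrimination variable grows, at a fixed sample size:

```python
import math

def bound_rate(m, n, beta=0.05):
    # Rate of the corrected bound O(sqrt(m*log(1 + n/m)/n + log(1/beta)/n)).
    # Constants are omitted, so only the growth behavior is meaningful.
    return math.sqrt(m * math.log(1 + n / m) / n + math.log(1 / beta) / n)

# With a d-dimensional binary non-discrimination variable, m >= 2^d,
# so for fixed n the complexity term grows exponentially in d.
for d in (5, 10, 15, 20):
    print(d, bound_rate(2 ** d, n=10 ** 6))
```

Even with a million samples, the rate deteriorates rapidly as $d$ increases, which is the basis of the non-learnability remark.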
Furthermore, the authors should compare the present method with LFR in terms of individual fairness; there is no evidence that the present method ensures individual fairness. Thus, in its current form, the significance of the contribution seems weak in terms of both theory and experiments.

----

After the rebuttal discussion, I have become more positive on this paper and have raised my score.