Full Abstract:
In federated learning, fair prediction across various protected groups (e.g., gender, race) is an important constraint for many applications. Unfortunately, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. Our work provides a new definition for group fairness in federated learning based on the notion of Bounded Group Loss (BGL), which can be easily applied to common federated learning objectives. Based on our definition, we propose a scalable algorithm that optimizes the empirical risk and global fairness constraints, which we evaluate across common fairness and federated learning benchmarks.
Our resulting method and analysis are the first we are aware of to provide formal
theoretical guarantees for training a fair federated learning model.
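Informally, Bounded Group Loss requires that every protected group's expected loss stay below a fixed threshold ζ, rather than equalizing losses across groups. A minimal sketch of an empirical BGL check is shown below (the function name, toy losses, and group labels are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def satisfies_bgl(losses, groups, zeta):
    """Return True if every protected group's average loss is at most zeta.

    losses: per-sample loss values
    groups: per-sample protected-group labels
    zeta:   the BGL threshold
    """
    for g in np.unique(groups):
        if losses[groups == g].mean() > zeta:
            return False
    return True

# Toy example: two groups with mean losses 0.3 and 0.8.
losses = np.array([0.2, 0.4, 0.3, 0.9, 0.8, 0.7])
groups = np.array([0, 0, 0, 1, 1, 1])
print(satisfies_bgl(losses, groups, zeta=0.5))  # group 1 violates the bound
```

In the paper's constrained formulation, a check of this form becomes a per-group constraint added to the empirical risk minimization objective.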
PROVABLY FAIR FEDERATED LEARNING
attributed to: Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
Vulnerabilities & Strengths