Abstract

Text representations learned by machine learning models often encode undesirable demographic information about users. Predictive models built on these representations can exploit that information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), which removes protected information by making the representations of instances belonging to the same protected-attribute class uncorrelated, using the rate-distortion function. FaRM can debias representations with or without a target task at hand, and it can be adapted to remove information about multiple protected attributes simultaneously. Empirical evaluations show that FaRM achieves state-of-the-art performance on several datasets, and that the learned representations leak significantly less protected-attribute information under attack by a non-linear probing network.
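To make the abstract's mechanism concrete, the sketch below shows a coding-rate objective of the kind FaRM's name refers to. It uses the standard rate-distortion (coding-rate) expression R(Z) = ½ log det(I + d/(nε²) ZᵀZ); maximizing this rate within each protected-attribute group pushes same-group representations toward being uncorrelated. The function names, the ε value, and the per-group weighting are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate of representations Z (shape n x d):
    R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z).
    Higher rate means the rows of Z are more spread out / less correlated."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z.T @ Z)
    return 0.5 * logdet

def group_rate_loss(Z, groups, eps=0.5):
    """Illustrative debiasing loss (an assumption, not FaRM's exact objective):
    the negative, size-weighted sum of per-group coding rates. Minimizing it
    maximizes the rate of each protected-attribute group, so representations
    within a group become uncorrelated and harder for a probe to separate."""
    loss = 0.0
    for g in np.unique(groups):
        Zg = Z[groups == g]
        loss -= (len(Zg) / len(Z)) * coding_rate(Zg, eps)
    return loss
```

In practice such a loss would be applied to the output of a representation encoder and minimized by gradient descent; `slogdet` is used instead of `log(det(...))` for numerical stability.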

Details

Title
Learning Fair Representations via Rate-Distortion Maximization
Author
Somnath Basu Roy Chowdhury; Snigdha Chaturvedi
Pages
1159-1174
Publication year
2022
Publication date
2022
Publisher
The MIT Press
ISSN
2307-387X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2893948238
Copyright
© 2022. This work is published under https://creativecommons.org/licenses/by/4.0/legalcode (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.