Abstract

Online hate is a growing concern on many social media platforms, making them unwelcoming and unsafe. To combat this, technology companies are increasingly developing techniques to automatically identify and sanction hateful users. However, accurate detection of such users remains a challenge due to the contextual nature of speech, whose meaning depends on the social setting in which it is used. This contextual nature of speech has also led to minoritized users, especially African-Americans, being unfairly detected as 'hateful' by the very algorithms designed to protect them. To address this problem of inaccurate and unfair hate detection, research has focused on developing machine learning (ML) systems that better understand textual context. Incorporating the social networks of hateful users has received far less attention, despite social science research suggesting that they provide rich contextual information. We present a system for more accurately and fairly detecting hateful users by incorporating social network information through geometric deep learning, an ML technique that dynamically learns information-rich network representations. We make two main contributions. First, we demonstrate that adding network information with geometric deep learning produces a more accurate classifier than techniques that either exclude network information entirely or incorporate it through manual feature engineering; our best-performing model achieves an AUC of 90.8% on a previously released hateful user dataset. Second, we show that such information also leads to fairer outcomes: using the 'predictive equality' fairness criterion, we compare the false positive rates of our geometric learning algorithm with those of other ML techniques and find that our best-performing classifier has no false positives among a subset of African-American users, whereas a neural network without network information has the largest number of false positives (26) and a neural network incorporating manual network features has 13. The system we present highlights the importance of effectively incorporating social network features in automated hateful user detection, raising new opportunities to improve how online hate is tackled.
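For readers unfamiliar with geometric deep learning, the following is a minimal sketch (not the authors' released code) of how a graph neural network can classify users as hateful or not from a social graph. It assumes the PyTorch Geometric library; the GraphSAGE layer choice, feature dimensions, and hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: node classification on a social graph with a GNN.
# Assumes PyTorch Geometric is installed; all dimensions/choices are
# illustrative, not the authors' actual architecture.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class UserClassifier(torch.nn.Module):
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        # Each layer aggregates feature information from a user's neighbours,
        # letting the model learn representations that reflect network context.
        self.conv1 = SAGEConv(num_features, hidden)
        self.conv2 = SAGEConv(hidden, 2)  # 2 classes: hateful / not hateful

    def forward(self, x, edge_index):
        # x: [num_users, num_features] node features (e.g. text/profile features)
        # edge_index: [2, num_edges] social connections between users
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)  # per-user class logits
```

Because the convolution layers pool information from neighbouring users, the learned representation encodes network context automatically, in contrast to manually engineered network features.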
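The 'predictive equality' criterion used in the abstract compares false positive rates across demographic groups: a classifier satisfies it when, for example, non-hateful African-American users are wrongly flagged at the same rate as other non-hateful users. A short illustrative computation (variable and function names are hypothetical) might look like:

```python
# Illustrative check of predictive equality: compare false positive
# rates (FPR) between a subgroup and all other users.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = FP / (FP + TN), computed over truly non-hateful users (label 0)."""
    negatives = (y_true == 0)
    return float((y_pred[negatives] == 1).mean())

def predictive_equality_gap(y_true, y_pred, group) -> float:
    """Absolute FPR difference between group members (group == 1) and others."""
    fpr_group = false_positive_rate(y_true[group == 1], y_pred[group == 1])
    fpr_rest = false_positive_rate(y_true[group == 0], y_pred[group == 0])
    return abs(fpr_group - fpr_rest)
```

A gap of zero means the classifier imposes the burden of false accusations equally across groups; the abstract's comparison of false positive counts among African-American users (0 vs. 13 vs. 26) is an instance of this criterion.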

Details

Title
Tackling racial bias in automated online hate detection: Towards fair and accurate detection of hateful users with geometric deep learning
Author
Ahmed, Zo 1; Vidgen, Bertie 2; Hale, Scott A. 3

1 University of Oxford, Oxford Internet Institute, Oxford, UK (GRID:grid.4991.5) (ISNI:0000 0004 1936 8948)
2 University of Oxford, Oxford Internet Institute, Oxford, UK (GRID:grid.4991.5) (ISNI:0000 0004 1936 8948); Alan Turing Institute, London, UK (GRID:grid.499548.d) (ISNI:0000 0004 5903 3632)
3 University of Oxford, Oxford Internet Institute, Oxford, UK (GRID:grid.4991.5) (ISNI:0000 0004 1936 8948); Alan Turing Institute, London, UK (GRID:grid.499548.d) (ISNI:0000 0004 5903 3632); Meedan, San Francisco, USA
Pages
8
Publication year
2022
Publication date
2022
Publisher
Springer Nature B.V.
e-ISSN
2193-1127
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2628405949
Copyright
© The Author(s) 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.