Abstract

Technology facilitates harassment and exploitation at levels its creators cannot imagine. Online platforms and products, usually developed with the intention of building communities, enacting social good, and promoting equality, can have designs and policies that instead perpetuate unequal accessibility, extort money from users, harass people, reinforce marginalization and stigma, and even place people in physical danger. In this thesis, we discuss how the disconnect between the designers of online platforms and policies and the populations affected by their products can harm people who are underrepresented among the decision-makers, and we provide recommendations for minimizing these harms.

We begin by exemplifying the need for technologists and policy writers to understand the populations affected by their products through an interview study of sex industry workers, organizations that perform proactive outreach to sex industry workers, and technologists who designed a platform to perform mass unsolicited outreach to sex industry workers. Sex industry workers generally fall along a continuum of autonomy and control over working conditions, with highly autonomous sex workers on one end and survivors of sex trafficking on the other. Aiming to facilitate anti-trafficking assistance, the technologists worked closely with the users of the platform, namely the organization representatives performing outreach. While some organizations are able to provide needed services beyond outreach, the needs of the population targeted and affected by the unsolicited mass messaging technology, namely sex industry workers, were unmet at best. The outreach conveyed harmful assumptions, saviorism, and religious ideology without addressing sex industry workers’ motivations or needs regarding privacy and personal safety. While we do not condone the use of mass messaging in this way, we provide recommendations and best practices to mitigate harm for those who nevertheless choose to design anti-trafficking technology.

We then broaden our scope to study policies affecting sex industry workers more generally through an analysis of the terms of service agreements, community guidelines, privacy policies, and other official policy documents posted by over 100 online platforms. We discuss the laws, perceptions, and motivations behind platforms’ policies regarding the sex industry, and how these policies affect sex industry workers. We find that platforms generally view sex industry workers as criminals, victims, spam, or entrepreneurs; we show how using the first three paradigms to characterize the entire industry can lead to stigmatization, overly general and restrictive rules, and decreased access to online life. Our analysis is in line with sex industry worker-led movements to stop arresting sex industry workers, destigmatize sex work, and let sex industry workers remain and flourish online. We show how decision-makers without subject matter knowledge can unintentionally cause harm, illustrate the need for a cultural shift in the technology community, and provide concrete research directions to minimize the harms that result when well-meaning but uninformed technologists are left to design online life for a significant portion of the population.

We then move on to a study of the online libel ecosystem: libel sites take content from anonymous contributors who wish to “warn” about, harass, and damage the reputations of other individuals, and post these complaints publicly, often with racist, sexist, or anti-LGBTQ+ slurs, images of the subject, and the subject’s personal contact information. These websites and the individual posts on them are surfaced by online search engines when, for example, a potential employer performs a simple “background check” consisting of a Google search, even if the accusation is irrelevant to the task at hand. Libel sites offer no functionality for post removal; instead, they host advertisements for “reputation management” services, which have the unique ability to remove posts from libel sites and search engines for paying clients, charging post subjects thousands of dollars. We collect and analyze 9 libel sites, 7 websites for reputation management services, and 12 related websites. We describe our findings about the online libel ecosystem and how it functions, and we investigate the libel post removal policies of reputation management services and internet search engines. We find that reputation management companies are likely an integral part of the extortion in this ecosystem, and that the only search engine with relevant intervention methods, Google, has limited impact. We make recommendations for internet search engines, payment platforms, and libel sites and reputation management companies to combat this harmful ecosystem, and we also provide suggestions for online libel’s intended audience and writers, unwilling post subjects, and researchers and journalists.

While we end each study with suggestions to mitigate the specific harms it examines, we argue overall for more careful analysis of the context, goals, and impacts of any technology platform’s policies, and for requiring decision-makers to have subject matter knowledge, in order to prevent well-intentioned but ultimately harmful online platform policies.

Details

Title
Analyzing Harms of Online Platform and Policy Design
Author
Bhalerao, Rasika
Publication year
2022
Publisher
ProQuest Dissertations Publishing
ISBN
9798802703168
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
2670010306
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.