Abstract
Amid rapid advances in AI technologies, concerns about regulating AI systems to prevent harms to Americans, including bias, disinformation, threats to safety and national security, and other risks, have drawn considerable attention from the public and policymakers alike. Many fear the potential consequences of AI-driven decision-making in society without sufficient human oversight.
The main argument of this dissertation is that US AI policies between 2022 and 2024 did not sufficiently address concerns, such as bias, misinformation, and labor displacement, around AI systems that pose risks to people's civil rights and liberties. Instead, considerable focus was placed on addressing AI threats to national security and on maintaining US leadership in AI. Moreover, these policies lack the legal enforcement mechanisms needed to mitigate such risks effectively. To clarify which policy agendas received greater priority in the AI policy domain, my research also examines an interrelated question: how the state is constrained today in regulating AI technologies developed primarily by private companies. To this end, I critically analyze testimony by AI experts, policy documents released by the Biden administration, media content, articles and posts on Google's website, and SEC filings by the company between 2022 and 2024. I also examine the historical policy contexts of big technology platform regulation that have shaped current AI policymaking in the US. I conclude with a brief look at US AI policy in 2025.
