Abstract
Technology implementations should strive to be impartial, unbiased, and neutral when applied to risk management. Risk management is a critical function in every organization and is essential to maintaining the balance between business goals and their impact on society. The aim of this study was to explore issues, including bias, discrimination, privacy, risk thresholds, moral decision-making, and business benefits, that challenge beliefs and foundational values when implementing Artificial Intelligence (AI) and Machine Learning (ML) technologies in risk management. This qualitative exploratory study used a phenomenological approach with a constructivist worldview to understand the lived experiences of 15 participants, yielding insights into how to identify and prevent ethical and privacy concerns in the implementation of AI and ML within risk management. The study explored how AI and ML can embed biases, promote discrimination, and raise growing privacy concerns, and it helped in understanding and addressing the complex questions that bias, discrimination, and privacy pose for any technological implementation. Interviews with industry experts examined how these issues, together with personal beliefs and values, intersect with operational values and the implementation of AI and ML in risk management. The research sought to show how these critical concepts can help identify the implications of AI and ML applications in risk management. It produced a taxonomy that organizations can use to ensure ethical AI and ML implementations across all industries. Furthermore, it highlighted critical areas of the AI and ML process where awareness is lacking, presenting opportunities for organizations to begin conversations about how to address these fundamental issues.
This research offers a framework for prioritizing and addressing these fundamental issues by highlighting gaps in AI and ML practice around education, oversight and accountability, risk evaluation, bias, and cultural considerations, while ensuring that organizations adequately consider these crucial areas. The results reveal that it is critical for companies to ensure they truly understand the risks associated with AI and ML implementations. Organizations also need to consider alternatives where they exist, as this is essential to maintaining balanced and equitable technology implementations. Across the industry, there is a lack of confidence that organizations truly understand the risks associated with implementing these technologies.