
Abstract

Digital deliberation has been steadily growing in recent years, enabling citizens from different geographical locations, with diverse opinions and expertise, to participate in policy-making processes. Software platforms aiming to support digital deliberation usually suffer from information overload, due to the large volume of feedback that is often provided. While Machine Learning and Natural Language Processing techniques can alleviate this drawback, their complex structure discourages users from trusting their results. This paper proposes two Explainable Artificial Intelligence models to enhance transparency and trust in the modus operandi of these techniques, specifically in the clustering and summarization of citizens' feedback uploaded to a digital deliberation platform.
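To make the clustering-with-explanations idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it groups short citizen comments with TF-IDF and k-means (scikit-learn) and surfaces each cluster's highest-weighted terms as a simple, human-readable rationale for the grouping. The sample comments and parameters such as n_clusters=3 are assumptions for illustration only.

    # Illustrative sketch (assumed pipeline, not the paper's method):
    # cluster citizen feedback and expose per-cluster top terms as
    # a lightweight, transparent explanation of the grouping.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    comments = [  # hypothetical sample feedback
        "Expand the bus network to the suburbs",
        "More frequent buses would reduce car traffic",
        "Build protected bike lanes downtown",
        "Bike lanes should connect to the train station",
        "Lower parking fees in the city centre",
        "Parking is too expensive near the market",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(comments)

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Explain each cluster via its top centroid terms, so users can
    # see why comments were grouped together.
    terms = vectorizer.get_feature_names_out()
    for c, centroid in enumerate(km.cluster_centers_):
        top = centroid.argsort()[::-1][:3]
        print(f"Cluster {c}: {', '.join(terms[i] for i in top)}")

Exposing the dominant terms of each cluster is one common, inherently interpretable alternative to post hoc explainers; the paper's actual explainability models may differ.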

Details

Title
Explainable Artificial Intelligence Methods to Enhance Transparency and Trust in Digital Deliberation Settings
Author
Siachos, Ilias; Karacapilidis, Nikos
First page
241
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
1999-5903
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3084904940
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.