
Abstract

The advent of Large Language Models (LLMs) has revolutionised natural language processing, providing unprecedented capabilities in text generation and analysis. This paper examines the utility of Artificial-Intelligence-assisted (AI-assisted) content analysis (CA), supported by LLMs, as a methodological tool for research in Information Science (IS) and Cyber Security. It reviews current applications, methodological practices, and challenges, illustrating how LLMs can augment traditional approaches to qualitative data analysis. Key distinctions between CA and other qualitative methods are outlined, alongside the traditional steps involved in CA. To demonstrate relevance, examples from IS and Cyber Security are highlighted, along with a new worked example detailing each step. A hybrid workflow is proposed that integrates human oversight with AI capabilities, grounded in the principles of Responsible AI. Within this model, human researchers remain central to guiding research design, interpretation, and ethical decision-making, while LLMs support efficiency and scalability. Both deductive and inductive AI-assisted frameworks are introduced. Overall, AI-assisted CA is presented as a valuable approach for advancing rigorous, replicable, and ethical scholarship in IS and Cyber Security. This paper builds on prior work on LLM-assisted coding, proposing that this hybrid model is preferable to fully manual content analysis.


© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).