The advent of Large Language Models (LLMs) has revolutionised natural language processing, providing unprecedented capabilities in text generation and analysis. This paper examines the utility of Artificial-Intelligence-assisted (AI-assisted) content analysis (CA), supported by LLMs, as a methodological tool for research in Information Science (IS) and Cyber Security. It reviews current applications, methodological practices, and challenges, illustrating how LLMs can augment traditional approaches to qualitative data analysis. Key distinctions between CA and other qualitative methods are outlined, alongside the traditional steps involved in CA. To demonstrate relevance, examples from Information Science and Cyber Security are highlighted, along with a new worked example detailing the steps involved. A hybrid workflow is proposed that integrates human oversight with AI capabilities, grounded in the principles of Responsible AI. Within this model, human researchers remain central to guiding research design, interpretation, and ethical decision-making, while LLMs support efficiency and scalability. Both deductive and inductive AI-assisted frameworks are introduced. Overall, AI-assisted CA is presented as a valuable approach for advancing rigorous, replicable, and ethical scholarship in Information Science and Cyber Security. Building on prior LLM-assisted coding work, this paper argues that the hybrid model is preferable to fully manual content analysis.
Keywords
Data analysis;
Qualitative analysis;
Validity;
Science;
Large language models;
Hypotheses;
Social networks;
Content analysis;
Cybersecurity;
Discourse analysis;
Researchers;
Audit trails;
Natural language processing;
Ethics;
Generative artificial intelligence;
Chatbots;
Validation studies;
Information science
