
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

This study investigates the effectiveness of prompt engineering strategies for Large Language Models (LLMs), comparing single-task and multitask prompts. Specifically, we analyze whether a single prompt that handles multiple tasks at once, such as named entity recognition (NER), sentiment analysis, and JSON output formatting, can match the performance of dedicated single-task prompts. To substantiate our findings, we employ statistical analyses, including paired Wilcoxon signed-rank tests, McNemar tests, and Friedman tests, to validate claims of performance similarity or superiority. Experiments were conducted with five open-weight LLMs: Llama 3.1 8B, Qwen2 7B, Mistral 7B, Phi-3 Medium, and Gemma 2 9B. The results indicate that there is no definitive rule favoring single-task prompts over multitask prompts; rather, their relative performance depends strongly on each model's training data and architecture. This study highlights the nuanced interplay between prompt strategy and LLM characteristics, offering insights into optimizing their use for specific NLP tasks. Limitations and future directions, such as expanding the range of task types, are also discussed.
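As a minimal sketch of the statistical comparison the abstract describes: the snippet below shows how the named tests could be applied to paired per-document scores using SciPy. This is not the authors' code; all scores and discordant-pair counts are invented for illustration, and McNemar's chi-square is computed directly from its standard formula rather than via a library call.

```python
# Hypothetical per-document F1 scores for the same test items under
# different prompt strategies (invented data for illustration only).
from scipy.stats import wilcoxon, friedmanchisquare, chi2

single_task = [0.81, 0.78, 0.85, 0.90, 0.74, 0.88, 0.79, 0.83, 0.86, 0.80]
multitask   = [0.79, 0.80, 0.84, 0.87, 0.75, 0.85, 0.78, 0.84, 0.83, 0.81]

# Paired Wilcoxon signed-rank test: are the paired score differences
# symmetrically distributed around zero?
w_stat, w_p = wilcoxon(single_task, multitask)

# Friedman test: compares three or more matched conditions on the same
# items ("combined" is a third hypothetical prompt condition).
combined = [0.80, 0.77, 0.86, 0.88, 0.73, 0.86, 0.80, 0.82, 0.85, 0.79]
f_stat, f_p = friedmanchisquare(single_task, multitask, combined)

def mcnemar_p(b: int, c: int) -> float:
    """McNemar chi-square p-value with continuity correction,
    from the discordant counts b (only prompt A correct) and
    c (only prompt B correct)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return float(chi2.sf(stat, df=1))

# Hypothetical discordant-pair counts from per-item correctness labels.
m_p = mcnemar_p(b=12, c=7)
print(w_p, f_p, m_p)
```

In each case a small p-value would indicate that the two prompt strategies differ on that model and task, while a large one is consistent with the abstract's "performance similarity" claim.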

Details

Title
Comparative Analysis of Prompt Strategies for Large Language Models: Single-Task vs. Multitask Prompts
Author
Gozzi, Manuel; Di Maio, Federico
First page
4712
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3144067713
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.