ABSTRACT
This paper examines university students' perceptions of and experiences with using ChatGPT, a generative artificial intelligence (GenAI) tool, to enhance their experiential learning. In this exploratory study, we designed a ChatGPT learning activity flow corresponding to the four experiential learning steps. Analysis of survey data collected from 70 students in a business college at a public university in the United States revealed that, under the guidance of the instructors, students learned to interact with ChatGPT through two prompts. Quantitative analysis suggested that the knowledge type and the associated cognitive process of student-created prompts depended on those of the prompt provided by their instructor, controlled by students' prior ChatGPT experience. In addition, qualitative data analysis revealed that students considered the GenAI tool helpful with their learning tasks and were satisfied with the content generated by ChatGPT. However, some students raised concerns about ChatGPT output involving metacognitive knowledge. Three themes emerged regardless of students' prior ChatGPT experience, but some subtle differences were observed. Our findings extend the literature on experiential learning and Bloom's taxonomy in the context of adopting GenAI in higher education. The study also contributes to Information Systems education by revealing challenges, offering suggestions, and proposing principles for GenAI-assisted learning. The paper concludes with suggestions for future research and policy making.
Keywords: Artificial intelligence, ChatGPT, Generative AI, Experiential learning & education, Bloom's taxonomy, Higher education
1. INTRODUCTION
Information technologies have been known to both disrupt and transform education. Yet few technologies have caused as broad and rapid a disruption as generative artificial intelligence (GenAI) has in the past two years. One such tool is ChatGPT, a virtual chatbot based on GenAI technology that can mimic the grammar and structure of writing and produce digital content on various topics (Hao, 2023). Launched in November 2022, ChatGPT reached 100 million monthly active users just two months after its debut (Milmo, 2023). According to OpenAI (2022), a GenAI system such as ChatGPT is driven by a neural network model that relies on cutting-edge technologies such as machine learning, natural language processing (NLP), and deep learning, and it has been trained on massive amounts of data to generate responses. Due to its ability to understand natural human language and generate contextually coherent responses, ChatGPT has attracted substantial attention from both industry and academia.
A professor at the Wharton School of the University of Pennsylvania found that ChatGPT would pass a final exam in a typical MBA class (Terwiesch, 2023). This sparked a national conversation about the ethical implications of using GenAI in education. While some educators have sounded the alarm over the potential abuse of ChatGPT for cheating and plagiarism, which could eventually affect students' creative thinking and reasoning (Arif et al., 2023), others are optimistic about the potential benefits of using ChatGPT to support and enhance student learning, such as promoting a sense of community and increasing motivation and engagement among self-taught learners (Firat, 2023).
Despite the general arguments among educators, there is a consensus that prompt engineering, i.e., an individual's ability to provide effective user input (referred to as a prompt) to a GenAI tool, is essential for generating useful output from GenAI tools such as ChatGPT (Baidoo-Anu & Owusu Ansah, 2023; Bowen & Watson, 2024). In introductory programming courses, this type of prompt engineering has demonstrated its potential to become a useful learning activity, one that promotes students' computational thinking skills and is likely to change the nature of code-writing skill development (Denny et al., 2023). Moreover, analysis of individual interactions with computer technologies shows that previous experience with technology may be a differentiating factor (O'Brien et al., 2012). However, our knowledge about student use of GenAI technology remains limited. In this study, we aim to understand university students' perceptions of and experiences with using ChatGPT. In particular, the study seeks to answer three research questions (RQ): (1) Are student prompts to ChatGPT influenced by instructor guidance during the learning process? (2) How does ChatGPT-generated information meet or fail to meet student expectations? (3) How does students' prior experience with ChatGPT relate to their expectations in this learning process?
Experiential learning refers to "the process whereby knowledge is created through the transformation of experience" (Kolb, 1984, p. 38). It emphasizes the importance of experience in the learning process. In this study, we consider ChatGPT an exploratory tool in learning due to its interactive and adaptive nature. We designed a ChatGPT activity flow to facilitate experiential learning by engaging students with the interactive features of the GenAI tool and implemented the activity in six courses in the business college of a public university in the United States. Students practiced with two prompts using ChatGPT, one prepared by the instructor and another created by themselves. They were then invited to complete a survey to share their experiences with the GenAI tool. A total of 70 students participated in the survey. Informed by the revised Bloom's taxonomy (Anderson et al., 2001), which characterizes different knowledge types and cognitive processes in learning, and which complements the experiential learning model's knowledge acquisition cycle, we employed a qualitative method to code the knowledge types and cognitive processes of the prompts created by the instructors and students. A quantitative analysis of the data showed that the knowledge types and cognitive processes of the student-created ChatGPT prompts depended on those of the first prompt, which was provided by their instructor. Additional qualitative analysis revealed three themes in student experiences with ChatGPT.
The rest of the paper proceeds as follows: Section 2 discusses the theoretical background and framework; Section 3 describes the research methods; Section 4 reports research findings; Section 5 discusses the contributions and implications of the study; and Section 6 concludes the paper with suggestions for future research and policy making.
2. THEORETICAL FRAMEWORK
2.1 Generative AI and Education
Research has shown that ChatGPT can be an effective tool in helping students with argumentative essay writing, generating outlines and examples for PowerPoint slides, and creating descriptions of perceived images to be used as prompts for AI-powered image generation (Liu et al., 2024). The effectiveness of ChatGPT as a learning tool has been associated with its ability to adapt to personalized needs and provide spontaneous responses. The benefits of ChatGPT in education have been commonly accepted as promoting personalized and interactive learning as well as generating prompts for formative assessment (Baidoo-Anu & Owusu Ansah, 2023). Educators have also identified situations in which the GenAI tool will be most effective. For example, based on their experience with design science research courses, Memmert et al. (2023) found that AI-generated, content-level scaffolding might be a way to support students, particularly when they have nobody around to challenge their ideas. Moreover, Chang et al. (2022) highlighted that using an AI-powered mobile chatbot significantly improved students' learning achievement and self-efficacy, thanks to the NLP capabilities and user-friendly features of the chatbot.
Similarly, the use of ChatGPT in information systems (IS) and computing education has attracted increasing attention. Because ChatGPT uses NLP and machine learning technologies to understand the user's needs and respond accordingly, it provides responses without using syntax and concepts specific to programming languages, thus offering a different approach to IS and computing education (Denny et al., 2024). For example, using ChatGPT in their weekly programming practice, students in a computer programming course were rated higher in computational thinking skills, programming self-efficacy, and motivation for the lesson than the cohort who did not use ChatGPT (Yilmaz & Yilmaz, 2023). As a GenAI-powered tool that can adapt to individual needs, ChatGPT has shown potential to transform the creation and customization of educational resources such as programming exercises, enabling the efficient generation of personalized learning materials.
While acknowledging the benefits and opportunities brought by the new AI technology, educators have also raised concerns about misuse of the AI tool, biases inherent in AI-generated responses, and academic integrity (e.g., Arif et al., 2023; Baidoo-Anu & Owusu Ansah, 2023; Terwiesch, 2023). These concerns require curriculum reforms that consider the interactions among the learner, teacher, and GenAI (Nguyen et al., 2024). In IS and computing education, challenges arise from adapting to large language models (LLMs) capable of generating accurate source code from natural-language problem descriptions and from concerns about learner over-reliance, harmful biases, and bad habits arising from using ChatGPT in programming education (Denny et al., 2024). Therefore, educators need to not only take advantage of the potential benefits of GenAI but also reflect on how to respond to the threats and opportunities presented by these new technologies. For example, focusing on student users as learners of the AI technology, Black (2023) outlined three key principles: developing user knowledge and skills in understanding technical capability, identifying biases in computer data and models, and becoming continual learners. Van Slyke et al. (2023) focused on educators and recommended that faculty invest time in learning the general capabilities of AI and consider how to modify course activities and assessments to encourage students' ethical and effective use of AI tools. In addition, Denny et al. (2024) called for computing educators to design new pedagogical approaches, such as introducing LLMs early in a programming course and asking students to focus on writing task specifications. As advocated by Chen (2022), IS programs should be leaders in AI curriculum development, addressing the demands of industry and preparing business school students for future technology-driven business innovation.
All the strategies suggested above are informative. We argue that the successful implementation of strategies for developing student AI literacy, investing in faculty training, and designing new pedagogies relies on our understanding of current student ChatGPT use and students' perceptions of the impact of this AI tool on their learning. In this regard, our study focuses on student experiential learning through the design and implementation of learning activities using ChatGPT, as an effort to respond to the call for educators to integrate GenAI into education, including IS and computing education, to adapt to the rapidly changing technology (Nithithanatchinnapat et al., 2024). Because of our research focus on student experiences, we draw upon Kolb's (1984) experiential learning model for insights.
2.2 Kolb's Experiential Learning Model
Experiential learning refers to the process of knowledge acquisition through experience. Developed by Kolb (1984), the Experiential Learning Model entails four steps: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC), and active experimentation (AE). In the CE stage, the learner has hands-on experience in achieving a learning outcome. In the RO stage, the learner reflects on and reviews the experience from a range of different perspectives. In the AC stage, the learner analyzes and connects the experience to previous learning, developing new ideas about the subject matter. In the AE stage, the learner acts on their new ideas by experimenting in an experiential setting. According to Kolb (1984), all four learning stages must be completed for learning to be most effective.
As new ideas are put into action, a new cycle of experiential learning begins. The four-step learning process, Experience - Reflect - Think - Act, is often applied multiple times in every interaction and experience. As such, knowledge is gained through both personal and environmental experiences; learning is achieved through a continuous cycle of inquiry, reflection, analysis, and synthesis (Kolb, 1984). In education, examples of experiential learning activities include applied research projects, case studies, field experience, simulations, and labs (Kolb & Kolb, 2017).
The experiential learning model has been adopted in the learning and use of information technologies. For example, Lai et al. (2007) found that using mobile technologies while going through the four stages of an experiential learning process helped elementary school students improve their knowledge. Similarly, Deng and Chi (2015) adopted Kolb's (1984) Experiential Learning Model as a framework to capture different aspects of individual learning experiences with a new enterprise system and presented experiential learning at two different levels (individual vs. community) in a knowledge network.
In summary, experiential learning centers on conversion of explicit knowledge to tacit knowledge, so the type of knowledge involved in the learning process is important. The revised Bloom's taxonomy by Anderson et al. (2001) characterizes four types of knowledge distributed across different cognitive processes in learning, thus offering further insights into understanding experiential learning.
2.3 Revised Bloom's Taxonomy
Named after American educational psychologist Benjamin S. Bloom, Bloom's taxonomy is a multi-tiered model that classifies thinking behaviors according to different levels of cognitive complexity in the learning process (Forehand, 2010). Since its inception in 1956, Bloom's taxonomy has been reinterpreted in a variety of ways. The revised Bloom's taxonomy, by Anderson et al. (2001), expanded the concept into two dimensions: the knowledge dimension and the cognitive process dimension.
The knowledge dimension ranges from concrete (factual) to abstract (metacognitive). The knowledge types include (i) Factual: knowledge of terminology, specific details, and basic elements within a domain, such as the design specifications of a product for sale; (ii) Conceptual: knowledge of classifications and categories, of principles and generalizations, and of theories, models, and structures; examples include knowledge of a product's advantages and disadvantages over a competitor's product; (iii) Procedural: subject-specific skills, algorithms, techniques, methods, and criteria for determining when to use appropriate procedures; it is about knowing "how to" through practice, such as knowing how to identify the best location for opening a new retail store; and (iv) Metacognitive: strategic knowledge and knowledge about cognitive tasks, including appropriate contextual and conditional knowledge, such as understanding the values and unique contexts brought by clients from different cultures. It should be noted that these knowledge types are not necessarily linear, as procedural knowledge may not be more abstract than conceptual knowledge.
On the other hand, the cognitive process follows a more clearly defined hierarchical order. The levels of learning from low to high can be defined as (i) Remembering: Retrieving, recognizing, and recalling relevant knowledge from long-term memory; (ii) Understanding: Constructing meaning from oral, written, and graphic messages through interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining; (iii) Applying: Carrying out or using a procedure through executing or implementing; (iv) Analyzing: Breaking material into constituent parts and determining how the parts relate to one another and to an overall structure or purpose through differentiating, organizing, and attributing; (v) Evaluating: Making judgments based on criteria and standards through checking and critiquing; and (vi) Creating: Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing.
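To make the two-dimensional scheme concrete, the sketch below represents it as a small Python data structure of the kind one could use to code prompts along both dimensions. The example prompt is taken from a student prompt reported later in this paper, but its codes here are illustrative, not the study's actual coding.

```python
# A minimal sketch of the revised Bloom's taxonomy as a coding scheme:
# each prompt receives one code on the knowledge dimension and one on
# the cognitive process dimension.
KNOWLEDGE_TYPES = ("factual", "conceptual", "procedural", "metacognitive")
COGNITIVE_PROCESSES = ("remembering", "understanding", "applying",
                       "analyzing", "evaluating", "creating")

def code_prompt(text: str, knowledge: str, process: str) -> dict:
    """Record a prompt's codes, validating them against the taxonomy."""
    assert knowledge in KNOWLEDGE_TYPES and process in COGNITIVE_PROCESSES
    return {"prompt": text, "knowledge": knowledge, "process": process}

# Hypothetical coding of a student prompt reported in Section 4.4.
example = code_prompt(
    "Provide an example about normalization in database design.",
    knowledge="conceptual",
    process="understanding",
)
print(example)
```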
Bloom's taxonomy was developed in and applied to traditional educational pedagogy. As information and communication technology, such as GenAI, increasingly penetrates the classroom, students' learning processes will be affected by their experience with using GenAI for learning. Thus, through the integration of the two theories discussed above, this study seeks to enhance our understanding of the education landscape transformed by the new AI technology.
3. RESEARCH METHODS
This research is exploratory in nature, as the research topic of ChatGPT is new and under-investigated in the higher education field. An exploratory study is "a broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding" (Stebbins, 2001, p. 3). It typically does not employ as rigorous a methodology as is often used in conclusive studies (Nargundkar, 2008) but allows for flexibility and adaptability (Saunders et al., 2012).
To address the research questions, we designed a ChatGPT learning activity flow based on Kolb's Experiential Learning Model and conducted an online survey asking students about their perceptions of and experience with ChatGPT. Guided by the revised Bloom's taxonomy, we used qualitative methods to code the knowledge types and cognitive processes of the ChatGPT prompts created by the instructors and the students. To examine RQ1, i.e., the dependence of student-created ChatGPT prompts on those created by the instructors, we performed quantitative analysis using contingency tables and Fisher's Exact Tests (Agresti, 1992; Healey, 2021). To answer RQ2 and RQ3, we conducted thematic analysis (Braun & Clarke, 2006) to identify themes in student narratives while considering students' prior experience with ChatGPT.
3.1 Research Site and Participants
Students from an urban public university in the United States participated in the study. The university is known as a minority-serving institution, with 69.4% of students being Hispanic or Latino, 10.9% Black or African American, 5.8% White, and 7.5% Asian or Pacific Islander (2022-2023 university data).
The researchers created a student survey with 28 multiple-choice and open-ended questions hosted on a university-licensed online survey platform. Key survey questions included "Prior to the class where you received this survey, have you used ChatGPT for any task?" "If yes, what tasks did you use ChatGPT for? Please provide an example and explain." "How many times did you ask the ChatGPT to generate a response to the FIRST [i.e., the instructor-provided] question?" "Did the ChatGPT-generated response to the FIRST question meet your expectations? Please explain." "How many times did you ask the ChatGPT to generate a response to the SECOND [i.e., student-generated] question?" "Did the ChatGPT-generated response to the SECOND question meet your expectations? Please explain." The survey also included questions on student demographics such as age, gender, and race or ethnicity. The survey was approved by the Institutional Review Board (IRB) of the university.
The researchers invited instructors in the business college to participate in the study by adapting and implementing a ChatGPT learning activity in their courses and disseminating the survey to their students. Four instructors voluntarily agreed to participate and distributed the survey to students in the six courses they were teaching. The six courses cover multiple disciplines the business college offers: information systems, business communication, operations management, criminal justice, and public administration. They also represent three modalities: traditional on-campus, hybrid, and asynchronous online. Student participation in the study was voluntary and anonymous. The survey was distributed by the instructors to their students during the last two weeks of classes ending in May, June, or July 2023. After two email reminders, a total of 70 students (with an overall response rate of 40.2%) responded to the survey. Table 1 summarizes the course level, modality, and responses of each participating class.
Among the 70 participants, 53 indicated that they had no prior experience with ChatGPT. The 17 students who had some experience with ChatGPT mentioned that they had used the GenAI tool in another class (n=8), for work (n=1), during job applications (n=4), or for other personal interests (n=4).
Most student participants came from minority ethnic backgrounds. Also, 60% were first-generation college students (FGCS), and 81.5% were employed full-time or part-time. Since the six classes participating in the study were higher-level (300-level or above) university courses, most students were at the junior level or above, and four-fifths were age 22 or older. In general, the sample statistics are consistent with the university's student demographics in terms of ethnicity, Pell-grant eligibility, employment status, and gender, while having a higher representation of FGCS and graduate students. This is a convenience sample. Table 2 presents the demographic information for the students in our sample.
3.2 Procedures
Based on Kolb's Experiential Learning Model, we designed a ChatGPT learning activity flow that corresponded to the four continuous learning steps. The objective was to engage students with the interactive features of ChatGPT and use it as a tool to facilitate experiential learning of the specific course contents.

Step 1, Experiencing: The instructor asked students to read the article titled "What Is ChatGPT? What to Know About the AI Chatbot" (Hao, 2023), which gave a basic introduction to the AI tool. Students were also instructed to create a free ChatGPT account (powered by GPT-3.5) on the OpenAI website. Next, we asked the instructors to provide a course-related prompt (the FIRST question) that aligned with the learning objectives of their specific courses. The instructors then asked students to use ChatGPT to generate an output to this instructor-created prompt. The instructors indicated that students could click on "Regenerate response" to get another answer if they disliked the previous one.

Step 2, Reflecting: Students were invited to participate in an anonymous survey created by the researchers by answering the survey question "Did the ChatGPT-generated response to the FIRST question meet your expectations? Please explain."

Step 3, Thinking: The instructor asked students to create a course-related prompt themselves. The instructor also provided two sample prompts for students. Students then created their prompt (the SECOND question).

Step 4, Acting: Students entered the second prompt into ChatGPT for a response and then answered the survey question "Did the ChatGPT-generated response to the SECOND question meet your expectations? Please explain."

Figure 1 illustrates the ChatGPT learning activity flow.
As depicted in Figure 1, with the guidance of the instructors, students completed the four steps of experiential learning. Steps 1 and 3 are observed and analyzed to address the first research question; Steps 2 and 4 are observed and analyzed to address the second and third research questions.
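For readers who prefer a programmatic view of the two-prompt interaction, the following minimal sketch mirrors it using OpenAI's Python client. The study itself used the free ChatGPT web interface (GPT-3.5), not the API, so the client usage, model name, and the particular prompts below are illustrative assumptions, not the study's materials.

```python
# A hypothetical programmatic analogue of the two-prompt activity flow.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask(prompt: str) -> str:
    """Submit one prompt, as a student would in the chat window."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the free GPT-3.5 tier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1 (Experiencing): an instructor-provided FIRST prompt.
print(ask("Identify the best location for opening a new retail store downtown."))

# Step 4 (Acting): a student-created SECOND prompt.
print(ask("Provide an example about normalization in database design."))
```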
3.3 Data Coding
Guided by the revised Bloom's taxonomy (Anderson et al., 2001), two researchers first coded the instructor-provided and student-created ChatGPT prompts in terms of knowledge types and cognitive processes. Each researcher coded all responses independently. The inter-rater reliability, measured by percentage agreement, was high for the knowledge types reflected in the two ChatGPT questions (94% and 100%, respectively) but relatively low for the cognitive processes related to them (67% and 83%, respectively). We then discussed the coding and resolved the disagreements.
In addition, following the thematic analysis procedure described by Braun and Clarke (2006), we coded student responses to the two open-ended questions asking whether the ChatGPT-generated responses met their expectations. We started with open coding to identify an initial pool of codes, followed by grouping codes into higher-order themes based on commonalities among first-order codes. The coding was done iteratively: we screened all collated extracts for each theme to refine and revise the themes as needed. Finally, we checked for any missing codes, organized subgroup codes in a hierarchical structure, and finalized the themes. We coded the data independently by following the coding scheme. The inter-rater reliability of the coding results was satisfactory, with Cohen's Kappa values of 0.72 and 0.80 for student feedback on the two ChatGPT-generated responses, respectively. We discussed and resolved the coding disagreements and reached a consensus on all coding. Table 3 provides the coding of the six instructor-created prompts for ChatGPT (one prompt per class).
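As a minimal sketch of the reliability checks described above, the snippet below computes percentage agreement and Cohen's Kappa for two independent coders using scikit-learn. The labels are hypothetical stand-ins for theme codes, not the study's data.

```python
# Percentage agreement and Cohen's kappa for two independent coders.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["positive", "positive", "evaluates", "concern", "positive", "concern"]
coder_2 = ["positive", "positive", "evaluates", "positive", "positive", "concern"]

# Raw percentage agreement: the share of items coded identically.
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)

# Cohen's kappa adjusts raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_1, coder_2)

print(f"agreement = {agreement:.2f}, kappa = {kappa:.2f}")
```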
4. FINDINGS
The analysis shows that under the guidance of the instructors, and through the four steps of experiential learning, students learned to use ChatGPT to explore questions of various knowledge types and different cognitive processes. The results of the quantitative analysis suggest that the knowledge types and cognitive processes in the student-created ChatGPT prompts depend on the first practice prompt provided by their instructor, controlled by students' prior ChatGPT experience, which answers RQ1. Moreover, the findings of the qualitative analysis reveal that students are mostly satisfied with ChatGPT responses in terms of accuracy, timeliness, clarity, depth, and the organization of the answers, while a few students raised concerns about ChatGPT-generated responses, addressing RQ2. Finally, students' satisfaction with ChatGPT does not seem to be substantially affected by their prior ChatGPT experience, shedding light on RQ3. Sections 4.1 and 4.2 below report the findings related to RQ1, while Sections 4.3 and 4.4 present findings associated with RQ2 and RQ3, respectively.
4.1 Dimensions of the ChatGPT Prompts
Comparing the ChatGPT prompts generated by the instructors to those generated by students, we found that instructors and students emphasize different knowledge types and cognitive processes. Table 4 presents the distribution of the instructor-created and student-created ChatGPT prompts.
As shown in Table 4, 37 students (55.2%) practiced an instructor-provided ChatGPT prompt that involved metacognitive knowledge, followed by 22 (32.8%) for procedural knowledge and eight (11.9%) for conceptual knowledge. However, when students were asked to create their own ChatGPT prompts, the distribution of the prompts by knowledge type shows a different pattern. Unlike their instructors, only nine (13.6%) of the student-generated prompts involve metacognitive knowledge. The majority (65.2%) of the student-generated prompts require conceptual knowledge, followed by procedural knowledge (21.2%). Moreover, students engaged in the cognitive processes differently. The instructor-provided prompts demonstrate all five cognitive processes more evenly: 19 (28.4%) for applying, 17 (25.4%) for evaluating, 14 (20.9%) for analyzing, 11 (16.4%) for understanding, and six (8.9%) for creating. However, most of the student-created ChatGPT prompts involve the lower-level cognitive processes such as understanding (60.6%) and applying (24.2%).
4.2 Dependence of Student-Created ChatGPT Prompts on Students' Prior Learning
Our analysis shows that the knowledge types and cognitive processes of the student-created ChatGPT prompts depend on those of the first prompt, the one provided by the instructor for students to practice, especially among students who had no prior experience with ChatGPT. Tables 5 and 6 are contingency tables displaying the frequency distributions of knowledge types and cognitive processes of the two ChatGPT questions, controlled by students' prior experience with ChatGPT.
Fisher's Exact Test is used to determine whether there are nonrandom associations between two categorical variables (Agresti, 1992). We chose this test instead of a chi-squared test because some cells have fewer than five observations, which violates the assumption of the chi-squared test. Another benefit of Fisher's Exact Test is that it generates more conservative results than does the chi-squared test.
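As a minimal sketch of this test, the snippet below runs Fisher's Exact Test on a simplified 2x2 collapse of a contingency table (e.g., metacognitive vs. all other knowledge types for the instructor prompt and the student prompt). Note that SciPy's exact test supports only 2x2 tables; the study's larger R x C tables would typically be tested with R's fisher.test or a Monte Carlo approximation. The counts below are illustrative, not the study's data.

```python
# Fisher's Exact Test on a hypothetical 2x2 contingency table.
from scipy.stats import fisher_exact

#                     student prompt: metacognitive | other
table = [[7, 15],   # instructor prompt: metacognitive
         [2, 22]]   # instructor prompt: other

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```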
As shown in Table 5, among the 46 students who had never used ChatGPT, the result of Fisher's Exact Test (p = 0.028) indicates a statistically significant relationship between the knowledge types of the two prompts at the 0.05 level. In particular, if the first prompt provided by the instructor is about metacognitive knowledge, compared to other knowledge types, students are more likely to propose a ChatGPT question involving conceptual or metacognitive knowledge. However, the relationship is not statistically significant among the 17 students who had some experience with ChatGPT before taking the course (Fisher's Exact Test p = 0.317).
Similarly, we analyzed whether the relationship of cognitive processes between the two prompts differed by students' prior ChatGPT experience. As shown in Table 6, among the 46 students who had never used ChatGPT, the result of Fisher's Exact Test (p = 0.002) indicates a highly statistically significant association between the cognitive processes related to the two prompts at the 0.01 level. Specifically, if the first prompt involves the analyzing level of cognition, compared to other levels, students are more likely to create a ChatGPT prompt at the analyzing or higher cognitive level. Interestingly, if the first prompt involves the evaluating level of cognition, compared to other levels, students are more likely to create a ChatGPT prompt at the understanding cognitive level. However, using the 0.05 significance level, the relationship is not statistically significant among the 17 students who had some experience with ChatGPT (Fisher's Exact Test p = 0.087).
4.3 Student Perception of ChatGPT-Generated Information
To understand student perceptions of ChatGPT-generated information related to their learning, we analyzed the qualitative data from student responses to the two open-ended survey questions asking whether ChatGPT-generated information for the two prompts met their expectations. From the 136 responses (68 responses for each prompt), our analysis revealed three major themes, each corresponding to different subcategories within the cognitive process. These findings are summarized in Table 7 and elaborated in this section.
4.3.1 Theme 1: Positive Experience with Using ChatGPT for Learning. Student narratives reflect the overwhelmingly positive responses from different perspectives, including (i) being impressed by the speed and content of ChatGPT-generated responses (i.e., fast, accurate, comprehensive, detailed, thorough); (ii) enjoying its human-like interaction; and (iii) perceiving ChatGPT as helpful for a learning task. These responses reflected the interpreting or summarizing sub-level of the cognitive process (within the understanding level).
First, the students elaborated on how they used ChatGPT to assist their learning. One student wrote: "I would say it did meet my expectations because in the beginning, it stated what needs to be calculated in order to answer the question. It also showed the work and broke it down" (ID 65; female, Hispanic, senior, FGCS). Moreover, students appreciated the relevance of information generated by ChatGPT: "ChatGPT's answer was very much like the material we studied in class. The answer was detailed and very thorough. I feel like I know the material a lot better now" (ID 84; male, senior, employed part-time, White or Caucasian, Pell-eligible, non-FGCS). Other study participants offered similar assessments of ChatGPT-generated content, including information accuracy, completeness, and relevance. These dimensions represent the key elements of information quality (Lee et al., 2002). In this case, students were impressed by the quality of ChatGPT-generated information.
In addition, some students enjoyed the friendly, human-like interaction with the AI tool. As one student explained: "ChatGPT answered my question exactly like a normal person which is pretty crazy to me but I can see why it is such a huge tool that needs to be learned. It gave numerous things to think about when it comes to preparing for an interview. It did not just give me a half answer. It seems like it truly is trying to help" (ID 119; male, senior, employed full-time, Black or African American, Pell-eligible, FGCS).
As a result of the positive experience, many participants clearly indicated that they would continue to use ChatGPT to improve learning (e.g., writing). For example, some students asked ChatGPT questions about writing resumes and preparing elevator pitches for job interviews and responded: "Yes, I received plenty of feedback such as avoiding clichés, highlighting transferable skills, organizing my writing and being specific as well as providing evidence. This was very helpful and will be adjusted to my resume" (ID 107; male, senior, employed full-time, Hispanic, Pell-eligible, FGCS).
4.3.2 Theme 2: Knowledgeable About Evaluating ChatGPT Responses and Conducting Further Inquiries. Students showed competency in evaluating the first responses generated by ChatGPT and made further inquiries when they felt the first response did not provide sufficient details. For the instructor-created first question, 47 (67.1%) students asked ChatGPT once, 19 (27.1%) asked twice, and four (5.7%) asked three times. For the second prompt, which was created by the students, the number of students who entered the prompt once increased to 55 (78.6%), while fewer students (14, or 20%) entered it twice, and only one student (1.4%) entered it three times.
The accuracy and correctness of ChatGPT responses required assessment by its users. Students were expected to be knowledgeable about the subject or topic underlying their ChatGPT questions. When they believed that the first answer provided by ChatGPT did not meet their expectations, students asked ChatGPT to regenerate a response. However, even when the first answer was satisfactory, some students proceeded with the request to regenerate a response. The different motivations for further attempts were manifested in the student remarks, which reflected the exemplifying, classifying, or comparing sub-level of the cognitive process (within the understanding level). For example, one student stated: "The initial response I received from ChatGPT met my expectations, however, I wanted to see what other response I would receive if I generated another response. The second response I received was similar to the first, but the second response provided formulas to solve the productivity rate as opposed to just the calculation" (ID 75; female, senior, employed part-time, White or Caucasian, Pell-eligible, FGCS).
4.3.3 Theme 3: Raised Concerns About ChatGPT Responses. In some cases, mostly in the domain of metacognitive knowledge, students identified the weak portion of the ChatGPT output, and their responses demonstrated the inferring or explaining sub-level of the cognitive process (within the understanding level). For example, one student mentioned: "It [ChatGPT] did meet most of my expectations because it did give a detailed and clear response. However, it failed in the data analysis part of the question because it seemed to produce the same exact answer for all questions; it just changed the language. I think this is dangerous considering that you cannot exactly generate quality data or analysis" (ID 77; female, junior, employed part-time, Black or African American, Pell-eligible, non-FGCS).
In this example, the ChatGPT prompt involved creating metacognitive knowledge. The student entered the research question "How has the COVID-19 pandemic affected mental health among college students?" and asked ChatGPT to create a research design that uses survey methods, including creating plans for selecting a research method, identifying sampling strategies, collecting data, and analyzing data. The prompt was complex and involved a higher-level knowledge type and cognitive process, which may partially explain why the student felt the ChatGPT answer did not address all parts of the question adequately.
This student's experience suggests that the knowledge type and cognitive process that a prompt invokes in ChatGPT may affect the relevance of the answers it generates. For the prompts involving the understanding and application of conceptual or procedural knowledge, our student participants showed their satisfaction with the ChatGPT answers. However, for the prompts involving a higher-level knowledge and cognitive process, such as creating metacognitive knowledge, the participants offered some cautions as shown in the above example.
4.4 Dependence of Students' Perceptions of Current Experiential Learning on Their Prior ChatGPT Experience
We compared the frequency of the three themes revealed in the narratives of the two groups of students (with or without prior ChatGPT experience) but did not find substantial differences. First, most students in both groups responded positively to their ChatGPT interactions (Theme 1), at 68% and 67%, respectively. However, students with prior experience were less likely to demonstrate their knowledge in evaluating ChatGPT responses (Theme 2) than those with no prior experience, at 21% and 25%, respectively. In the following remark, a student from an IS course created a database-related prompt and shared his assessment of the quality of the ChatGPT output. This student had no prior experience with ChatGPT: "The second question that I asked Chat GPT was 'Provide an example about normalization [in database design].' The response that I received did meet my expectations. I had asked for a second response only to check if the system was sure about the first response that it gave me. Although ChatGPT has been able to provide good responses it is important to double check the responses" (ID 64; male, senior, employed part-time, Hispanic or Latinx, Pell-eligible, FGCS).
Moreover, students with prior experience were more likely to raise concerns about ChatGPT responses (Theme 3) than those without prior experience, at 12% and 9%, respectively. For example, a student who took the IS course on database systems indicated that he had experience using ChatGPT to debug his Python scripts. When practicing with the ChatGPT prompts in the study, the student raised the concern that "ChatGPT provided only the information needed to answer the question and it assumes you already know the material and gives you the answer." This concern raises the warning that ChatGPT-generated output may not be tailored to the knowledge level of a user. The student thus offered a few tips to his fellow schoolmates on how to use the GenAI tool effectively: "Instead of asking questions to find the answer, ask the machine how to do the problem, which, most of the time it already explains it. I do not use ChatGPT unless it's confirmed we can use outside sources. But daily for my learning process of cyber security, hacking, and coding scripts" (ID 49; male, junior, employed part-time, White or Caucasian, Pell-eligible, FGCS).
5. DISCUSSION
5.1 Theoretical Contributions
This exploratory study contributes to the theory of experiential learning in two aspects. First, using AI tools such as ChatGPT makes experiential learning more individualized, interactive, and accessible compared to traditional experiential learning activities such as case studies, field experiences, and simulations. With the provision of quick feedback in natural language, GenAI technology speeds up the process of experiential learning compared to traditional approaches. Second, the careful design of the ChatGPT learning activity flow highlights the importance of the instructor as a facilitator in enhancing AI-assisted experiential learning. In a traditional experiential learning environment, a facilitator is not essential to experiential learning; rather, the essential mechanism is the learner's reflection on experiences using analytic skills (Rodrigues, 2004). However, our study shows that in the AI-assisted experiential learning process, not only does clear step-by-step instruction matter, but the first training or practice activity provided to students also affects their learning.
Furthermore, the study contributes to Bloom's taxonomy, as it shows that using ChatGPT to gain metacognitive knowledge at the analyzing level of the cognitive process seems to bring the most benefits to students in terms of expanding the scope of their learning. In addition, the study deepens our comprehension of the cognitive process. According to the revised Bloom's taxonomy, the understanding level of the cognitive process is divided into several separate categories such as interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining. The three themes discovered from student feedback about ChatGPT answers represent different subcategories of the understanding level.
5.2 Practical Implications
The study has practical implications for enhancing the effectiveness and efficiency of teaching and learning in the context of GenAI. The implications for educators are two-fold. First, as evidenced in the four stages of experiential learning, it is important for a properly designed learning activity flow to include clear instructions and reflection activities. Second, as GenAI is increasingly used by students, it is time to revisit and update the assessment of student performance, consistent with Van Slyke et al.'s (2023) recommendation to contextualize course assessments and activities. In fact, to motivate student learning in GenAI-integrated activities, Bowen and Watson (2024) suggested treating AI-generated work as grade C-level work and expecting students to do better than that.
For IS education, our study suggests that using ChatGPT enhances certain learning activities, such as code debugging in a programming course, and that the benefits of ChatGPT may depend on the knowledge level of the students. As reflected in the remarks of the two IS students (see Section 4.4), ChatGPT responses were considered more suitable for students with adequate foundational knowledge in the database domain. This finding is echoed by Denny et al. (2024), who questioned the appropriateness of ChatGPT for beginners in computer programming courses. As they explained, novices usually start by learning simple programming concepts and patterns, gradually building their skills, but much of the vast quantity of code in the training data for the AI model was written by experienced developers, making AI-generated code sometimes too advanced or complex for novices to understand and modify. In this regard, when integrating ChatGPT into IS courses, instructors should consider the course level and the student knowledge base.
Based on our study findings and informed by Susarla et al. (2023), we provide the following general guidance for students to effectively use GenAI tools: (i) students must develop the baseline knowledge necessary to create a meaningful prompt for retrieving relevant responses from GenAI on a study topic; (ii) students should consider the face validity of the GenAI-generated output, e.g., are the outputs consistent with the student's understanding of the topic as described in the textbook? and (iii) students should use GenAI according to university and course policies. In a nutshell, learning is the mission of students; thus, they need to undertake learning activities responsibly when using GenAI (Deng & Joshi, 2024).
6. CONCLUSION
Analyzing survey data from a sample of 70 business college students in a U.S. university about their experiences with and reflections on their use of ChatGPT, this study addressed three research questions: (1) Are student prompts to ChatGPT influenced by instructor guidance during the learning process? (2) How does ChatGPT-generated information meet or fail to meet student expectations? (3) How does students' prior experience with ChatGPT relate to their expectations in this learning process? The findings for RQ1 demonstrate that the knowledge types of student-created ChatGPT prompts and the associated cognitive processes depend on the first practice guided by their instructors, especially among students without prior ChatGPT experience. In addition, the findings for RQ2 reveal that students are generally satisfied with the quality of the ChatGPT-generated information and find the tool helpful in their learning tasks. However, some students raised concerns about ChatGPT output, especially when a prompt involved a higher level of learning such as creating metacognitive knowledge. Finally, the findings for RQ3 suggest that students' prior experience with ChatGPT does not substantially affect their perceptions of their current ChatGPT experiences.
The study has limitations. First, it uses a convenience sample, not a representative sample, of the study population, which could limit the generalizability of the findings. Second, the sample size is insufficient for advanced quantitative analysis of the relationships between multiple variables. Finally, the study is based on student perceptions and self-reported learning outcomes. Future research is suggested to use a large random sample and objective measures of student learning outcomes. It should also examine instructors' perspectives on using ChatGPT in higher education and compare them with students' perspectives. In addition, future research on experiential learning with GenAI should specify the learning objectives for each task and explore other factors that may affect student experiences in the learning process.
Informed by our findings and adapted from Van Dis et al. (2023), we suggest a set of questions for future research: (i) What are the different ways in which GenAI tools can assist educational activities? (ii) What skills, knowledge, and abilities should students develop to effectively use GenAI in their learning? (iii) How should educators advance in their professional development to guide students in the era of GenAI? (iv) What policies should institutions develop and adopt to encourage responsible use of GenAI and ensure integrity in higher education?
GenAI's impact on the future of work and the workforce is going to be profound. Policymakers, researchers, educators, and technology experts need to work together and discuss how these evolving tools can be used safely and constructively to improve education. With the increasing penetration of AI across industries, higher education institutions must be ready to produce a workforce that meets the demands of the changing nature of work.
7. ACKNOWLEDGEMENTS
We appreciate the guidance by the editors and the constructive comments provided by the anonymous reviewers. We thank our research compliance officer Ms. Judith Aguirre for her assistance and appreciate our colleagues, Drs. Roger Qiyuan Jin and Xun (Peter) Xu, for their kind support in the data collection process. The second author would like to acknowledge the partial funding support from the grant awarded by the U.S. National Telecommunications and Information Administration (NTIA) for the project titled "Closing the Divide With CSUDH Workforce Integration Networks (CSUDH WIN)" (Grant Number: 06-09-C13005).
8. REFERENCES
Agresti, A. (1992). A Survey of Exact Inference for Contingency Tables. Statistical Science, 7(1), 131-153. https://doi.org/10.1214/ss/1177011454
Anderson, L., Krathwohl, D., & Bloom, B. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
Arif, T. B., Munaf, U., & Ul-Haque, I. (2023). The Future of Medical Education and Research: Is ChatGPT a Blessing or Blight in Disguise? Medical Education Online, 28(1), 2181052. https://doi.org/10.1080/10872981.2023.2181052
Asiri, A. (n.d.). Experiential Learning Theory. https://opentext.wsu.edu/theoreticalmodelsforteachingandresearch/chapter/experiential-learning-theory/
Baidoo-Anu, D., & Owusu Ansah, L. (2023, January 25). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. SSRN. https://ssrn.com/abstract=4337484
Black, S. (2023, September 14). A Good Time to Be Working on AI: An Interview With Professor Sue Black. Digital Science. https://www.digital-science.com/tldr/article/a-good-time-to-be-working-on-ai-an-interview-with-professor-sue-black/
Bowen, J. A., & Watson, C. E. (2024). Teaching With AI: A Practical Guide to a New Era of Human Learning. Baltimore, MD: Johns Hopkins University Press. https://doi.org/10.56021/9781421449227
Braun, V., & Clarke, V. (2006). Using Thematic Analysis in Psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Chang, C. Y., Hwang, G. J., & Gau, M. L. (2022). Promoting Students' Learning Achievement and Self-Efficacy: A Mobile Chatbot Approach for Nursing Training. British Journal of Educational Technology, 53(1), 171-188. http://doi.org/10.1111/bjet.13158
Chen, L. (2022). Current and Future Artificial Intelligence (AI) Curriculum in Business School: A Text Mining Analysis. Journal of Information Systems Education, 33(4), 416-426.
Deng, X., & Chi, L. (2015). Knowledge Boundary Spanning and Productivity in Information Systems Support Community. Decision Support Systems, 80, 14-26. https://doi.org/10.1016/j.dss.2015.09.005
Deng, X., & Joshi, K. D. (2024). Promoting Ethical Use of Generative AI in Education. The Data Base for Advances in Information Systems, 55(3), 6-11. https://doi.org/10.1145/3685235.3685237
Denny, P., Kumar, V., & Giacaman, N. (2023). Conversing With Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. Proceedings of the 54th ACM Technical Symposium on Computer Science Education (pp. 1136-1142). New York, NY: ACM. https://doi.org/10.1145/3545945.3569823
Denny, P., Prather, J., Becker, B. A., Finnie-Ansley, J., Hellas, A., Leinonen, J., Luxton-Reilly, A., Reeves, B. N., Santos, E. A., & Sarsa, S. (2024). Computing Education in the Era of Generative AI. Communications of the ACM, 67(2), 56-67. https://doi.org/10.1145/3624720
Firat, M. (2023, January 12). How ChatGPT Can Transform Autodidactic Experiences and Open Education? https://doi.org/10.31219/osf.io/9ge8m
Forehand, M. (2010). Bloom's Taxonomy. In M. Orey (Ed.), Emerging Perspectives on Learning, Teaching, and Technology. https://textbookequity.org/Textbooks/Orey_Emerging_Perspectives_Learning.pdf
Hao, K. (2023, March 22). What Is ChatGPT? What to Know About the AI Chatbot - 2nd Update. The Wall Street Journal. https://www.wsj.com/articles/chatgpt-ai-chatbot-app-explained-11675865177
Healey, J. F. (2021). Statistics: A Tool for Social Research and Data Analysis (11th ed.). Boston, MA: Cengage.
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. Upper Saddle River, NJ: Prentice-Hall.
Kolb, A. Y., & Kolb, D. A. (2017). Experiential Learning Theory as a Guide for Experiential Educators in Higher Education. Experiential Learning & Teaching in Higher Education, 1(1), 7-44. https://doi.org/10.46787/elthe.v1i1.3362
Lai, C.-H., Yang, J.-C., Chen, F.-C., Ho, C.-W., & Chan, T.-W. (2007). Affordances of Mobile Technologies for Experiential Learning: The Interplay of Technology and Pedagogical Practices. Journal of Computer Assisted Learning, 23(4), 326-337. https://doi.org/10.1111/j.1365-2729.2007.00237.x
Lee, Y. W., Strong, D. M., Kahn, B. K., & Wang, R. Y. (2002). AIMQ: A Methodology for Information Quality Assessment. Information & Management, 40(2), 133-146. https://doi.org/10.1016/S0378-7206(02)00043-5
Liu, M., Zhang, L. J., & Biebricher, C. (2024). Investigating Students' Cognitive Processes in AI-Assisted Digital Multimodal Composing and Traditional Writing. Computers & Education, 211, 104977. https://doi.org/10.1016/j.compedu.2023.104977
Memmert, L., Tavanapour, N., & Bittner, E. (2023). Learning by Doing: Educators' Perspective on an Illustrative Tool for AI-Generated Scaffolding for Students in Conceptualizing Design Science Research Studies. Journal of Information Systems Education, 34(3), 279-292.
Milmo, D. (2023, February 2). ChatGPT Attracts 100 Million Users: OpenAI's Fastest-Growing App. The Guardian. https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app
Nargundkar, R. (2008). Marketing Research: Text and Cases (3rd ed.). New Delhi: Tata McGraw Hill.
Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI Collaboration Patterns in AI-Assisted Academic Writing. Studies in Higher Education, 49(5), 847-864. https://doi.org/10.1080/03075079.2024.2323593
Nithithanatchinnapat, B., Maurer, J., Deng, X., & Joshi, K. D. (2024). Future Business Workforce: Crafting a Generative AI-Centric Curriculum Today for Tomorrow's Business Education. The Data Base for Advances in Information Systems, 55(1), 6-11. https://doi.org/10.1145/3645057.3645059
O'Brien, M. A., Rogers, W. A., & Fisk, A. D. (2012). Understanding Age and Technology Experience Differences in Use of Prior Knowledge for Everyday Technology Interactions. ACM Transactions on Accessible Computing, 4(2), 1-27. https://doi.org/10.1145/2141943.2141947
OpenAI. (2022, November 30). Introducing ChatGPT. https://openai.com/blog/chatgpt
Rodrigues, C. A. (2004). The Importance Level of Ten Teaching/Learning Techniques as Rated by University Business Students and Instructors. Journal of Management Development, 23(2), 169-182. https://doi.org/10.1108/02621710410517256
Saunders, M., Lewis, P., & Thornhill, A. (2012). Research Methods for Business Students (6th ed.). Pearson Education Limited.
Stebbins, R. (2001). Exploratory Research in the Social Sciences. Thousand Oaks, CA: SAGE. https://doi.org/10.4135/9781412984249
Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems. Information Systems Research, 34(2), 399-408. https://doi.org/10.1287/isre.2023.ed.v34.n2
Terwiesch, C. (2023). Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course. University of Pennsylvania.
Van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five Priorities for Research. Nature, 614, 224-226. https://doi.org/10.1038/d41586-023-00288-7
Van Slyke, C., Johnson, R. D., & Sarabadani, J. (2023). Generative Artificial Intelligence in Information Systems Education: Challenges, Consequences, and Responses. Communications of the Association for Information Systems, 53, 1-21. https://doi.org/10.17705/1CAIS.05301
Yilmaz, R., & Yilmaz, F. G. K. (2023). The Effect of Generative Artificial Intelligence (AI)-Based Tool Use on Students' Computational Thinking Skills, Programming Self-efficacy and Motivation. Computers and Education: Artificial Intelligence, 4, 100147. https://doi.org/10.1016/j.caeai.2023.100147