Abstract
Concerns about the spread and adoption of misinformation abound, and academic librarians have played a part in trying to stem the tide through information literacy instruction. However, teaching students how to evaluate sources can be complicated: teaching fact-checking skills may be insufficient if it increases students' overall cynicism about information ecosystems. This study explores how teaching fact-checking and lateral reading skills, along with instruction about "bias filters," can help to reduce the cynicism of first-year writing students while also increasing their misinformation detection skills. Results are mixed, but teaching about the information creation process and "bias filters" is especially promising. The authors also recommend faculty-librarian collaborations as an effective strategy for teaching students how to evaluate sources.
Keywords: information literacy, composition studies, misinformation, source evaluation, bias, higher education, instruction
Since the rise of national attention on "fake news" and misinformation, many, both within and outside of librarianship, have declared that library instruction and information literacy are antidotes to the onslaught of inaccurate information (Becker, 2016; De Paor & Heravi, 2020; Gibson & Jacobson, 2018). As students find support for their assignments on the mostly unchecked and ungoverned internet, it is up to them to evaluate each source they use for quality and accuracy. Academic librarians have long played a role in helping students evaluate information sources, often in partnership with faculty (Burton & Chadwick, 2000). However, when teaching students about the nuances of source authority, it can be challenging to describe the flaws in various sources (for example, the bias that still exists in academic sources) without increasing students' mistrust of institutions like news media and science.
When students are overly cynical about information sources, they can fall into conspiracy theory thinking (Harford, 2021), which encourages the idea that powerful groups of individuals, often associated with conventional authorities, have carried out significant events secretly (Brotherton & French, 2014). Once people adopt this way of thinking, it can be very challenging to correct. More research is needed to explore library instruction interventions and their ability to help students avoid or better evaluate misinformation while avoiding cynicism, and our study responds to this current gap in scholarship.
The purpose of this research is to explore the effectiveness of two methods of teaching source evaluation in a college writing classroom and whether these methods impact student responses to misinformation. This study took place in a College Writing II (ENG 102) class, the second required writing course for all college freshmen at our institution. The course focuses on teaching students research literacy skills. The first source evaluation method explored in this study is library instruction related to lateral reading and fact-checking techniques that can be used to check for misinformation. The second teaching method is the use of "bias filters," a term and concept introduced by the researchers, as well as an exploration of Wikipedia. The purpose of this source evaluation instruction is to give students the confidence to trust authoritative sources of information balanced with appropriate levels of skepticism. The goal of our study was to determine whether these interventions are successful in improving student identification of misinformation and attitudes about source authority and, therefore, should be more widely adopted in library instruction. The research questions for this study are as follows:
RQ1: Does a library information literacy instructional intervention centered on lateral reading help students in a College Writing II class better evaluate sources?
RQ2: Does a library information literacy instructional intervention centered on the exploration of "bias filters" using Wikipedia as an example decrease student cynicism about authoritative information sources?
We argue that these results are useful to others who teach information literacy concepts, such as first-year writing and composition instructors.
Literature Review
In a time of increasing concern about the spread of misinformation, especially with the proliferation of generative AI tools such as ChatGPT and Microsoft Copilot, teaching students source evaluation skills applicable in various learning contexts is critical. However, while the problem of misinformation is legitimate and concerning, educators also need to take care not to encourage student cynicism and general disengagement with authoritative information sources.
Today, some Americans subscribe to conspiracy theories that many find implausible or even harmful. These range from beliefs in a flat Earth to fears of microchip implantation via COVID-19 vaccines and even claims of government officials and celebrities consuming children's blood for immortality (Beene & Greer, 2021). While dismissing these believers as simply gullible may be tempting, scholars suggest a more complex underlying issue: a fundamental distrust in truth-establishing institutions (Harford, 2021). Conspiracy theorists often reject mainstream academic research, government sources, and media outlets, viewing them as a monolithic, unreliable entity (Fister, 2018). They tend to focus on gaps in knowledge and uncertainties, prioritizing "negative evidence" over established facts, such as focusing on the lack of stars appearing in the background of photographs of the first moon landing, rather than acknowledging the abundance of evidence for the moon landing, like the 842 pounds of moon rocks that NASA returned to Earth (Brotherton et al., 2013). This skepticism towards well-supported information makes it particularly challenging to change the minds of those deeply entrenched in conspiracy beliefs.
Humans generally find distrust more natural than trust (Berg et al., 1995). While a healthy dose of skepticism is crucial for critical thinking and higher education, excessive skepticism can evolve into a pessimistic worldview (Fister, 2018). This inherent tendency towards doubt can lead people to question even traditionally reliable information sources, including academic research, scientific findings, and reputable news outlets (Toff et al., 2021). For individuals to place faith in authorities, they must acknowledge experts' credibility and be willing to defer to their judgment in personal decision-making (Beene & Greer, 2021; Gibson & Jacobson, 2018). This trust becomes increasingly difficult to maintain in light of past errors or fraudulent behavior within expert communities, external interference from entities with vested interests, and a natural inclination to avoid uncomfortable or unwelcome information (Rynes et al., 2018). Ultimately, the challenge lies in striking a balance between necessary skepticism and the trust required for a well-functioning society.
Any discussion of source evaluation and trust must consider members of marginalized groups who may have good cause to question institutions that systematically oppress them. For these students, a lack of trust may be justified. Discussions about source evaluation and authority should then, whenever possible, acknowledge the systemic injustice in even the most carefully reviewed sources and invite candid conversation about navigating information created by flawed human beings.
In this challenging context, academic librarians and educators recognize how important it is for students to be able to find and evaluate information sources for their accuracy and quality. However, students who are unfamiliar with the rhetorical genre of academic writing may not be aware of community expectations regarding the use of authoritative evidence to support their own rhetorical claims. Additionally, the criteria that make a source authoritative within that context might be mysterious to them (Jastram et al., 2021). When asked to articulate the quality of a source, students will often rely on shallow measures of authority, such as whether the source contains statistics or charts, facts, or opinions, or whether the source has a .org or .com website URL (Angell & Tewell, 2017; Silva et al., 2018). Student source evaluation also tends to weigh the accessibility of sources (whether they are free and easy to access) more heavily than a librarian or researcher might (Burton & Chadwick, 2000; Kim et al., 2011; Rowley et al., 2015). In general, students are motivated to find a good enough source as quickly as possible to fulfill the requirements of the assignment, engaging in what is often called "satisficing" (Connaway et al., 2011; Warwick et al., 2009).
Students may even struggle to identify what kind of source they are looking at; they might categorize any online source as a website, including such varied sources as blog posts, news articles from standards-based news organizations, parody works, or e-book chapters (Jastram et al., 2021). This "container collapse," as described by Buhler et al. (2019), is exacerbated by generative AI tools, which remove even superficial contextual clues that help students determine what kind of source they are evaluating.
Librarians have developed many tools to both assess and teach source evaluation using checklists and rubrics. For example, Daniels (2010) developed a rubric for assessing a student's ability to evaluate the credibility of a source with categories ranging from "Does not address credibility at all" to "Identifies credibility cues, interprets those cues, including how the cues affect their understanding of the information source within the context of the topic being researched" (p. 39). Academic librarians have been teaching students source evaluation skills for decades using checklist tools like the CRAAP test (Blakeslee, 2004; Meriam Library, 2010). Other acronyms such as CARS (Harris, 2020), RADAR (Mandalios, 2013), 6QW (Radom & Gammons, 2014; Lowe et al., 2021), and CCOW (Chyne et al., 2023) have also emerged.
However, these somewhat outdated and overly simplistic checklist approaches (particularly the CRAAP test) have recently come under criticism in terms of their effectiveness in addressing modern examples of poor-quality information (Breakstone et al., 2018; Caulfield, 2018; Fielding, 2019; Lowe et al., 2021; Meola, 2004; Ostenson, 2014; Vamanu & Zak, 2022). Researchers point out that the CRAAP test was developed when the internet was much newer and online texts could be evaluated similarly to print ones (Valenza, 2020). In addition, the CRAAP test, when completed correctly, is comprehensive to the point of risking cognitive overload (Bull, 2021), yet at the same time its criteria are simplistic and binary (Seeber, 2017). Research has shown that the CRAAP test may encourage students to rely on superficial source evaluation criteria, such as the appearance of a website or its URL domain (Lowe et al., 2021). For these reasons, some researchers argue that today's information environment renders the CRAAP test unsuitable for efficient, effective source evaluation.
Developed by Caulfield (2019), the SIFT method represents one proposed alternative to traditional checklist approaches of source evaluation. The SIFT method arguably does a better job of helping students evaluate information in today's online landscape by offering more efficient, straightforward, and applicable strategies in a wide variety of information settings (Bull, 2021). A key component of Caulfield's approach is the use of "lateral reading," which involves leaving the source that is being evaluated and opening new browser tabs to investigate what other internet sources report about the site and its claims (Wineburg & McGrew, 2019). Research has shown that the SIFT method, especially when it incorporates lateral reading, results in more accurate student source evaluation (Bobkowski & Younger, 2020; Breakstone et al., 2021; Brodsky et al., 2021).
One could argue that SIFT helps students decide what level of evaluation is needed based on the information context. However, one limitation of the SIFT method is that it does not explain to students why some sources are more reliable and accurate than others. In addition, as Bull (2021) pointed out, SIFT is a reactionary approach. It does not encourage students to ask what information or sources are missing, what voices or perspectives are unrepresented, or how the information affects them (e.g., what existing beliefs or knowledge are activated by their interaction with this new information).
Although they are difficult to fit neatly into a one-shot instruction session, academic librarians have explored source evaluation approaches that call for more advanced critical thinking skills. For example, librarians use the ACRL frame "Authority Is Constructed and Contextual" to encourage students to consider the ways context and society shape our judgments of source authority, which can be a useful framework for students to consider a source's credibility (Bobkowski & Younger, 2020). However, this approach risks leading students to conclude that no objective criteria exist for evaluating source authority, potentially causing them to rely solely on intuition and personal beliefs when selecting sources (Bluemle, 2018; Rose-Wiles, 2024; Saunders & Budd, 2020). While acknowledging that no source is flawless, it is crucial to emphasize that some sources are backed by more substantial and higher-quality evidence than others, which is a key factor in assessing authority.
Although less commonly used, the ACRL frame "Information Creation as a Process" has considerable potential to serve as a guide for student source evaluation (Albert et al., 2020; Curtis, 2018; Harmer et al., 2017; Scull, 2019). One important potential outcome of students' increased understanding of how information sources are created is a more accurate and generous assignment of credibility to traditionally authoritative sources, such as standards-based news, scholarly research, and government documents. Credibility judgments are important because they are a necessary precursor to trust (Rowley et al., 2015). To foster more sophisticated judgments about source credibility, educators can focus on teaching students how sources gather evidence and what mechanisms are in place to minimize bias. This approach can help students develop a more nuanced understanding of information authority and make more informed decisions when evaluating sources.
Wikipedia represents a specific instructional opportunity that can help students understand how sources create information and, therefore, how to spend trust wisely. Librarians have explored many strategies for incorporating Wikipedia into information literacy instruction, from helping students gain foundational background knowledge that will help them build an argument (Calhoun, 2014; Jastram et al., 2021) to having them author or edit Wikipedia entries (Dawe & Robinson, 2017; Kahili-Heede et al., 2022). However, students frequently claim that Wikipedia is not credible, and even though they often use it, they do not consider it a trustworthy source (Angell & Tewell, 2017). Helping students understand how Wikipedia entries are created and corrected has the potential to help them better decide when its use is appropriate and to demonstrate why the way that information is created has a significant effect on how we should judge its credibility.
Method
The researchers consist of the Performing Arts and Humanities Librarian (Goodsett) and a First Year Writing Instructor (Gagich) at Cleveland State University (CSU). The participants in this IRB-approved study were students enrolled in two sections of Gagich's fall 2022 semester ENG 102 course (N = 47). Students who had already completed the ENG 102 course at CSU were excluded because they were likely to have already experienced college-level library instruction, impacting their survey responses.
Course Context and Undergraduate Student Demographics
ENG 102, or College Writing II, is part of a two-class First-Year Writing Program requirement at our four-year institution. The writing program is part of the university's general education courses. Students are placed into either ENG 100: Intensive College Writing, ENG 101: Introduction to College Writing, or ENG 102: College Writing II based on their SAT/ACT scores or their score on a writing placement survey. Most students enter ENG 102 after passing ENG 100 or ENG 101 with a grade of C or higher. The primary goals of ENG 102 include teaching students information literacy, independent research, and argumentative writing skills (First-Year Writing Program Department of English Cleveland State University Faculty Handbook 2016-2017, 2016).
According to the IPEDS Data Feedback Report 2023, created by the Institute of Education Sciences, CSU's total enrollment in fall 2022 was 14,385. Within this population, 54% identified as White, 15% U.S. nonresident, 14% Black/African American, 7% Hispanic/Latino, 4% Asian, 3% two or more races, and 3% unknown (Institute of Education Sciences, 2023). However, this demographic information does not thoroughly describe each participant's social context, individual culture, and/or history (Smagorinsky, 2008). Students in the two sections of ENG 102 were seeking a variety of degrees: Science (16), Business (14), Liberal Arts and Social Sciences (7), University Studies (4), Urban Affairs (3), Education (2), Nursing (1), and College Credit Plus (1).
Library Information Literacy Instructional Intervention
The library sessions took place in two face-to-face ENG 102 sessions in fall 2022. The rooms were equipped with an instructor computer and projector, and students were required to bring their own laptop or tablet to all sessions that semester. Students who could not or did not want to bring their own laptops or tablets could rent one for free from the university. While Goodsett provided the content for each session, Gagich monitored the room and answered questions. Both class sections of ENG 102 received the same instruction.
In the first session, Goodsett introduced students to the concept of misinformation and showed them some real-world examples. Then, she shared some simple fact-checking skills with the students based on the SIFT method. For each skill, the librarian demonstrated the fact-checking strategy and invited the students to try with an example. The lesson emphasized lateral reading, which involves leaving the source being evaluated to see what other reliable sources have to say about it (Wineburg & McGrew, 2019).
At the end of the session, Goodsett briefly introduced students to the idea of "bias filters," or systematic strategies for information producers to reduce bias in their final products. Bias filters can include IRB review, peer review, and diversity in voices/sources. While bias filters can help reduce bias in the final product, they cannot entirely eliminate the bias in information sources created by humans, who are influenced by social context, values, beliefs, and experiences. However, sources with more bias filters are more likely to be reliable as evidence.
In the second session, Goodsett reintroduced bias filters by discussing the difference between scholarly and popular sources and encouraging students to consider what filters are likely present for each. Then she introduced Wikipedia as a source that has very explicit bias filters but still has some flaws. The students examined the strengths and weaknesses of Wikipedia and then formed small groups to evaluate a Wikipedia entry using what they had learned.
Survey Design and Distribution
Goodsett administered a pre-survey during class time before beginning the library instruction, and Gagich administered a post-survey at the end of the semester. Both surveys were delivered electronically, and students completed them on their own devices. The pre- and post-surveys were almost identical; the pre-survey asked students, "Have you received instruction about how to evaluate sources?" while the post-survey asked, "Have you received instruction about how to evaluate sources before this semester?" to account for the library instruction they had received from Goodsett that semester. All other questions were the same.
The pre-survey (see Appendix A) included four open- and closed-ended questions asking students about their experiences with library instruction for evaluating sources and writing research papers. The second section of the survey asked students to answer five qualitative questions about source evaluation and to respond to seven Likert scale statements. The final section included two source evaluation scenarios. Table 1 shows how these survey sections relate to the research questions.
We each independently analyzed the post-session survey responses for every student and identified themes and patterns. We then used inter-rater reliability methods to compare our results.
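As a minimal sketch of this comparison step, the Python example below computes Cohen's kappa, one common inter-rater reliability statistic, for two coders who have each assigned a categorical code to the same set of responses. The choice of kappa, the function, and the sample codes are illustrative assumptions for readers who wish to replicate this kind of analysis; they are not a record of our actual coding scheme or results.

```python
# Hypothetical illustration: Cohen's kappa for two coders assigning one
# categorical code per survey response. The statistic, labels, and data
# here are assumptions, not the study's actual codes or results.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of responses given the same code.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance that two independent coders match,
    # given each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Made-up codes for ten open-ended post-survey answers.
a = ["accurate", "inaccurate", "accurate", "accurate", "inaccurate",
     "accurate", "accurate", "inaccurate", "accurate", "accurate"]
b = ["accurate", "inaccurate", "inaccurate", "accurate", "inaccurate",
     "accurate", "accurate", "accurate", "accurate", "accurate"]
print(f"Cohen's kappa = {cohens_kappa(a, b):.2f}")  # prints 0.52 here
```

Kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why it is often preferred to raw percent agreement when codes are unevenly distributed.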
Limitations
One limitation of this study was our inability to link the pre- and post-survey results to specific students. The pre- and post-surveys should have allowed students to provide their names or another unique identifier so that we could explore changes in their scores and perceptions over the course of the semester. Further, the design of the survey's website evaluation activity was problematic because it did not include a clickable link for the website; students would have needed to copy and paste the link to visit the site. This design may not have been an ideal way to measure students' source evaluation skills because of the additional effort required to access the source being evaluated. In addition, the library intervention combined several strategies, including the SIFT method, lateral reading, bias filters, and Wikipedia, so any results may be attributable to only a subset of these strategies. Lastly, the sample size was small and included only students in Gagich's ENG 102 classes. To improve the generalizability of results, any follow-up study should include a larger and more diverse study population.
Results
Of the 47 potential study participants, 42 attended the library instruction sessions and completed the study surveys. However, three participants indicated that they had taken ENG 102 previously. Their responses were excluded from analysis, leaving 39 responses.
Based on the post-survey results, 59% of students reported that they had instruction related to source evaluation in the past, 26% reported that they had not, and 15% thought that maybe they had had instruction. Among those who had, most had been instructed to look at author credentials, evidence (e.g., sources cited), presence of bias, or the publisher of the information when evaluating sources.
One question in the pre- and post-survey asked students to indicate how much they agreed with a set of statements about their trust in authoritative information sources, such as standards-based news and science, using a Likert scale from strongly disagree to strongly agree. Comparing the percentage of strongly agree and agree responses between the pre- and post-survey, students showed a reduced level of cynicism on every statement except one related to trust that stayed the same ("I believe most news that I watch is trustworthy," 9%). Specifically, students showed a reduction in agreement with statements aligned with high cynicism, such as "I do not trust most things I read online" (21% in the pre-survey and 7% in the post-survey) and "I believe groups of scientists fabricate or suppress evidence to deceive the public" (9% to 3%).
In addition to the Likert scale statements, the post-survey asked students to respond to seven open-ended questions; all 39 students answered them. One question in the pre- and post-surveys asked students to define both the term "bias" and the term "misinformation." Looking just at post-survey data, about 60% of the students did not accurately define bias based on the definition presented in the library instruction session, while about 8% failed to accurately define misinformation.
The researchers also examined responses to the post-survey question, "What criteria are you using now to determine the credibility of a source?" Students had a wider variety and larger quantity of credibility measures than in the question about prior source evaluation instruction, although their responses were somewhat similar. The criterion mentioned most often was the author's credentials, followed by the evidence presented, content currency, source type, and whether it had undergone peer review. Students also mentioned criteria such as the structure of the source, the methods used by the authors, tone ("is it boring?"), and the presence of visuals. Two students mentioned the use of lateral reading as well.
Because an important element of the intervention was instruction about source types, the post-survey asked students how they determine what type of source they are reading. Students primarily relied on the presence or absence of visuals, the structure of the source, author credentials, the presence of an abstract, and the kind and placement of evidence (e.g., citations throughout or at the end of the article). Students also mentioned many other concepts discussed in class, including tone and presence of jargon, length of the source, and whether it was peer-reviewed.
The post-survey ended with two scenarios that asked students to apply what they had learned. The first scenario asked students to help a fictional student evaluate the American College of Pediatricians website (a Southern Poverty Law Center-designated hate group) and determine if the website was reliable. Unfortunately, 62% of the students failed to correctly identify the website as an unreliable source. Of the students who did advise the fictional student not to rely on the website, many gave superficial, irrelevant reasons for not using it, such as "it isnt [sic] very informative," "it does not give enough information about the topic, have [sic] no abstract," and "I would recommend her not to use this source because it seems like a popular source."
The second scenario described a fictional student who is cynical and distrustful of all news sources. The scenario directed students to respond to the student, describe how news sources' quality varies, and outline how they would find a reliable source. Fortunately, 83% of student respondents to the post-survey did a good or acceptable job of refuting the claim that "all news is garbage" and giving the student good advice for finding reliable news sources.
Student post-survey responses generally demonstrated an impressively nuanced stance regarding bias in sources. One student pointed out, "Every paper is slightly bias [sic] because of human nature but at least we can get a slightly bias [sic] paper but with the information to support the idea." Another student explained, "Some news outlets broadcast accurate information that's supported by evidence. There are news sources that produce biased opinions and share misinformation as a way to either entertain or manipulate, but generally, news sources are meant to share factual data that can [be] easily interpreted (weather, breaking news, election data, etc.)." A third student did a great job demonstrating an understanding of how a source's creation process influences its quality: "I agree, but I believe there are ways to find unbiased news. The quality of the source came from the author when it was made and ultimately how it is set up." This comment demonstrates the student's understanding that evaluating a source often requires exploring things that happened before it was published, such as how the information creation process is "set up."
Discussion
One of the key findings of this exploratory study is that teacher-librarian collaborators need more coordination time together. The authors decided to work on this project together because of their positive past experiences working together on instruction and research. The authors' productive professional friendship also facilitated the process of co-designing the library intervention and student assessments, creating a positive teaching experience in the classroom for both the instructors and the students.
Although the authors have worked together over the years, there were still disconnects in how each described certain terms, such as "bias." While Goodsett visited the ENG 102 classroom more often than the typical single librarian visit, Gagich, as the instructor, was the more influential voice when students thought about terms such as "bias" and "misinformation." This misalignment in terminology may explain why about 60% of the students did not accurately define bias. Therefore, we suggest that, while potentially time-consuming, teacher-librarian collaborators set up sessions prior to the beginning of a project (and ideally before the beginning of the semester) to discuss and decide upon terms and definitions.
Research Question 1
This study attempted to answer two research questions. The first was, "Does a library information literacy instructional intervention centered on lateral reading help students better evaluate sources?"
Based on student responses, there was a possible correlation between library instruction and proficiency in source evaluation. Students described using more source evaluation criteria than they had learned about in previous instruction. They also identified many qualities that differentiate scholarly and popular sources, such as source structure, tone, presence and location of citations, and presence of visuals. Students' post-survey definitions of the concept of "misinformation" generally demonstrated understanding. However, few seemed to rely on the lateral reading skills demonstrated in the library instruction session when asked to evaluate a source, resulting in poor source evaluation conclusions.
This result could stem partly from the framing of the lateral reading task in the post-survey. Students received the information about the source to evaluate via a form and would have needed to leave the form to conduct lateral reading, which they might have felt hesitant to do. Additionally, as noted earlier, Gagich did not explicitly use the term "lateral reading" when instructing students on evaluating information and sources, highlighting the need for clarification of terms across disciplines in interdisciplinary partnerships such as this one. Despite students' poor performance on the lateral reading scenario, other evidence, including student feedback at the end of the course, suggested that the class sessions were both interesting and effective.
Research Question 2
The second research question this study attempted to answer was, "Does a library information literacy instructional intervention exploring 'bias filters' using Wikipedia as an example decrease student cynicism about authoritative information sources?"
While results were mixed, the intervention seemed to be more successful in addressing student cynicism. Although a flaw in our data collection means the quantitative results report only averages of student responses from the pre- and post-survey, students' responses to the qualitative questions in the Evaluating Sources section support our argument that an information literacy instructional intervention helped students in these two College Writing II classes better evaluate sources. Students were markedly less cynical about the authority of scientists and news sources after the library sessions, and many rejected the fictional student's strong cynicism in the scenario. Their responses showed a nuanced understanding of the presence of bias in news sources, and, in general, they did not dismiss all news sources as untrustworthy. Student distrust and conspiracy theory thinking are dangers that all source evaluation instruction must guard against. Given that, we were buoyed that teaching about bias filters may have led to students' heightened sense of the reliability of more authoritative sources. At the same time, study participants also struggled to identify authoritative sources, complicating this result. Future library instruction interventions could pair instruction that improves students' attitudes about authoritative sources, such as learning about bias filters, with clear, repeated exercises, such as lateral reading, that help them identify authoritative sources more effectively. While there has been a fair amount of research about the successful use of lateral reading in the classroom, more research into the benefits of using bias filters as a concept to teach source evaluation is needed.
Conclusions and Future Research
Overall, a library information literacy intervention using the bias filters concept and teaching lateral reading techniques may have contributed to improving students' source evaluation skills. In particular, the use of a bias filters framework may have minimized some students' cynicism about authoritative sources such as scientific research and standards-based news organizations. We recommend that librarians and instructors who teach source evaluation consider pairing lateral reading exercises with a broader discussion about how information sources are made. That way, students gain practical source evaluation skills and knowledge that might impact their disposition to use these skills in relevant contexts. As explored in the literature review, the threat of student cynicism is real and should not be discounted. See Appendix B for openly licensed slides used in this study that others can use and modify for their own teaching.
The collaborative planning and execution of these information literacy sessions proved especially successful, and we recommend that future librarian-instructor collaborations explore the complex topic of source evaluation. Such collaborations allow the librarian to spend more time working with students and allow the librarian and instructor to apply their combined expertise to the challenge of teaching source evaluation. When both parties discuss the lesson's objectives and explore in advance what teaching strategies will match the tenor of the overall class, the library instruction classroom experience is smoother and richer for all involved. The positive collaborative experience of the intervention described here has led the researchers to plan additional pedagogy-based research projects together.
Future research into encouraging students to make lateral reading a habit may be valuable so that, beyond understanding the concept, they employ it as an automatic first step. In addition, further exploration of student attitudes toward traditionally authoritative sources of information could help librarians and instructors better understand how to encourage more nuanced source evaluation approaches. In particular, the concept of bias filters is worth researching further. We recommend that future researchers explore each of the library instruction strategies described here to better understand their impact on student attitudes and performance. The prevalence of misinformation and the lack of trust in authoritative information is unlikely to improve in the near future. Teaching and promoting tools and strategies for helping students spend their trust wisely are essential to the continued functioning of a democratic society.
References
Albert, A. B., Emery, J. L., & Hyde, R. C. (2020). The proof is in the process: Fostering student trust in government information by examining its creation. Reference Services Review, 48(1), 33-47. https://doi.org/10.1108/RSR-09-2019-0066
Angell, K., & Tewell, E. (2017). Teaching and un-teaching source evaluation: Questioning authority in information literacy instruction. Communications in Information Literacy, 11(1), 95-121. https://doi.org/10.15760/comminfolit.2017.11.1.37
Becker, B. W. (2016). The librarian's information war. Behavioral & Social Sciences Librarian, 35(4), 188-191. https://doi.org/10.1080/01639269.2016.1284525
Beene, S., & Greer, K. (2021). A call to action for librarians: Countering conspiracy theories in the age of QAnon. The Journal of Academic Librarianship, 47(1), Article 102292. https://doi.org/10.1016/j.acalib.2020.102292
Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122-142. https://doi.org/10.1006/game.1995.1027
Blakeslee, S. (2004). The CRAAP test. LOEX Quarterly, 31(3), 6-7. https://commons.emich.edu/loexquarterly/vol31/iss3/4
Bluemle, S. R. (2018). Post-facts: Information literacy and authority after the 2016 election. portal: Libraries and the Academy, 18(2), 265-282. https://doi.org/10.1353/pla.2018.0015
Bobkowski, P. S., & Younger, K. (2020). News credibility: Adapting and testing a source evaluation assessment in journalism. College & Research Libraries, 81(5), 822-843. https://doi.org/10.5860/crl.81.5.822
Breakstone, J., McGrew, S., Smith, M., Ortega, T., & Wineburg, S. (2018, March). Why we need a new approach to teaching digital literacy. Phi Delta Kappan, 99(6), 27-32. https://doi.org/10.1177/0031721718762419
Brodsky, J. E., Brooks, P. J., Scimeca, D., Todorova, R., Galati, P., Batson, M., Grosso, R., Matthews, M., Miller, V., & Caulfield, M. (2021). Improving college students' fact-checking strategies through lateral reading instruction in a general education civics course. Cognitive Research: Principles and Implications, 6, Article 23. https://doi.org/10.1186/s41235-021-00291-4
Brotherton, R., & French, C. C. (2014). Belief in conspiracy theories and susceptibility to the conjunction fallacy. Applied Cognitive Psychology, 28(2), 238-248. https://doi.org/10.1002/acp.2995
Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in Psychology, 4, Article 279. https://doi.org/10.3389/fpsyg.2013.00279
Buhler, A. G., Faniel, I. M., Brannon, B., Cyr, C., Cataldo, T. T., Connaway, L. S., Valenza, J. K., Elrod, R., Graff, R. A., Putnam, S. R., Hood, E. M., & Langer, K. (2019). Container collapse and the information remix: Students' evaluations of scientific research recast in scholarly vs. popular sources. In D. M. Mueller (Ed.), Recasting the narrative: The proceedings of the ACRL 2019 conference, April 10-13, 2019, Cleveland, OH (pp. 654-667). Association of College and Research Libraries. https://alair.ala.org/server/api/core/bitstreams/ea90b5a3-82fe-429e-9df1-47623045603b/content
Bull, A. C. (2021). Dismantling the evaluation framework. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2021/dismantling-evaluation/
Burton, V. T., & Chadwick, S. A. (2000). Investigating the practices of student researchers: Patterns of use and criteria for use of internet and library sources. Computers and Composition, 17(3), 309-328. https://doi.org/10.1016/S8755-4615(00)00037-2
Calhoun, C. (2014). Using Wikipedia in information literacy instruction: Tips for developing research skills. College & Research Libraries News, 75(1), 32-33. https://doi.org/10.5860/crln.75.1.9056
Caulfield, M. (2018, September 15). A short history of CRAAP. Hapgood. https://hapgood.us/2018/09/14/a-short-history-of-craap/
Caulfield, M. (2019, June 19). SIFT (The Four Moves). Hapgood. https://hapgood.us/2018/06/19/sift-the-four-moves/
Chyne, R. C., Khongtim, J., & Wann, T. (2023). Evaluation of social media information among college students: An information literacy approach using CCOW. The Journal of Academic Librarianship, 49(5), Article 102771. https://doi.org/10.1016/j.acalib.2023.102771
Connaway, L. S., Dickey, T. J., & Radford, M. L. (2011). "If it is too inconvenient I'm not going after it:" Convenience as a critical factor in information-seeking behaviors. Library & Information Science Research, 33(3), 179-190. https://doi.org/10.1016/j.lisr.2010.12.002
Curtis, N. R. (2018). One journal issue, two activities, three views: Information creation as a process [Conference presentation]. Special Libraries Association Annual Conference, Baltimore, MD, USA. https://digitalcommons.library.umaine.edu/lib_staffpub/28/
Daniels, E. (2010). Using a targeted rubric to deepen direct assessment of college students' abilities to evaluate the credibility of sources. College & Undergraduate Libraries, 17(1), 31-43. https://doi.org/10.1080/10691310903584767
Dawe, L., & Robinson, A. (2017). Wikipedia editing and information literacy: A case study. Information and Learning Science, 118(1/2), 5-16. https://doi.org/10.1108/ILS-09-2016-0067
De Paor, S., & Heravi, B. (2020). Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. The Journal of Academic Librarianship, 46(5), Article 102218. https://doi.org/10.1016/j.acalib.2020.102218
Fielding, J. A. (2019). Rethinking CRAAP: Getting students thinking like fact-checkers in evaluating web sources. College & Research Libraries News, 80(11), 620-622. https://doi.org/10.5860/crln.80.11.620
First-Year Writing Program Department of English Cleveland State University Faculty Handbook 2016-2017. (2016). Cleveland State University.
Fister, B. (2018, February 27). From schooled skepticism to informed trust. Inside Higher Ed. https://www.insidehighered.com/blogs/library-babel-fish/schooled-skepticism-informed-trust
Gibson, C., & Jacobson, T. E. (2018). Habits of mind in an uncertain information world. Reference & User Services Quarterly, 57(3), 183-192. https://doi.org/10.5860/rusq.57.3.6603
Harford, T. (2021, March 16). What conspiracy theorists don't believe. The Atlantic. https://www.theatlantic.com/ideas/archive/2021/03/the-conspiracy-theorists-problem-isnt-what-they-believe/618285/
Harmer, A. B., Havas, B., Lee, P., & Minchew, D. (2017). Blind taste test: Helping first-year STEM students understand "Information Creation as a Process." In R. Pun & M. Houlihan (Eds.), The first-year experience cookbook (pp. 77-79). Association of College and Research Libraries. http://hdl.handle.net/10675.3/610745
Harris, R. (2020, October 19). Evaluating internet research sources. VirtualSalt. https://www.virtualsalt.com/evaluating-internet-research-sources/
Institute of Education Sciences. (2023). IPEDS Data Feedback Report 2023. https://nces.ed.gov/ipeds/dfr/2023/ReportHTML.aspx?unitId=202134
Jastram, I., Peterson, C., & Scharf, E. (2021). Source evaluation: Supporting undergraduate student research development. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2021/source-evaluation/
Kahili-Heede, M. K., Patil, U., Hillgren, K. J., Hishinuma, E., & Kasuya, R. (2022). Library instruction and Wikipedia: Investigating students' perceived information literacy, lifelong learning, and social responsibility through Wikipedia editing. Journal of the Medical Library Association: JMLA, 110(2), 174-184. https://doi.org/10.5195/jmla.2022.1291
Kim, K. S., Yoo-Lee, E., & Joanna Sin, S. C. (2011). Social media as information source: Undergraduates' use and evaluation behavior. Proceedings of the American Society for Information Science and Technology, 48(1), 1-3. https://doi.org/10.1002/meet.2011.14504801283
Lowe, M. S., Macy, K. V., Murphy, E., & Kani, J. (2021). Questioning CRAAP: A comparison of source evaluation methods with first-year undergraduate students. Journal of the Scholarship of Teaching and Learning, 21(3), 33-48. https://doi.org/10.14434/josotl.v21i3.30744
Mandalios, J. (2013). RADAR: An approach for helping students evaluate internet sources. Journal of Information Science, 39(4), 470-478. https://doi.org/10.1177/0165551513478889
Meola, M. (2004). Chucking the checklist: A contextual approach to teaching undergraduates web-site evaluation. portal: Libraries and the Academy, 4(3), 331-344. https://doi.org/10.1353/pla.2004.0055
Meriam Library, California State University, Chico. (2010). Evaluating information: Applying the CRAAP test. https://library.csuchico.edu/help/source-or-information-good
Ostenson, J. (2014). Reconsidering the checklist in teaching internet source evaluation. portal: Libraries and the Academy, 14(1), 33-50. https://doi.org/10.1353/pla.2013.0045
Radom, R., & Gammons, R. W. (2014). Teaching information evaluation with the five Ws: An elementary method, an instructional scaffold, and the effect on student recall and application. Reference and User Services Quarterly, 53(4), 334-347. https://doi.org/10.5860/rusq.53n4.334
Rose-Wiles, L. M. (2024). The framing of authority in the ACRL framework on information literacy: Multidisciplinary perspectives on truth, authority, expertise, and belief. Reference Services Review, 52(2), 202-217. https://doi.org/10.1108/RSR-02-2024-0003
Rowley, J., Johnson, F., & Sbaffi, L. (2015). Students' trust judgements in online health information seeking. Health Informatics Journal, 21(4), 316-327. https://doi.org/10.1177/1460458214546772
Rynes, S. L., Colbert, A. E., & O'Boyle, E. H. (2018). When the 'best available evidence' doesn't win: How doubts about science and scientists threaten the future of evidence-based management. Journal of Management, 44(8), 2995-3010. https://doi.org/10.1177/0149206318796934
Saunders, L., & Budd, J. (2020). Examining authority and reclaiming expertise. The Journal of Academic Librarianship, 46(1), Article 102077. https://doi.org/10.1016/j.acalib.2019.102077
Scull, A. (2019). Information creation as a process: With an emphasis on creation. College & Research Libraries News, 80(2), 78-81. https://doi.org/10.5860/crln.80.2.78
Seeber, K. (2017, March 18). Wiretaps and CRAAP. Kevin Seeber, MLS. https://kevinseeber.com/blog/wiretaps-and-craap/
Silva, E., Green, J., & Walker, C. (2018). Source evaluation behaviours of first-year university students. Journal of Information Literacy, 12(2), 24-43. https://doi.org/10.11645/12.2.2512
Smagorinsky, P. (2008). The method section as the conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389-411. https://doi.org/10.1177/0741088308317815
Toff, B., Badrinathan, S., Mont'Alverne, C., Arguedas, A. R., Fletcher, R., & Nielsen, R. K. (2021). Overcoming indifference: What attitudes towards news tell us about building trust. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-0h47-ja26
Valenza, J. (2020, November 1). Enough with the CRAAP: We're just not doing it right. Neverending Search. https://blogs.slj.com/neverendingsearch/2020/11/01/enough-with-the-craap-were-just-not-doing-it-right
Vamanu, I., & Zak, E. (2022). Information source and content: Articulating two key concepts for information evaluation. Information and Learning Science, 123(1/2), 65-79. https://doi.org/10.1108/ILS-09-2021-0084
Warwick, C., Rimmer, J., Blandford, A., Gow, J., & Buchanan, G. (2009). Cognitive economy and satisficing in information seeking: A longitudinal study of undergraduate information behavior. Journal of the American Society for Information Science and Technology, 60(12), 2402-2415. https://doi.org/10.1002/asi.21179
Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record: The Voice of Scholarship in Education, 121(11), 1-40. https://doi.org/10.1177/016146811912101102