Abstract
The authors conducted a performance-based assessment of information literacy to determine if students in a first-year experience course were finding relevant sources, using evidence from sources effectively, and attributing sources correctly. A modified AAC&U VALUE rubric was applied to 154 student research papers collected in fall 2015 and fall 2016. Study results indicate that students in the sample were able to find relevant and appropriate sources for their research papers; however, they were not using evidence to effectively support an argument or attributing sources correctly. The authors discuss changes to the library instruction curriculum informed by the assessment results.
Keywords: information literacy, first-year students, rubric assessment, library instruction, one-shot
The Information Literacy & Undergraduate Support Department at the University of Northern Colorado's James A. Michener Library helps develop information literate students through a combination of course-integrated sessions and credit-bearing courses. University 101 (UNIV 101), a first-year experience course that aims to assist students in the transition from high school to college, is an elective with a broad focus on reading, writing, critical thinking, and communication skills. Course objectives include using effective research skills to retrieve and evaluate information from a variety of sources. The library session is a required component of the UNIV 101 curriculum and supports a research paper assignment requiring students to cite a minimum of six peer-reviewed sources.
In 2014, librarians at the University of Northern Colorado developed a common lesson plan for the UNIV 101 sessions that focused on understanding peer review, developing keywords, and using a bibliography to find additional sources. Assessing the session learning outcomes involved two multiple-choice questions that asked students about the most important source criteria for their assignment and the discovery tool features that assist with finding appropriate sources. Data from this assessment showed students understood what types of sources they needed to find and the most useful database tools to use. However, anecdotal evidence from course instructors indicated that students were not using appropriate peer-reviewed sources in their research papers. Based on this feedback, the librarians revised the session lesson plan for 2015 to focus on finding and identifying peer-reviewed articles. Formative assessment during the session and subsequent review of students' sources collected through an online worksheet suggested that students were better skilled at finding peer-reviewed articles on their topics than course instructors perceived. In order to determine if and how students were incorporating these sources into their research papers, the authors began the present performance-based assessment of UNIV 101 students' information literacy skills.
The purpose of this study was to assess information literacy skills among students enrolled in UNIV 101. To do so, the authors applied a modified Association of American Colleges and Universities (AAC&U) VALUE rubric to student papers collected in fall 2015 and fall 2016. Specifically, this study asked the following questions:
* Are students finding relevant and appropriate sources for their final research papers?
* Are students using evidence from sources effectively?
* Are students attributing sources correctly?
By conducting this assessment, the researchers hoped to collect baseline data on first-year students' information literacy abilities and use direct assessment of student learning to improve course-integrated library sessions.
Literature Review
Although assessment has become common practice in information literacy instruction, library literature often focuses on student perceptions of instruction or on self-assessment of skills. In a 2012 review of the literature, Schilling and Applegate found self-reported attitudinal surveys to be the most common method of assessing information literacy. While acknowledging that attitudes are important to the learning process, they argue for increased use of methods that provide direct evidence of student skills. Performance-based assessment works well for collecting this evidence, allowing students to demonstrate understanding and to apply knowledge and skills in a variety of complex situations (Marzano, Pickering, & McTighe, 1993). Benefits of performance-based assessments include the ability to assess higher-order thinking skills, such as synthesis, and the ability to use results to improve teaching and learning (Oakleaf, 2008). Rubrics are increasingly being used for performance-based assessment of students' information literacy skills. The predetermined standards of rubrics contribute to more consistent scoring of student work, and the level of detail in rubrics provides rich data that librarian instructors can use to identify gaps in student understanding and adjust instruction programs accordingly (Oakleaf, 2008).
Rubric-based Analysis of Sources
Librarians have often used rubrics to evaluate students' abilities to find, evaluate, and cite sources. Knight (2006) applied a rubric mapped to the ACRL Information Literacy Competency Standards to first-year students' annotated bibliographies to determine if students were meeting the course learning objectives related to finding reputable sources, evaluating credibility, citing sources correctly, and writing thorough annotations. Carbery and Leahy (2015) also assessed first-year students' annotated bibliographies using a locally developed rubric that included source variety, citation quality, and completeness of annotations. While applying a rubric to annotated bibliographies can measure students' abilities to find, evaluate, and cite sources, analyzing bibliographies alone cannot reveal students' ability to use those sources as evidence in support of an argument.
Rubric-based studies evaluating campus outcomes for information literacy also have focused primarily on finding, evaluating, and citing sources. Diller and Phelps (2008) looked at a small sample of ePortfolios created by entry-level and transfer students as part of campus assessment of general education. Their rubric measured a broader range of information literacy abilities including determining an information need, designing a search strategy, evaluating information, and using information effectively, ethically, and legally; however, use of evidence was not included among their campus information literacy and communication outcomes. Hoffmann and LaBonte (2012) similarly applied a rubric to research assignments created by first- and third-year students over a three-year period. In addition to annotated bibliographies, they assessed problem/solution papers, narratives, and group and individual research papers. However, their analysis was limited to search strategy, source variety, and evaluation in alignment with their campus general education outcomes for information literacy. While the decision to align information literacy assessment with campus outcomes is sound, there remains a lack of information about students' ability to use sources as evidence.
Some studies have sought to assess how students used source material in their papers by examining in-text citations and quotes in student work. Samson (2010) developed an assessment instrument that identified quantifiable measures for each of the ACRL standards to compare the information resource use of first-year students to capstone students. The instrument included source type and quantity; the number of short quotes, long quotes, and in-text citations; the presence of a thesis or original hypothesis; and accuracy of citation style. However, the instrument did not assess whether the evidence presented in the quotes and in-text citations supported the thesis or hypothesis. McClure, Cooke, and Carlin (2011) used citation analysis to examine source quantity, quality, and citation accuracy in first-year students' final essays. They then used frequency and length of in-text citations and quotes to assess how well students were using sources in their writing, but again they did not investigate how the sources were used in context. This method of counting in-text citations and quotes appears ill-suited for determining how well a student has incorporated information into their knowledge base. As Lundstrom, Diekema, Leary, Haderlie, and Holliday's (2015) work on information synthesis suggests, neither the number of citations in a paragraph nor the number of paragraphs with a citation is an effective measure of synthesis in student research papers.
Rubric-based Analysis of Evidence
A limited number of rubric-based studies of information literacy have recognized the importance of understanding how students use sources in their writing. Emmons and Martin (2002) examined over 200 bibliographies to compare the quantity, variety, currency, and accuracy of sources selected by students in a first-year writing course, before and after implementing an inquiry-based library instruction program. However, as they argue, "student research is not just searching for sources" (p. 550), so they also read and applied a rubric to 60 research essays to evaluate how students engaged sources in their writing. Scharf, Elliot, Huey, Briller, and Joshi (2007) applied a rubric to 100 research papers to assess graduating senior students' abilities to use, cite, and integrate appropriate sources. Rosenblatt (2010) also examined a small number of upper-division students' research papers, combining citation analysis with a rubric to determine if students were able to find and synthesize relevant sources.
Several recent studies similarly have used rubrics to assess first-year students' abilities to find, cite, and use appropriate sources. Luetkenhaus, Borrelli, and Johnson (2015) analyzed 275 final research papers from students in a required first-year course using a locally developed rubric that mapped to institutional learning outcomes for information literacy and critical thinking. The Claremont Colleges Library employed a rubric-based methodology to assess first-year students' information literacy skills and how varying levels of librarian involvement with the course affected student learning. Their rubric, modified from a rubric developed by Carleton College, assessed attribution, evaluation of sources, and communication of evidence (Booth, Lowe, Tagge, & Stone, 2015; Lowe, Booth, Stone, & Tagge, 2015). More recently, Davidson Squibb and Mikkelsen (2016) evaluated the impact of a new introductory writing curriculum on students' information literacy skills by applying a course-specific rubric, which included source suitability, citation style, argument, and evidence, to final papers.
AAC&U VALUE Rubrics
While most rubric-based assessments of student research projects have developed the rubric locally, a growing number have adopted AAC&U VALUE rubrics. The Rubric Assessment of Information Literacy Skills (RAILS) project used the AAC&U VALUE rubrics as a starting point for their research, which examined 1,000 student artifacts from nine institutions (Belanger, Zou, Mills, Holmes, & Oakleaf, 2015). Brown and Souza-Mort (2015) used the Information Literacy VALUE Rubric to assess artifacts produced by community college students. Holliday et al. (2015) used a modified version of this rubric to compare students' research skills at progressing levels in the curriculum. Turbow and Evener (2016) also explored using this rubric to assess information literacy among graduate students in the health sciences. Although they agreed that the Information Literacy VALUE Rubric was appropriate for evaluating graduate students, some raters found it difficult, if not impossible, to score the "Access the needed evidence" category because students were not required to describe their search strategies in the assignment (Turbow & Evener, 2016, p. 211). Lundstrom et al. (2015) took a different approach to adopting the rubrics by using criteria from the Inquiry and Analysis VALUE Rubric rather than the Information Literacy VALUE Rubric in their study of a librarian-led information synthesis lesson.
While there is ample evidence to support the use of rubrics for performance-based assessment of information literacy, many of the examples in the literature focus on students' abilities to find, evaluate, and cite sources. There remains limited research discussing the assessment of students' ability to use information sources effectively as evidence to support an argument. The present study adds to this body of literature by discussing performance-based assessment of students' abilities to both find sources and use evidence.
Methods
Rubric
After reviewing the AAC&U VALUE rubrics, the authors designed the project rubric based on three of the AAC&U VALUE rubrics: Communication, Critical Thinking, and Information Literacy. The University of Northern Colorado recently adopted portions of these rubrics for assessment of general education, which influenced the decision to use them in the present study. Furthermore, the authors wanted to explore students' use of evidence and skill in using sources rather than simply looking at source choice. Of particular interest was how students used sources to communicate information and how students used that information to support an argument. The project rubric combined the Sources and Evidence dimension from the Communication Rubric, the Evidence dimension from the Critical Thinking Rubric, and the Access and Use Information Ethically and Legally dimension from the Information Literacy Rubric (see Appendix A for the final rubric).
The AAC&U VALUE rubrics are developmental rubrics designed to measure student attainment of learning outcomes over the course of their undergraduate careers. The Benchmark (1) and Capstone (4) levels of the rubrics describe the levels of learning that students are expected to demonstrate at the beginning and completion of their degree respectively. Thus, the authors expected to see first-year students who were enrolled in a course targeting research and writing skills and had attended a library session to perform at the Lower Milestone (2) level of the rubric.
Sample
In total, the authors scored 154 UNIV 101 student research papers collected over two semesters using the modified VALUE rubric. In fall 2015, 124 papers were scored (out of 458 enrolled students). The UNIV 101 program coordinator provided a random sample of 269 research papers written by students enrolled in all 18 sections of the course. The authors eliminated 143 draft papers from the sample based on dates on the title pages. Two duplicate papers were also eliminated. The authors scored a smaller sample of 30 UNIV 101 research papers (out of 502 enrolled students) from fall 2016 due to time limitations. The program coordinator provided two papers each from 15 of the 22 UNIV 101 sections taught that semester. The authors did not need to use date criteria to remove drafts from this sample because final papers were requested specifically. Identifying information was removed from the papers and they were assigned numbers before scoring began.
Rating Procedure
The authors completed a norming process, which is used "to achieve consistent and reliable use of a rubric among numerous raters" (Holmes & Oakleaf, 2013, p. 599). For the first round of norming, all three researchers read and independently applied the rubric to 20 papers from the fall 2015 sample. Percent agreement was calculated between coders (Table 1); the coders then met to revise the rubric and to reach 100% consensus on the scores for five of the papers (MacQueen, McLellan-Lemal, Bartholow, & Milstein, 2008). For the second round of norming, each researcher independently rescored the remaining 15 papers from the first round of norming with the revised rubric. The researchers reconvened to discuss the papers, agree on the scoring, and identify exemplars for the rubric levels to assist with consistent scoring over time.
Percent agreement from the second round of norming is included in Table 1. These numbers generally indicate better agreement among raters, but the researchers did not reach the 85% or higher agreement recommended before raters begin scoring independently (MacQueen et al., 2008). Consequently, two raters read and scored each of the remaining papers. Each pair of raters met to discuss disagreements in scoring in order to reach 100% consensus on the rubric scores. Three rater pairs shared the scoring equally. For the fall 2015 analysis, each researcher independently scored 70 papers.
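The article does not spell out how percent agreement was calculated beyond citing MacQueen et al. (2008). The sketch below illustrates one common interpretation (simple pairwise agreement on a single rubric category between two raters); the function name and the score values are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of pairwise percent agreement between two raters.
# Assumes each rater assigns one whole-number rubric score per paper
# for a given category (e.g., 0 = below Benchmark ... 4 = Capstone).
# All names and scores here are hypothetical, not data from the study.

def percent_agreement(rater_a, rater_b):
    """Share of papers on which the two raters assigned the same score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of papers")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical scores for the Sources category across five norming papers.
rater_1 = [2, 1, 3, 2, 2]
rater_2 = [2, 2, 3, 2, 1]

print(f"Sources agreement: {percent_agreement(rater_1, rater_2):.0%}")  # 60%
```

Under this interpretation, an agreement value of 0.85 or higher on each category would meet the threshold the raters describe; falling short of it is what led them to double-score the remaining papers.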
The researchers used the same rating procedure for the fall 2016 analysis except there was only one round of norming, and there were no changes to the rubric. For the fall 2016 analysis, each rater independently scored 18 or 19 papers.
Findings
Fall 2015 Findings
Across the 124-paper sample, student performance in each rubric category averaged between the Benchmark (1) and Lower Milestone (2) levels. Mean scores for fall 2015 showed that students met the expected Lower Milestone only in the Sources category of the rubric, suggesting that students were able to find relevant and appropriate sources for their research papers. However, students were not using the sources effectively as evidence, nor were they accurately attributing their sources. Figure 1 shows the distribution of rubric scores for fall 2015.
The Sources category, which indicates how well a student is able to find relevant and credible sources, had the highest mean score, 2.01 out of 4. Over half of the papers (60%) scored at the Lower Milestone level and an additional 22% scored at the Upper Milestone level. Fifteen percent scored at the Benchmark level, while 3% did not meet Benchmark level performance. Students who scored at the Lower Milestone in the Sources category typically submitted quality reference lists that included relevant, credible (often primarily peer-reviewed) sources. Figure 2 shows an example of a reference list that scored at the Lower Milestone level.
Evidence, which gauges how well a student can use information to support a conclusion, was the lowest scoring category with a mean score of 1.47. In this category, a majority of papers (59%) scored at the Benchmark level, a third (33%) were at the Lower Milestone, 7% at the Upper Milestone, and only 1% did not meet Benchmark level performance. Figure 3 shows an excerpt from a paper that scored at the Benchmark level (the corresponding reference list is shown in Figure 2). Students who scored at this level may have found appropriate sources, but reading their papers to assess how they used these sources revealed problems with synthesis and analysis. Students at the Benchmark level attempted to use sources as evidenced by the use of in-text citations (Figure 3). However, students at this level often took information from sources without interpretation or evaluation as seen in the first sentence of the excerpt:
The astonishing ability of GMOs to shape to their environment offers promising results in meeting some of the greatest goals set forth in this century (Bawa & Anilakumar, 2012).
Furthermore, students at the Benchmark level did not consistently support ideas with evidence. In this excerpt, the student claims that "advances in GMOs" can eliminate genes that cause allergic reactions, specifically gluten, without providing a citation.
The Access and Use category measures a student's ability to use information ethically through appropriate citation. The mean score in this category was 1.50, very close to the mean for Evidence, but the scores were more distributed in the Access and Use category. More than a third of the papers (38%) scored at the Benchmark level, approximately a third (31%) scored at the Lower Milestone, and the remaining papers were split between the Upper Milestone (14%) and below Benchmark (15%), with a small percentage of papers (2%) scoring at the Capstone level. Using information in ways not true to context was a common problem seen in the Access and Use category. Figure 3 shows an example of this problem.
Here the student refers to a study on "fatal food related allergic reactions" and cites Gaivoronskaia and Hvinden. However, the student's reference list (Figure 2) shows that the Gaivoronskaia and Hvinden study is actually about perceptions of genetically modified food among people with food allergies. Because the student used information found in the introduction or literature review section and did not cite the article as a secondary source, this was classified as a citation error.
Curricular Changes
A number of changes were made to the library session for fall 2016 based on what was learned from the fall 2015 analysis of UNIV 101 student papers. Many students appeared to be using information found in the introduction or literature review of studies rather than the results of the research, making it difficult for them to form an argument based on evidence. The authors speculated that this problem represented an unmet student need and that students would benefit from instruction on how to read and understand studies in order to leverage them as evidence.
First, the library session was lengthened from 50 to 75 minutes. This change preserved the activity on identifying peer-reviewed articles, which assessment results suggested was helping students identify appropriate sources, while also providing time for a new activity about reading research papers. To address students' difficulty using evidence to support their arguments, the library instructor explained that the results are often the most important part of a study and suggested that it is therefore helpful to read the discussion or conclusion section of an article first. To reinforce this point, students worked in pairs and then as a class to determine whether a study's conclusions were relevant to a sample research question. Students were also encouraged to use reference lists, literature reviews, and news reports about studies to track down original sources rather than relying on secondary sources (see lesson plan in Appendix B). Finally, optional workshops were offered for any UNIV 101 students who wanted help with properly citing and formatting sources. All fall 2016 UNIV 101 library sessions used this revised lesson plan.
Fall 2016 Findings
There was no observable improvement in mean scores in fall 2016. Across the 30-paper sample, the Sources category again had the highest mean score, 1.87 out of 4. Over half of the papers (60%) scored at the Lower Milestone level, 17% scored at the Upper Milestone level, 17% at the Benchmark level, and 7% failed to meet Benchmark level performance. The mean score for Access and Use was 1.37. Nearly half (43%) of the papers scored at the Benchmark level, over a third (37%) at the Lower Milestone level, 13% below Benchmark, and 7% at the Upper Milestone level. Evidence again had the lowest mean score, though at 1.33 it was very close to the Access and Use mean. A majority of papers (67%) scored at the Benchmark level, 23% at the Lower Milestone and 7% at the Upper Milestone levels, while 3% scored below Benchmark level. Figure 4 shows the distribution of rubric scores.
Discussion
The findings suggest that first-year students enrolled in UNIV 101 were able to find relevant and appropriate sources for their final research papers. However, students struggled to use evidence from those sources effectively, often failing to offer an interpretation or evaluation of them. Students also struggled to attribute sources correctly, frequently using information found in the introduction or literature review of research papers instead of the reported research results. As an example, students commonly referred to studies that, when examined, were focused on topics irrelevant to their own. While this problem was considered a citation error, the authors believe it represents a fundamental misunderstanding of what matters in peer-reviewed research articles and how to use them as evidence.
The findings of the present study are consistent with much of the previous research on students' abilities to find and use sources effectively. Students can find relevant and appropriate sources for their research needs (Carbery & Leahy, 2015; Samson, 2010), but often struggle to analyze or synthesize that information (Davidson Squibb & Mikkelsen, 2016; Emmons & Martin, 2002; Holliday et al., 2015; Luetkenhaus et al., 2015; Rosenblatt, 2010; Scharf et al., 2007). Where the present findings differ is in students' ability to attribute sources correctly. Previous research suggests that students can adequately cite sources (Carbery & Leahy, 2015; Knight, 2006; Luetkenhaus et al., 2015), but UNIV 101 students performed poorly in this rubric category. This difference could be because the researchers looked for a broad range of citation behaviors necessary for the ethical use of information, including appropriate choice of in-text citations and references; choice of paraphrasing, summary, or quoting; attention to the original context; and recognition of common knowledge. One institution in the RAILS project applied the Access & Use Information dimension of the Information Literacy VALUE Rubric in a similar way, scoring adherence to citation style conventions, recognition of common knowledge, and appropriate use of paraphrasing, summary, or quoting. Those results, like those of the present study, suggest that students are not consistently demonstrating these citation behaviors (RAILS, 2014).
Limitations of this study, including the smaller sample size for fall 2016, make it difficult to compare the mean scores from fall 2015 to fall 2016. Another limitation was the uncertainty about whether analyzed papers were drafts or final papers. Papers from the fall 2015 sample dated before the assignment deadline for the draft paper were eliminated, but this procedure was not followed for the fall 2016 sample because final papers had been requested from the UNIV 101 program coordinator. However, dates on some of the papers in the fall 2016 sample suggested drafts may have been included. The researchers also may have erroneously eliminated some papers from the fall 2015 sample if students had not changed the dates on their title pages before submitting the final paper.
Though the results did not show improvement in students' papers between fall 2015 and fall 2016, the authors believe the instructional shift was merited based on two semesters of data suggesting first-year students enrolled in UNIV 101 can find sources but struggle to use them as evidence. The authors are by no means the first to examine ways of teaching students how to better use the information they find (e.g., Bronshteyn & Baladad, 2006;
Lundstrom et al., 2015; MacMillan & Rosenblatt, 2015; Woodward & Ganski, 2013). Some will argue that reading and synthesizing information is the purview of disciplinary faculty, not librarians. However, the authors agree with MacMillan and Rosenblatt's assertion:
in demonstrating our value to our institutions we have to show that our concern for information literacy does not stop when the student finds the 10 articles mandated by the instructor, but continues to the point where the student has used those resources effectively, a task that cannot be accomplished without reading. (p. 761)
Furthermore, this shift maps well to the spirit of the Framework for Information Literacy for Higher Education, which suggests more broadly that information literacy instruction needs to focus less on helping students find sources and more on preparing students to participate in scholarly conversations (Association of College and Research Libraries, 2015).
Future research on information literacy should focus on students' abilities to use sources as evidence rather than their abilities to find them. To understand students' information literacy abilities, librarians must look beyond the reference list and closely examine how students use sources in context. In the future, the researchers plan to analyze a larger sample of papers to assess whether the decrease in mean scores observed between fall 2015 and fall 2016 persists. The researchers also plan to undertake a longitudinal assessment project and collect research papers written by students during their sophomore, junior, and senior years to analyze how their information literacy skills change over the course of their academic careers.
References
Association of College and Research Libraries (ACRL). (2015). Framework for information literacy for higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework
Belanger, J., Zou, N., Mills, J. R., Holmes, C., & Oakleaf, M. (2015). Project RAILS: Lessons learned about rubric assessment of information literacy skills. portal: Libraries and the Academy, 15(4), 623-644. https://doi.org/10.1353/pla.2015.0050
Booth, C., Lowe, M. S., Tagge, N., & Stone, S. M. (2015). Degrees of impact: Analyzing the effects of progressive librarian course collaborations on student performance. College & Research Libraries, 76(5), 623-651. https://doi.org/10.5860/crl.76.5.623
Bronshteyn, K., & Baladad, R. (2006). Librarians as writing instructors: Using paraphrasing exercises to teach beginning information literacy students. Journal of Academic Librarianship, 32(5), 533-536. https://doi.org/10.1016/j.acalib.2006.05.010
Brown, E. Z., & Souza-Mort, S. (2015). LEAP rubrics and information literacy assessment: We think you need a chaser with that one-shot. In D. M. Mueller (Ed.), Creating sustainable community: The proceedings of the ACRL 2015 Conference, March 25-28, Portland, Oregon (pp. 403-408). Chicago: Association of College and Research Libraries. Retrieved from http://www.ala.org/acrl/files/conferences/confsandpreconfs/2015/Brown_SouzaMort.pdf
Carbery, A., & Leahy, S. (2015). Evidence-based instruction: Assessing student work using rubrics and citation analysis to inform instructional design. Journal of Information Literacy, 9(1), 74-90. https://doi.org/10.11645/9.1.1980
Davidson Squibb, S., & Mikkelsen, S. (2016). Assessing the value of course-embedded information literacy on student learning and achievement. College & Research Libraries, 77(2), 164-183. https://doi.org/10.5860/crl.77.2.164
Diller, K. R., & Phelps, S. F. (2008). Learning outcomes, portfolios, and rubrics, oh my! Authentic assessment of an information literacy program. portal: Libraries and the Academy, 8(1), 75-89. https://doi.org/10.1353/pla.2008.0000
Emmons, M., & Martin, W. (2002). Engaging conversation: Evaluating the contribution of library instruction to the quality of student research. College & Research Libraries, 63(6), 545-560. https://doi.org/10.5860/crl.63.6.545
Hoffmann, D. A., & LaBonte, K. (2012). Meeting information literacy outcomes: Partnering with faculty to create effective information literacy assessment. Journal of Information Literacy, 6(2), 70-85. https://doi.org/10.11645/6.2.1615
Holliday, W., Dance, B., Davis, E., Fagerheim, B., Hedrich, A., Lundstrom, K., & Martin, P. (2015). An information literacy snapshot: Authentic assessment across the curriculum. College & Research Libraries, 76(2), 170-187. https://doi.org/10.5860/crl.76.2.170
Holmes, C., & Oakleaf, M. (2013). The official (and unofficial) rules for norming rubrics successfully. Journal of Academic Librarianship, 39(6), 599-602. https://doi.org/10.1016/j.acalib.2013.09.001
Knight, L. A. (2006). Using rubrics to assess information literacy. Reference Services Review, 34(1), 43-55. https://doi.org/10.1108/00907320610640752
Lowe, M. S., Booth, C., Stone, S., & Tagge, N. (2015). Impacting information literacy learning in first-year seminars: A rubric-based evaluation. portal: Libraries and the Academy, 15(3), 489-512. https://doi.org/10.1353/pla.2015.0030
Luetkenhaus, H., Borrelli, S., & Johnson, C. (2015). First year course programmatic assessment: Final essay information literacy analysis. Reference & User Services Quarterly, 55(1), 49-60. https://doi.org/10.5860/rusq.55n1.49
Lundstrom, K., Diekema, A. R., Leary, H., Haderlie, S., & Holliday, W. (2015). Teaching and learning information synthesis: An intervention and rubric based assessment. Communications in Information Literacy, 9(1), 60-82. https://doi.org/10.15760/comminfolit.2015.9.1.176
MacMillan, M., & Rosenblatt, S. (2015). They've found it. Can they read it? Adding academic reading strategies to your IL toolkit. In D. M. Mueller (Ed.), Creating sustainable community: The proceedings of the ACRL 2015 Conference, March 25-28, Portland, Oregon (pp. 757-762). Chicago: Association of College and Research Libraries. Retrieved from http://www.ala.org/acrl/files/conferences/confsandpreconfs/2015/MacMillan_Rosenblatt.pdf
MacQueen, K. M., McLellan-Lemal, E., Bartholow, K., & Milstein, B. (2008). Team-based codebook development: Structure, process, and agreement. In G. Guest & K. M. MacQueen (Eds.), Handbook for team-based qualitative research (pp. 119-135). Lanham, MD: AltaMira Press.
Marzano, R. J., Pickering, D., & McTighe, J. (1993). Assessing student outcomes: Performance assessment using the dimensions of learning model. Alexandria, VA: Association for Supervision and Curriculum Development.
McClure, R., Cooke, R., & Carlin, A. (2011). The search for the skunk ape: Studying the impact of an online information literacy tutorial on student writing. Journal of Information Literacy, 5(2), 26-45. https://doi.org/10.11645/5.2.1638
Oakleaf, M. (2008). Dangers and opportunities: A conceptual map of information literacy assessment approaches. portal: Libraries and the Academy, 8(3), 233-253. https://doi.org/10.1353/pla.0.0011
Rosenblatt, S. (2010). They can find it, but they don't know what to do with it: Describing the use of scholarly literature by undergraduate students. Journal of Information Literacy, 4(2), 50-61. https://doi.org/10.11645/4.2.1486
Rubric Assessment of Information Literacy Skills (RAILS). (2014). Institution 4. Retrieved from http://railsontrack.info/media/documents/2014/7/4_rails.pdf
Samson, S. (2010). Information literacy learning outcomes and student success. Journal of Academic Librarianship, 36(3), 202-210. https://doi.org/10.1016/j.acalib.2010.03.002
Scharf, D., Elliot, N., Huey, H. A., Briller, V., & Joshi, K. (2007). Direct assessment of information literacy using writing portfolios. Journal of Academic Librarianship, 33(4), 462-477. https://doi.org/10.1016/j.acalib.2007.03.005
Schilling, K., & Applegate, R. (2012). Best methods for evaluating educational impact: A comparison of the efficacy of commonly used measures of library instruction. Journal of the Medical Library Association, 100(4), 258-269. https://doi.org/10.3163/1536-5050.100.4.007
Turbow, D. J., & Evener, J. (2016). Norming a VALUE rubric to assess graduate information literacy skills. Journal of the Medical Library Association, 104(3), 209-214. https://doi.org/10.5195/jmla.2016.13
Woodward, K. M., & Ganski, K. L. (2013). BEAM lesson plan. UWM Libraries Instructional Materials. Retrieved from https://dc.uwm.edu/lib_staff_files/1
(Appendix omitted.)
Use of citations and references
* Errors that make accessing the original source difficult:
  * No page numbers for an in-text citation of a quote (missing from the reference list is o.k.)
  * No URL for a website
* Stylistic mistakes (e.g., DOI, capitalization, only one author cited but the citation can still be matched to the reference list) are allowed
Choice of paraphrasing, summary, or quoting
* All papers should have quotes; if not, is the student really paraphrasing?
* A quote needs to make sense, but if it seems like a freshman would have a hard time paraphrasing it, consider it correct (see paper 54)
* See the quote on paper 79, p. 4, as an example of an acceptable quote
Using information in ways that are true to the original context
* Examples of using information in ways that are not true to the original context include:
  * Citing someone citing someone else
  * Not using the research of the study (remember to look at source titles in the reference list to check for specific topics not discussed in the student paper)
  * Obviously using information from the abstract (look for titles in foreign languages)
(Appendix omitted.)
University of Northern Colorado UNIV 101 Library Session Lesson Plan