Abstract

A review is textual feedback provided by a reviewer to the author of a submitted version. Peer reviews are used in academic publishing and in education to assess student work. While reviews are also important to e-commerce sites like Amazon and eBay, which use them to assess the quality of products and services, our work focuses on academic reviewing. We seek to help reviewers improve the quality of their reviews. One way to measure review quality is through a metareview, or review of reviews. We develop automated metareview software that provides rapid feedback to reviewers on their assessment of authors’ submissions. To measure review quality, we employ metrics such as review content type, review relevance, a review’s coverage of the submission, review tone, review volume, and review plagiarism (from the submission or from other reviews). We use natural language processing and machine-learning techniques to calculate these metrics. We summarize results from experiments that evaluate three of our review quality metrics (content, relevance, and coverage), and from a study analyzing user perceptions of the importance and usefulness of these metrics. Our approaches were evaluated on data from Expertiza and the Scaffolded Writing and Rewriting in the Discipline (SWoRD) project, two collaborative web-based learning applications.

Details

Title
Automated Assessment of the Quality of Peer Reviews using Natural Language Processing Techniques
Author
Ramachandran, Lakshmi 1 ; Gehringer, Edward F. 2 ; Yadav, Ravi K. 2 

1 Pearson, Boulder, USA (GRID:grid.420876.f) (ISNI:0000 0001 0308 5052)
2 North Carolina State University, Raleigh, USA (GRID:grid.40803.3f) (ISNI:0000 0001 2173 6074)
Pages
534-581
Publication year
2017
Publication date
Sep 2017
Publisher
Springer Nature B.V.
ISSN
1560-4292
e-ISSN
1560-4306
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2932202323
Copyright
© International Artificial Intelligence in Education Society 2017.