
Abstract

Software testing is both a critical part of software development and a topic still under-emphasized in computer science curricula. Automated grading systems have greatly improved the quality and frequency of feedback that students can receive on their programming assignments. Students are often expected to write tests for their programming projects, but state-of-the-art automated grading systems do not consider how to grade the quality of students’ tests. While instructors might manually grade student test cases, such an approach does not scale up to large class sizes and is similarly challenging to scale out across courses in a curriculum. Most existing automated approaches for grading student tests rely on traditional branch coverage metrics, which have repeatedly been shown not to correlate with test suite quality.

Our goal is to lower the human effort required to develop and deploy programming assignments that emphasize software testing, while also providing higher-quality, testing-focused feedback to students. We aim to do so by leveraging state-of-the-art measurements of software test quality and novel automated feedback techniques, developing tools and techniques that make it easier for instructors to evaluate student test suite quality and to provide tutoring to students. We focus particularly on mutation testing, which approximates the ability of a test suite to find bugs in an implementation. To support the thesis, we make three contributions. First, we present a study that examines current instructor perspectives and practices around teaching and evaluating software testing. Second, we examine using automatically generated mutants as a substitute both for manually written, instructor-authored mutants and for real faults in student programs, as in an “all-pairs” grading approach. Finally, we examine the effects of instructor-written hints, delivered as actionable automated feedback, on student test suite quality.
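To make the mutation-testing idea in the abstract concrete, here is a minimal, hypothetical sketch (not drawn from the dissertation itself): a single arithmetic-operator mutant of a small function, and a test that “kills” the mutant by producing a different outcome on it than on the original.

```python
# Minimal illustration of mutation testing.
# Hypothetical example; the function, mutant, and test are assumptions
# for illustration, not artifacts from the dissertation.

def median(xs):
    """Original implementation under test."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def median_mutant(xs):
    """Mutant: the binary operator '+' is replaced with '-'."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] - s[mid]) / 2  # mutated line

def test_median(fn):
    """A test kills a mutant if it passes on the original but fails on the mutant."""
    return fn([1, 2, 3, 4]) == 2.5

assert test_median(median)             # passes on the original
assert not test_median(median_mutant)  # fails on the mutant: killed
```

A test suite’s mutation score is the fraction of mutants it kills; the dissertation’s second contribution asks whether automatically generated mutants can stand in for instructor-written ones (and for real student faults) when computing such a score.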

Details

Title
Improving Tutoring and Evaluation of Software Testing in the Classroom
Number of pages
123
Publication year
2025
Degree date
2025
School code
0160
Source
DAI-B 86/11(E), Dissertation Abstracts International
ISBN
9798314854204
Committee member
Derbinsky, Nate; DeOrio, Andrew W.
University/institution
Northeastern University
Department
Computer Science
University location
United States -- Massachusetts
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32000192
ProQuest document ID
3200483971
Document URL
https://www.proquest.com/dissertations-theses/improving-tutoring-evaluation-software-testing/docview/3200483971/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic