
Abstract

How does a medical practitioner know which treatments work? How is the standard of care established, and how is it updated? One approach is to systematically review and assess published medical articles, producing recommendations for practice and for further research. Creating such a review is a labor-intensive process: finding, organizing, and summarizing relevant medical research requires a considerable time investment. Can we, as natural language processing practitioners, assist in creating these reviews? In this thesis, I develop a demonstration toolkit and user interface designed to identify where these (imperfect) automation tools are useful now, where they may pose risks when misapplied, and what gaps exist between these technologies and what systematic reviewers want for their workflows.

This thesis broadly follows the systematic review process: search (Part I), evidence extraction (Part II), and textual synthesis (Part III), culminating in a demonstration of the technology from initial search to final result (Part IV). Throughout this work we discuss measures for safety and correctness. While scoping a review is important, this work does not address deciding which (medical) problems to study, leaving those choices to domain experts.

We begin with search assistance (Part I). In this part I build tools to assist systematic reviewers in their initial article screening process: given a review topic, I automatically generate PubMed queries. Crafting these queries can be challenging, often requiring the help of a medical librarian, an expert who is not always available to the reviewers. I automate the production of these queries and conduct interviews with systematic reviewers to gauge the usefulness and effectiveness of such a tool.
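
The thesis does not specify a particular interface here, but as an illustration of how a generated Boolean query might be executed against PubMed, the minimal Python sketch below uses NCBI's public E-utilities; the function name run_pubmed_query and the example query string are assumptions for illustration only, not the tool described in Part I.

    import requests

    EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def run_pubmed_query(query: str, max_results: int = 100) -> list:
        """Run a Boolean query against PubMed and return matching PMIDs."""
        params = {
            "db": "pubmed",        # search the PubMed database
            "term": query,         # the Boolean query string
            "retmode": "json",     # request a JSON response
            "retmax": max_results, # cap the number of returned IDs
        }
        resp = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["esearchresult"]["idlist"]

    # Example of the kind of Boolean query a generated search strategy might contain.
    pmids = run_pubmed_query(
        '"atrial fibrillation"[MeSH Terms] AND anticoagulants[MeSH Terms] '
        'AND randomized controlled trial[Publication Type]'
    )
    print(len(pmids), "candidate articles")

A reviewer (or a downstream screening tool) would then fetch titles and abstracts for these PMIDs and decide which studies merit full-text screening.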

We continue with evidence extraction (Part II). Given a clinical study, how do we know what interventions it considers, in which patients, and whether the treatments worked? I present the Evidence Inference dataset, a collection of randomized controlled trials annotated with patient populations, medical interventions (and any comparison interventions), and the outcomes each study measured. In this part I develop and refine these datasets and address the associated modeling challenges.
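
As a hedged sketch of what one annotated instance in such a dataset looks like, the structure below pairs an (intervention, comparator, outcome) prompt with a label describing the reported effect; the class and field names are illustrative assumptions, not the dataset's exact schema.

    from dataclasses import dataclass

    # Illustrative record shape; names are assumptions, not the published schema.
    @dataclass
    class EvidencePrompt:
        intervention: str  # the treatment under study
        comparator: str    # e.g., placebo or standard care
        outcome: str       # the measured endpoint

    @dataclass
    class EvidenceInferenceRecord:
        article_id: str          # identifier of the source trial report
        prompt: EvidencePrompt   # which comparison this instance asks about
        label: str               # e.g., "significantly increased",
                                 # "significantly decreased", or
                                 # "no significant difference"
        evidence_span: str       # text in the article supporting the label

    example = EvidenceInferenceRecord(
        article_id="example-trial-001",
        prompt=EvidencePrompt("aspirin", "placebo", "incidence of stroke"),
        label="no significant difference",
        evidence_span="Stroke incidence did not differ significantly between groups.",
    )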

We then step into the realm of evidence synthesis (Part III). I produce a dataset (MS2) of systematic reviews and build models to automatically generate a textual synthesis. We then study the effectiveness of standard opinion-summarization models and transformer-based multi-document summarization models (is such a summarization necessarily a synthesis?). In both cases we use the Evidence Inference dataset introduced above as the basis for important evaluation measures.
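
The specific models are detailed in Part III; purely as an illustration of the multi-document setting, the sketch below concatenates several study abstracts and runs an off-the-shelf transformer summarizer. The model choice and the simple concatenation strategy are assumptions for illustration, not the approach evaluated in the thesis.

    from transformers import pipeline

    # Off-the-shelf abstractive summarizer; any seq2seq summarization model would do here.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    # Placeholder inputs standing in for the abstracts of studies included in a review.
    abstracts = [
        "Trial A reported a modest reduction in the primary outcome ...",
        "Trial B found no significant difference between arms ...",
        "Trial C observed a benefit in a prespecified subgroup ...",
    ]

    # A simple baseline: summarize the concatenated abstracts jointly.
    joint_input = " ".join(abstracts)
    result = summarizer(joint_input, max_length=128, min_length=32, do_sample=False)
    print(result[0]["summary_text"])

Whether such a summary faithfully reflects the direction of each study's findings, and thus qualifies as a synthesis, is part of what an Evidence Inference-based evaluation can probe.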

Finally, I conclude by building a demonstration application (Part IV). I assemble these parts and their accompanying models into an application to assist systematic reviewers in their workflow, and I assess where these components do (and do not) fit into systematic reviewer workflows, resulting in directions for future research.

Details

Business indexing term
1010268
Title
Automation Assistance for Systematic Reviewers
Number of pages
175
Publication year
2025
Degree date
2025
School code
0160
Source
DAI-A 86/7(E), Dissertation Abstracts International
ISBN
9798302165213
Committee member
Wang, Lucy Lu; Amir, Silvio; Bau, David
University/institution
Northeastern University
Department
Computer Science
University location
United States -- Massachusetts
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
31769847
ProQuest document ID
3156317499
Document URL
https://www.proquest.com/dissertations-theses/automation-assistance-systematic-reviewers/docview/3156317499/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic