Cellular pathology facilitates diagnosis and treatment stratification in clinical trials. Genomic approaches divide traditional diagnostic entities into smaller subcategories, which in turn require large multicentre, often multinational, interventional studies. Such studies demand high diagnostic standards and uniform reporting.
Digital pathology refers to the use of computer workstations to view digital whole slide images (WSIs) obtained from high-resolution scanning of glass microscope slides; uses include teaching, research and primary diagnostic reporting. Digital pathology devices or image analysis algorithms used in diagnostic reporting usually require regulatory approval, such as FDA clearance (USA) or CE IVD marking (Europe).
Digital pathology and image analysis could ensure greater accuracy, reproducibility and standardisation of study inclusion criteria and outcomes. Automated image analysis can help to extract measurements and features that are known to be relevant. Quantification of immunohistochemical (IHC) staining is one example where automated methods are already being incorporated, with some success, into clinical practice. In general, image analysis can provide more reproducible quantification of the morphology of individual cells or of relevant tissue components such as glands. Increasingly, deep learning based approaches are replacing traditional image analysis algorithms. By training complex computational models directly from data, it is often possible to build algorithms that surpass the capabilities of traditional image analysis methods. Examples include the scoring of PD‐L1, quantification of immune infiltrates to predict outcomes in testicular tumours, detection of sentinel lymph node metastases and superior prediction of colorectal cancer outcome compared with standard morphological assessment. The interpretation of studies comparing humans with deep learning in pathology can be affected by methodological considerations. For example, in the Bejnordi et al study, pathologists were required to review 129 consecutive cases in a limited time frame, which does not reproduce typical pathology reporting (where cases are mixed and difficult cases are not subject to a time limit).
Machine learning holds the promise of providing capabilities that not only mimic but also enhance the visual analysis pathologists perform. Beck et al used a range of features provided by a commercial image analysis platform to identify novel stromal features associated with survival in breast cancer. Deep learning‐based artificial intelligence (AI) has identified novel markers of distant metastasis in colorectal cancer and novel morpho‐molecular associations. Correlating morphological patterns with genetic stratifiers using traditional machine learning or deep learning demonstrates substantial promise. Ultimately, it may be possible to overcome limitations of existing categorical grading systems with such tools.
Image analysis is a complex task that may involve steps such as pre‐processing, accurate delineation of objects of interest, or the measurement of certain shape or texture features. Pre‐analytical considerations such as standardisation of immunohistochemical staining for reproducible results must be addressed, even for relatively simple tasks such as quantifying areas of tumour demonstrating positive staining for a cytokeratin marker using a colour threshold. Overview articles provide an introduction to different image analysis techniques.
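To make the colour-threshold example above concrete, the following minimal sketch estimates the DAB-positive (brown chromogen) area fraction of an IHC image tile using colour deconvolution in scikit-image. The file name, the near-white background cut-off and the use of an Otsu threshold are illustrative assumptions only; a trial-grade pipeline would validate the stain separation and thresholds against the laboratory's own staining.

```python
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

def dab_positive_fraction(image_path, dab_threshold=None):
    """Estimate the fraction of tissue area showing DAB (brown) staining.

    Uses the haematoxylin/eosin/DAB separation built into scikit-image,
    which is an approximation that should be validated per laboratory.
    """
    rgb = io.imread(image_path)[..., :3]      # drop any alpha channel
    dab = rgb2hed(rgb)[..., 2]                # colour deconvolution, DAB channel

    # Crude tissue mask: exclude near-white background (assumes 8-bit input).
    tissue = rgb.mean(axis=-1) < 220

    # Otsu threshold on the DAB channel as a convenient default cut-off.
    if dab_threshold is None:
        dab_threshold = threshold_otsu(dab[tissue]) if tissue.any() else threshold_otsu(dab)

    positive = (dab > dab_threshold) & tissue
    return positive.sum() / max(tissue.sum(), 1)

# Hypothetical usage:
# print(f"DAB-positive area: {dab_positive_fraction('cytokeratin_tile.png'):.1%}")
```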
It is possible to use either commercially available dedicated image analysis programmes designed for digital pathology applications [Visiopharm, Definiens Tissue Studio, Indica Labs HALO (Corrales, New Mexico, USA)] or open-source software such as QuPath (Open Source Software for Quantitative Pathology) or ImageJ (Image Processing and Analysis in Java). Deep learning methods are increasingly incorporated into commercial and open‐source image analysis applications. Radiological imaging applications used to assess lung and liver nodules already have FDA clearance for clinical use; additionally, performance equivalent to human experts has been demonstrated in the interpretation of optical coherence tomography scans.
Image analysis has the potential to identify, extract and quantify features in greater detail than pathologist assessment, which may produce improved prediction models or perform tasks beyond manual capability, such as generating tumour‐infiltrating lymphocyte (TIL) maps and glandular maps. There is now much interest in using these technologies in clinical trials.
An NCRI CM‐Path Quality Assurance in Clinical Trials Workshop was held on 21 March 2017 with representation from key members of industry, regulation and pathology. Regulation, training, oversight, laboratory processes and scoring/reporting were allocated to subgroup teams, whose conclusions were presented for discussion by the QA panel; these topics are covered in separate articles. The scoring/reporting discussions included the use and validation of digital pathology and image analysis technologies in clinical trials. Currently there are limited published examples of their use in clinical trial practice, and methodological standardisation, including minimal standards for publication, requires further development.
In this article, we provide an overview of the utility of these technologies in clinical trials and discuss potential applications, current challenges, limitations and remaining unanswered questions that require addressing prior to routine study adoption (Table).
Table. Key issues that require consensus for the adoption of digital pathology and image analysis
The negative impact of inter‐observer variation among pathologists assessing lymphoma prompted the adoption of central case review in the 1960s. Widespread central pathology case review occurs in clinical trials and is particularly valuable where rare and morphologically challenging diagnostic entities occur, as reporting pathologist experience can substantially influence reporting.
Currently, most central reviews occur after patient management decisions have been implemented, as quality control prior to publication rather than in 'real time' for trial entry. Central review requires additional slides to be produced from tissue blocks, which risks exhausting tissue required for direct patient care. In the event of review slide loss, it may not be possible to produce facsimile slides from limited remaining tissue.
A shortage of skilled trials pathologists is becoming a key issue in the conduct of clinical trials within the UK, and digital pathology has the potential to ameliorate this by linking distant sites and expanding access to expert pathologists. Rapid dissemination of identical images to multiple centres allows simultaneous case review, reducing turnaround times and ensuring consensus opinion before therapeutic allocation. Case reclassification at the end of a study could indicate suboptimal patient treatment and negate the significance of investigational findings. Duplicate tissue sections have allowed simultaneous peripheral and central review reporting in a multinational interventional trial in nephroblastoma to address reporting variance before therapeutic allocation: in the study conducted by Vujanić et al, 9 of 248 simultaneously reported cases underwent diagnostic change resulting in alternative post‐operative therapy. Even minor errors in diagnostic accuracy can affect the statistical significance of trial outcomes. Poor biomarker validation and positive publication bias can lead to expensive negative randomised controlled trials.
The central review of 552 radical prostatectomy specimens submitted for EORTC trial 22911 showed low concordance between local assessment and central review for certain key parameters (evaluation of extra‐prostatic extension and surgical margin status). Central review can facilitate identification of discordant diagnostic parameters, but this may occur only after recruitment of numerous patients.
Digital pathology can permit simultaneous case review and allow rapid access to international experts in entities of interest. Reduction or removal of the need to physically transfer slides and tissue blocks, avoiding damage or loss, is a defining logistical advantage of the digital approach. Although multinational commercial platform providers can overcome the technical limitations posed by trials carried out in separate countries, differing national requirements regarding appropriate data governance could impede disseminated review of digital material. The European General Data Protection Regulation (GDPR), which came into force on 25 May 2018, replacing the Data Protection Act (1998), has strengthened and unified data protection for individuals within the European Union (EU), whilst addressing the export of personal data outside the EU.
The requirement to physically transfer slides may be retained by centralised genomic analysis and biobanking requirements, or where participating units lack appropriate infrastructure and must transfer slides elsewhere for digitisation. Slide hosting repositories that can accommodate large slide images (typically 0.5–4 GB), satisfy data protection legislation, meet trial protocol requirements and ensure that patient‐identifiable image‐associated metadata have been removed represent a considerable resource burden. A variety of proprietary file formats and platforms pose interoperability challenges; in the absence of widespread vendor‐agnostic file formats, clinical trial protocols must specify the image formats to be used.
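As an illustration of the kind of pre-transfer check such a repository might apply, the sketch below uses the openslide-python library to report a slide's file size and flag properties whose names suggest patient- or case-identifying content. The keyword list and file name are purely hypothetical; each trial would need its own validated checklist agreed with its data governance team, and the slide label/macro images (which can show handwritten identifiers) also need review.

```python
import os
import openslide

# Illustrative keywords only; a real trial would agree a validated checklist.
SUSPECT_KEYS = ("patient", "name", "dob", "birth", "accession", "case", "barcode")

def audit_slide(path):
    """Report file size, potentially identifying properties and the presence
    of label/macro images for a whole slide image prior to transfer."""
    size_gb = os.path.getsize(path) / 1e9
    slide = openslide.OpenSlide(path)
    flagged = {
        key: value
        for key, value in slide.properties.items()
        if any(token in key.lower() for token in SUSPECT_KEYS)
    }
    label_images = [name for name in slide.associated_images
                    if name in ("label", "macro")]
    slide.close()
    return size_gb, flagged, label_images

# Hypothetical usage:
# size_gb, flagged, labels = audit_slide("trial_case_001.svs")
# print(f"{size_gb:.2f} GB; flagged: {list(flagged)}; label images: {labels}")
```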
Pathologists who have not transferred to digital pathology need to be trained in line with RCPath guidelines, and the costing of appropriate pathologists and their training should be factored into the trial business plan. Additionally, standard operating procedures (SOPs) covering scanning equipment, database construction, anonymisation of images (where appropriate) and their transfer are required, potentially with input from a pathology working group. Appropriate training for digital image reporting and slide scanning is required, which should include appropriate information governance. SOPs for equipment should define acceptable scanning platforms, required file formats and required scanning resolution. Scanning and reporting for the trial should preferably be performed by laboratories operating under ISO 15189:2012 or working to good clinical practice (GCP) standards. Should scanners with appropriate regulatory approval (CE IVD or FDA) be available, these should be used in preference to those without such quality marks.
Digital images can form image libraries, allowing streamlined research as new innovations become available. By contrast, H&E‐stained slides and, notably, immunofluorescence‐labelled slides degrade over time and require additional preparation (potentially including further slide production from tissue blocks).
Maintenance and sustainability of data storage are thus key requirements. Strict adherence to data governance frameworks, with documentation that exceeds current regulatory requirements, should help secure the availability of valuable material for future research projects.
Pathologist training for participation in clinical trials often involves face‐to‐face teaching. Digital pathology can facilitate flexible distance learning that reaches a wider audience than traditional face‐to‐face learning, and digital training programmes potentially reduce the requirement for the expert pathologists leading a study to deliver face‐to‐face guidance. Training may be required to improve standardisation of a feature routinely evaluated in diagnostic work, or to evaluate a novel feature not routinely reported; the latter may require a more comprehensive educational programme.
Investigators can be trained by remote use of annotated digital material, either as part of an interactive web seminar or as an online training programme undertaken at the convenience of the participating pathologist. Sample images for reporting can be incorporated as an additional quality assurance measure. This is particularly valuable where pathologists are required to report a novel parameter in relatively few patients recruited from a single centre. The advantage of direct visual feedback‐guided software for a novel parameter has been demonstrated in the assessment of the percentage of TILs in breast cancer, which predicts response to neoadjuvant therapy. This is not a standard parameter in routine practice, and participating pathologists require training to report to an agreed standard. Improved concordance between pathologists reporting the percentage of TILs was demonstrated after the use of training software utilising digital images of breast tumour tissue microarrays (TMAs).
Digital pathology training software allows direct demonstration of compliance with clinical trial regulatory requirements and generates training logs, which pathologists can additionally use for revalidation and appraisal purposes. Clinical trial training applications can be reformatted to drive quality improvement in routine practice.
There is substantial interest in the use of image analysis programmes to assist standardisation of reporting and the introduction of novel parameters where accurate assessment by pathologists is unfeasible. Standardisation of ER and PR staining interpretation in breast cancer has received considerable interest, as inter‐observer variation can directly impact patient care. Although expert pathologist concordance in stain interpretation was greater than machine assessment in a large study of pooled TMA cores, equivalent concordance was seen for HER‐2 assessment. Interestingly, a small number of extreme discrepancies were identified by digital image analysis; digital image analysis screening could therefore initially be used to highlight isolated reporting discrepancies.
Automated image analysis also allows simultaneous scoring of ER and PR restricted to tumour‐rich areas only, as previously reported, and additional studies demonstrate non‐inferiority to manual HER‐2 scoring in breast cancer.
FDA and CE IVD‐cleared algorithms exist for the assessment of a small number of pathological features, such as ER, PR, HER‐2 and Ki‐67 expression in breast cancer. Where image analysis algorithms are used in clinical trials, an appropriate evidence base for the algorithms must be available. A separately published validation dataset describing the performance of the algorithm in measuring the variable assessed would be regarded as a minimum requirement.
Pre‐analytical staining variation and background staining are key problems in image analysis development. Considerable variation in immunohistochemical and standard H&E staining can exist between laboratories (although image analysis can control for variation in staining intensity). When constructing image analysis applications, demonstrating reproducibility in comparison to standard histopathological assessment is an important requirement. Algorithm–pathologist correlation is the most commonly used method; the use of multiple pathologists is always preferable to reflect routine practice. Clear descriptions of quality control measures and validation steps, including algorithm validation, must be provided for every trial in which image analysis is used. Measures of reproducibility, such as pathologist–algorithm correlation and inter‐pathologist variability, should be reported in publications.
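The sketch below illustrates, with made-up scores, how such reproducibility measures might be computed: a correlation between algorithm and pathologist on continuous percentage scores, and Cohen's kappa between two pathologists on categorical bins. The scores, bin boundaries and choice of statistics are illustrative only; the appropriate measures (e.g. intraclass correlation, weighted kappa) depend on the parameter being validated.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

# Illustrative percentage-positive scores for eight cases.
algorithm    = np.array([12, 35, 60, 5, 80, 45, 22, 70])
pathologist1 = np.array([15, 30, 55, 8, 85, 40, 25, 65])
pathologist2 = np.array([10, 40, 65, 5, 75, 50, 20, 72])

# Algorithm-pathologist correlation on continuous scores.
r, _ = pearsonr(algorithm, pathologist1)
rho, _ = spearmanr(algorithm, pathologist1)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")

# Inter-pathologist agreement after binning (<10%, 10-50%, >50%; illustrative bins).
def categorise(scores):
    return np.digitize(scores, bins=[10, 50])

kappa = cohen_kappa_score(categorise(pathologist1), categorise(pathologist2))
print(f"Inter-pathologist Cohen's kappa = {kappa:.2f}")
```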
Region of interest (ROI) selection methodology should be transparently described (where applicable), including whether analysis was performed on ROIs, hot spots, WSIs or a pre‐selected sample (e.g. TMA spots), in order for it to be reliable and reproducible. Selection of ROIs or hot spots can be completely automated, completely manual, or a combination of both; each approach is subject to different potential errors, which can impact on study design. For example, acquiring data or applying scoring systems optimised for visual interpretation within an AI protocol, without appropriate validation, is inappropriate.
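As a schematic of what fully automated hot-spot selection can involve, the sketch below scans a per-tile cell density map with a fixed window and returns the window with the highest summed density. The window size, the random demonstration data and the brute-force search are illustrative assumptions; production tools typically work from detected cell coordinates and defined fields of view.

```python
import numpy as np

def find_hotspot(density_map, window=50):
    """Return the origin and score of the window with the highest summed
    density in a 2D map (e.g. positive cells counted per tile)."""
    h, w = density_map.shape
    best_score, best_origin = -np.inf, (0, 0)
    for y in range(0, h - window + 1):
        for x in range(0, w - window + 1):
            score = density_map[y:y + window, x:x + window].sum()
            if score > best_score:
                best_score, best_origin = score, (y, x)
    return best_origin, best_score

# Random data standing in for per-tile cell counts (purely illustrative).
rng = np.random.default_rng(0)
demo_map = rng.poisson(2.0, size=(200, 200)).astype(float)
origin, score = find_hotspot(demo_map)
print(f"Hot spot window origin {origin}, summed density {score:.0f}")
```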
There is considerable inter‐observer variability in pathologists' assessment of the percentage of tumour cells within lung and colorectal cancer biopsies. Inaccurate estimation of tumour cellularity has the potential to result in inaccurate reporting of key actionable variants in genes such as EGFR, RAS and BRAF. Use of the TissueMark™ (Philips Pathology, The Philips Centre, Guildford, Surrey, UK) platform to automatically annotate tumour boundaries and assess the percentage of tumour cells showed superior performance compared with manual assessment. Standardisation of tumour sampling allows more refined molecular categorisation of tumours and facilitates stratified therapeutic approaches.
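A simple worked calculation shows why cellularity estimates matter for variant reporting: under the simplifying assumptions of a clonal, heterozygous variant, diploid cells and no copy-number change, the expected variant allele fraction is half the tumour cell fraction, so an over- or under-estimate of cellularity directly changes whether a variant is expected to exceed an assay's limit of detection. The 5% limit of detection and the example estimates below are illustrative, not values from the cited work.

```python
def expected_vaf(tumour_cell_fraction, heterozygous=True):
    """Expected variant allele fraction for a clonal variant, assuming diploid
    tumour cells and no copy-number change (a deliberate simplification)."""
    return tumour_cell_fraction * (0.5 if heterozygous else 1.0)

def detectable(tumour_cell_fraction, assay_lod=0.05):
    """Is the expected VAF above an assumed assay limit of detection?"""
    return expected_vaf(tumour_cell_fraction) >= assay_lod

# A 40% visual estimate versus a 15% image-analysis estimate of cellularity
# lead to different expectations about detecting a heterozygous variant.
for estimate in (0.40, 0.15):
    print(f"cellularity {estimate:.0%}: expected VAF {expected_vaf(estimate):.0%}, "
          f"detectable at 5% LoD: {detectable(estimate)}")
```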
The use of digital pathology to provide novel information from histological samples is an exciting area of development. Rapid quantification and high reproducibility can be achieved using digital image analysis, for example in the localisation and quantification of immune cell infiltrates. The quantity and localisation of T cells is the underlying basis for the Immunoscore™ (Laboratory of Integrative Cancer Immunology, INSERM, Paris, France) technique in colorectal adenocarcinoma: patients with a low CD3+ and CD8+ T cell density in the tumour centre and invasive margin are at increased risk of disease relapse. This marker could be used to stratify adjuvant therapy in prospective randomised controlled trials in Stage II disease, where the absolute benefit of adjuvant cytotoxic therapy is small. Novel digital morphometric signatures not previously established in the pathology community can also be mined and linked to clinical outcome, as shown recently for breast cancer and stage II colorectal adenocarcinoma.
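The sketch below gives a schematic of the underlying arithmetic: cell densities (cells per mm²) for CD3+ and CD8+ cells in the tumour centre and invasive margin, compared against cutpoints to produce a simple high/low pattern. The counts, areas and cutpoints are invented for illustration; the actual Immunoscore™ uses percentile-based scoring validated on large cohorts and is not reproduced here.

```python
def cell_density(cell_count, region_area_mm2):
    """Cells per square millimetre within a defined region."""
    return cell_count / region_area_mm2

def density_pattern(densities, cutpoints, high_needed=3):
    """Count how many region/marker densities exceed their cutpoint and map
    the result to a crude low-/high-risk pattern (schematic only)."""
    high = sum(densities[key] >= cutpoints[key] for key in densities)
    return "low-risk pattern" if high >= high_needed else "high-risk pattern"

# Invented counts and region areas for a single illustrative case.
densities = {
    "CD3_centre": cell_density(5200, 8.0),
    "CD3_margin": cell_density(6100, 6.5),
    "CD8_centre": cell_density(1800, 8.0),
    "CD8_margin": cell_density(2400, 6.5),
}
cutpoints = {"CD3_centre": 500, "CD3_margin": 700,
             "CD8_centre": 150, "CD8_margin": 250}
print(density_pattern(densities, cutpoints))
```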
Machine learning methods can facilitate accurate quantitative assessment of digital images at a performance level potentially exceeding that of human observers. The identification of cancer cells, stromal tissue or inflammatory cells typically requires accurate ground‐truth training datasets and large numbers of cases to provide optimal automated assessment. Large, openly available pooled digital datasets and the establishment of Grand Challenges for computational biologists provide a means to benchmark and evaluate image analysis algorithms; the CAMELYON challenge to identify lymph node metastases demonstrates the advantage of this approach. Large‐scale interventional studies provide a relatively accessible source of suitable cases for retrospective analysis, with clear management and outcome data. This allows assessment of multiple image features by multiple image analysis programmes, with subsequent univariate and multivariate analysis to identify the most relevant novel predictive and prognostic parameters or correlations with molecular markers.
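The sketch below shows, in schematic form, how a cell classifier might be trained and evaluated against ground-truth labels using scikit-learn. The feature matrix and labels are random placeholders (so the reported metrics are meaningless), and modern pipelines typically train convolutional networks on image patches rather than random forests on hand-crafted features; the point is the pattern of held-out evaluation against pathologist-provided ground truth.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder data: one row per detected cell with hand-crafted features
# (e.g. nuclear area, eccentricity, mean haematoxylin intensity) and a
# ground-truth class provided by pathologist annotation.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 3))
y = rng.choice(["carcinoma", "stroma", "lymphocyte"], size=600)

# Hold out data for evaluation; in practice splits should be made by
# patient or slide to avoid leakage between training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```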
Retrospective analysis of 768 pre‐treatment biopsies taken as part of a large RCT investigating neoadjuvant therapeutic approaches in breast cancer (neo‐tAnGo) identified median lymphocyte density as independently predictive of complete pathological response to therapy on multivariate analysis. This study employed machine learning methods to classify cells as carcinoma, stroma or lymphocytes, with a clear description of how the quality metrics involved in the analysis were determined, such as the use of automated ROI selection. Notably, the study was able to evaluate initially promising parameters rapidly and demonstrate a lack of predictive utility in an adequately powered study. Adherence to good scientific practice, by providing a detailed description of the techniques used and making the source code and images available, allowed transparency and facilitated reproducibility. Establishing such clear and transparent standards in image research practice is to be commended: keeping algorithms and source images open to scientific scrutiny, as opposed to keeping methodology closed to safeguard commercial interests, can be anticipated as a key issue for the developing digital image analysis community.
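To illustrate the shape of such a multivariable analysis (not the neo‐tAnGo analysis itself), the sketch below fits a logistic regression of pathological complete response on lymphocyte density, adjusted for two other covariates, and reports an adjusted odds ratio. All data are simulated and the covariates and effect sizes are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated cohort standing in for trial data (all values invented).
rng = np.random.default_rng(1)
n = 500
lymph_density = rng.gamma(shape=2.0, scale=50.0, size=n)   # cells per mm^2
tumour_grade  = rng.integers(1, 4, size=n)
age           = rng.normal(55, 10, size=n)
logit = -3 + 0.004 * lymph_density + 0.3 * (tumour_grade - 2)
pcr = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Multivariable logistic regression: density adjusted for grade and age.
X = sm.add_constant(np.column_stack([lymph_density, tumour_grade, age]))
model = sm.Logit(pcr, X).fit(disp=False)

or_per_100 = np.exp(model.params[1] * 100)   # odds ratio per 100 cells/mm^2
print(f"Adjusted OR per 100 cells/mm^2: {or_per_100:.2f} "
      f"(p = {model.pvalues[1]:.3g})")
```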
The discussion of applying advanced data analysis technologies to clinical trial data is by no means limited to image analysis tasks. Scott Gottlieb, the FDA Commissioner of Food and Drugs, remarked that 'AI holds enormous promise for the future of medicine, and we're actively developing a new regulatory framework to promote innovation in this space and support the use of AI‐based technologies', clearly recognising the need to include advanced digital health tools in the drug development pipeline. The Precertification Pilot Programme (Pre‐Cert) provides new regulatory guidelines for permitting the use of digital technologies that include machine learning.
Central laboratory image analysis should be considered in study design because of its advantages in standardisation of results. Although image analysis provides inherent reproducibility, inter‐platform variation between individual centres is a potential source of bias. Construction of SOPs with designated acceptable platforms, image analysis algorithms and validation steps ensuring inter‐site reproducibility would address this issue. Where companion diagnostic techniques reliant on specific staining platforms and accompanying image analysis platforms are required, adherence to manufacturer guidance would require clear documentation. Provision of unstained slides for immunohistochemical analysis and concurrent image analysis may be required as an additional quality assurance measure to account for inter‐laboratory staining variation. The difficulty of ensuring both reproducible staining quality and reproducible image analysis across multiple laboratories could prevent some centres from participating in studies. Although centralised image analysis overcomes these issues, the logistical constraints inherent to traditional studies undertaking centralised review remain. If many sites are conducting image analysis, repeatability analysis should be undertaken because of potential inter‐laboratory reproducibility concerns.
Slide scanners and image analysis algorithms intended for medical use (including diagnosis) are classed as medical devices. The regulatory requirements for clinical performance studies of in vitro diagnostics (IVDs), and for the use of IVDs in clinical trials of medicinal products, remain a subject of much debate and are evolving rapidly. The US FDA is testing a new Pre‐Cert model intended to provide, through premarket review and appraisal of organisational excellence, the same quality of information as the traditional approach to ensuring that safety and effectiveness standards are met. This novel model for the pre‐market review of digital health tools as medical devices includes a new approach to the review of AI tools.
In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) regulates medicines, medical devices and blood components for transfusion. To ensure GCP compliance, the MHRA carries out inspections of trial sites (including laboratories), mostly based on a risk assessment score. How oversight of clinical trials utilising these technologies as medical devices will be regulated or inspected remains a subject of debate. Constantly evolving AI applications pose additional challenges, but regulatory‐cleared AI algorithms are already in clinical practice.
To the best of our knowledge, there are no guidelines covering the use of digital pathology or image analysis in clinical trials. The low yield of clinically actionable biomarkers from a large volume of research studies with considerable resource outlay led to the construction of the REMARK recommendations for biomarker studies in 2005. Studies that fail to meet these consensus requirements for a reputable biomarker study are increasingly excluded from systematic reviews of the evidence base for diagnostic approaches. A recent systematic review evaluating prognostic biomarker use in oesophageal adenocarcinoma demonstrated the effect of applying the REMARK guidelines as inclusion criteria: only 36 of 214 eligible studies (17%) were included.
Constructing similarly robust recommendations for digital pathology applications would minimise the extensive waste of resources encountered in prior biomarker studies. Failure of reproducibility driven by a lack of reporting of experimental detail has been described as a major factor in the inefficient development and adoption of clinically relevant biomarkers. Diligent reporting of the processes by which digital pathology applications were validated is essential to avoid repeating similar failures of research practice. Direct adoption of existing biomarker guidelines could be considered: parameters assessed by digital pathology certainly meet the definition of a biomarker as a 'defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes or responses to an exposure or intervention, including therapeutic interventions'.
However, the application of guidelines intended for biospecimen‐derived biomarkers has been considered unsatisfactory when applied to imaging techniques, leading to the construction of separate guidelines for 'imaging biomarkers'. Synthesis of the most appropriate approaches from both biospecimen‐derived and imaging biomarkers would be logical for digital pathology applications.
The Health Research Authority (HRA) has recently updated its guidance on the approval of new medical devices, including software applications. The use of diagnostic platforms would require dedicated medical device clinical evaluation studies to obtain HRA approval, which would entail using the systems in parallel with standard light microscopy practice. Establishing the equivalence/non‐inferiority of a digital workstream for standard diagnostic practice is being undertaken in several UK centres with supporting guidance from the Royal College of Pathologists. The adoption of clinical studies using digital platforms would require the use of flagged research ethics committees to provide expertise in digital pathology; a lack of appropriate specialists available to participate in the ethical approval process could hinder digital pathology development. Once standard diagnostic practice in clinical trials can be facilitated by digital platforms, approval of digital image analysis applications would become feasible. It is anticipated from joint HRA/MHRA guidance on software development that IVD performance evaluations would be undertaken in line with existing biochemical biomarker assays. Both industry and academic sponsors may find it challenging to obtain clinical trial insurance to indemnify against diagnostic error induced by digital image analysis platforms. It is uncertain whether actuarial assessment can be based on digital image use in a healthcare system where digital image analysis is more widely adopted, such as the USA. Combined pathologist‐plus‐machine assessment approaches may pose complex indemnity and regulatory challenges: appropriate mentoring from radiologists and industry specialists involved in digital radiology would be recommended to overcome these issues.
In this paper, we describe the use and potential future applications of digital pathology and image analysis technologies in clinical trials. These technologies can play a role in central review, training and image analysis, and can be used to improve the assessment of standard pathological features or to extract novel insights. At present, they are most commonly used where feasibility has already been demonstrated, such as central review, or where quality and efficiency benefits are obvious, such as quantification of immune infiltrates in immuno‐oncology trials.
In order to realise the potential of these technologies and dramatically improve the quality of pathology input to clinical trials, the digital pathology community, together with regulators and industry, must establish practice standards for clinical trial use. Linking with international centres of excellence and involving other specialists, such as software engineers and information network specialists, is vital.
The Table lists the key issues that require consensus for the adoption of digital pathology and image analysis in clinical trials; CM‐Path proposes to address these issues in a future multidisciplinary workshop.
We acknowledge the helpful advice provided by Daniel O'Connor and Steven Lee from the MHRA on current regulatory issues regarding digital image analysis applications in the UK. We also acknowledge contributions from members of the NCRI CM‐Path Quality Assurance Panel.
NCRI CM‐Path Quality Assurance Panel: Owen J Driskell, Department of Clinical Biochemistry, University Hospitals of North Midlands, Stoke‐on‐Trent, Staffordshire, UK; Institute for Applied Clinical Sciences, University of Keele, Stoke‐on‐Trent, Staffordshire; UK National Institute for Health Research Clinical Research Network West Midlands. Andy Hall, Newcastle University, Newcastle upon Tyne NE2 4BW, UK. Jacqueline James, School of Medicine, Dentistry and Biomedical Sciences, Centre for Cancer Research and Cell Biology, Institute for Health Sciences, Queens University Belfast, Belfast, UK. Louise J Jones, Centre for Tumour Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, London, UK. Clare Craig, Genomics England, London, UK. Philip Sloan, Department of Cellular Pathology, Newcastle upon Tyne Hospitals NHS Trust, Newcastle upon Tyne, UK. Gareth J Thomas, Faculty of Medicine Cancer Sciences Unit, Southampton University, Somers Building, Southampton, UK. Philip Elliott, Centre for Tumour Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, London, UK. Maggie Cheang, Institute of Cancer Research Clinical Trials and Statistics Unit, The Institute of Cancer Research, 15 Cotswold Road, Surrey SM2 5NG, UK. Manuel Rodriguez‐Justo, Department of Gastroenterology, University College Hospitals London, London, UK. Gabrielle Rees, Department of Cellular Pathology, John Radcliffe Hospital, Oxford, UK. Manuel Salto‐Tellez, Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, UK. Nicholas P West, Pathology and Tumour Biology, Leeds Institute of Cancer and Pathology, University of Leeds, Leeds, UK. Ilaria Mirabile, Experimental Cancer Medicine Centres (ECMCs) Network, London, UK. Emily Howlett, Precision Medicine Team, Cancer Research UK, London, UK. Laura Stevenson, Institute of Cancer Research Clinical Trials and Statistics Unit, The Institute of Cancer Research, 15 Cotswold Road, Surrey SM2 5NG, UK. Maria da Silva, Sectra UK. Newton ACS Wong, Department of Cellular Pathology, Southmead Hospital, Bristol, UK. Sidonie Hartridge‐Lambert, Bristol‐Myers Squibb, London, UK. Joseph M Beecham, Nanostring, Washington, USA. Stephanie Traub, Centre for Drug Development, Cancer Research UK, London, UK. Sidath Katugampola, Centre for Drug Development, Cancer Research UK, London, UK. Sarah Blagden, Department of Oncology, University of Oxford, Oxford, UK. James Morden, Institute of Cancer Research Clinical Trials and Statistics Unit, The Institute of Cancer Research, 15 Cotswold Road, Surrey SM2 5NG, UK.
Ethics approval was not required as no study subjects were involved and no patient data were accessed. This article represents the opinion of all authors regarding the development of digital pathology technology for clinical trial use.
RP, KO, MR, DS and CV conceived the outline of the paper and areas for discussion; additional oversight/commentary was provided by the CM‐Path QA Panel and HP, JR, NR. All authors were involved in writing the paper and had final approval of the submitted and published versions. This manuscript has been read and approved by all the authors, the requirements for authorship have been met and each author believes that the manuscript represents honest work.
Abstract
Digital pathology and image analysis potentially provide greater accuracy, reproducibility and standardisation of pathology‐based trial entry criteria and endpoints, alongside extracting new insights from both existing and novel features. Image analysis has great potential to identify, extract and quantify features in greater detail in comparison to pathologist assessment, which may produce improved prediction models or perform tasks beyond manual capability. In this article, we provide an overview of the utility of such technologies in clinical trials and provide a discussion of the potential applications, current challenges, limitations and remaining unanswered questions that require addressing prior to routine adoption in such studies. We reiterate the value of central review of pathology in clinical trials, and discuss inherent logistical, cost and performance advantages of using a digital approach. The current and emerging regulatory landscape is outlined. The role of digital platforms and remote learning to improve the training and performance of clinical trial pathologists is discussed. The impact of image analysis on quantitative tissue morphometrics in key areas such as standardisation of immunohistochemical stain interpretation, assessment of tumour cellularity prior to molecular analytical applications and the assessment of novel histological features is described. The standardisation of digital image production, establishment of criteria for digital pathology use in pre‐clinical and clinical studies, establishment of performance criteria for image analysis algorithms and liaison with regulatory bodies to facilitate incorporation of image analysis applications into clinical practice are key issues to be addressed to improve digital pathology incorporation into clinical trials.
Author affiliations
1 Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
2 Institute of Cancer Sciences – Pathology, University of Glasgow, Glasgow, UK
3 Centre for Oral Health Research, Newcastle University, Newcastle upon Tyne, UK
4 Strategy and Initiatives, National Cancer Research Institute, London, UK
5 Department of Computer Science, University of Warwick, Warwick, UK
6 Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK