Introduction
Observational medical databases allow healthcare systems to electronically measure how well patient care conforms to clinical guidelines [1]. The retrospective measurement of clinical guideline adherence previously required manual chart review, a time-consuming and subjective process [2]. Electronic reviews (e.g., database queries) of guideline adherence, in contrast, generally take less time and can be easily modified, repeatedly executed, and scaled to larger sample sizes with minimal additional effort [3–5].
A limitation of electronic reviews as opposed to manual reviews is that they often evaluate less complex medical scenarios [6]. Diverse technologies have attempted to address this limitation. Natural language processing (NLP) can convert critical data elements and context-specific medical decisions into a structured form [7, 8]. The GuideLine Interchange Format (GLIF), the Guideline Elements Model (GEM), and other projects express complex clinical guidelines in a computable format [9–12]. Even with these tools, the analysis of complex medical scenarios continues to present a challenge.
Decision pathways are a method to diagram a complex medical scenario, such as the Centers for Disease Control and Prevention’s (CDC) HIV Diagnostic Testing Guidelines, which recommend a hierarchical sequence of testing to diagnose HIV [13]. Rather than assess adherence to each decision in the hierarchy, authors typically review a single decision point. For example, Cane et al. reviewed HIV resistance testing in patients with a low HIV viral load [14]. Improved analysis methods would promote the review of an entire medical scenario, rather than a single decision, for adherence to medical guidelines.
Graph theory provides a mathematical construct capable of modeling complex decision pathways [9, 15]. A graph, according to graph theory, consists of nodes, commonly represented as circles, connected by edges, commonly represented as lines. Graphs in which the edges denote a path to be followed are termed directed graphs and their edges are represented as arrows. Attributes of the graph can represent additional information: a line’s style can represent proper or improper adherence to current guidelines (e.g., solid line for adherence, dashed line for non-adherence), and the thickness of the line can represent the utilization frequency of an edge.
To facilitate the use of observational medical databases to evaluate guideline adherence in a complex medical scenario, a method involving graph theory is introduced. Using graph theory, we created a model of the CDC’s HIV Diagnostic Testing Guidelines to evaluate the adherence of clinicians to HIV diagnostic testing recommendations provided by the CDC, identify intervention targets, and suggest an appropriate intervention strategy.
Methods
We developed a method to assess adherence to HIV testing guidelines in a large healthcare system by leveraging historical electronic health record (EHR) data. We assessed the CDC’s HIV Diagnostic Testing Guidelines by modeling them as a directed graph. For each patient we created a directed graph of their HIV testing. A patient’s graph begins at the start node and travels along the graph’s edges to reach the node of the next test performed in chronological order. The set of patient-level graphs was aggregated into a summary graph of all HIV testing sequences performed within our healthcare system. Finally, we compared these two graphs, the first representing the CDC’s recommended approach to HIV diagnostic testing and the second representing the testing patterns we found in our healthcare system. The comparison allowed us to assess for deviations from the recommended guidelines. The Supplement and online code repository contain more technical details (e.g., step-by-step examples, data structures, algorithms) (S1-S6 Figs in S1 File, S1-S7 Tables in S1 File) [16].
Modeling of HIV diagnostic testing guidelines as a graph
Recommended approach to HIV diagnostic tests.
The CDC publishes guidelines for HIV Diagnostic Testing [13]. They recommend an HIV-1/2 antigen/antibody combination immunoassay to screen for HIV followed by a confirmation test, an HIV-1/HIV-2 antibody differentiation immunoassay. They recommend following a negative or indeterminate confirmation test with an HIV-1 nucleic acid test (NAT). The CDC no longer recommends the HIV-1 Western blot, a test previously used for diagnosis. Except in unusual circumstances, only patients with confirmed HIV should receive HIV resistance tests or HIV NAT. (See S8 Table in S1 File for additional details about HIV diagnostic tests).
Representation of guidelines as a graph.
As a first step to measuring adherence, the guidelines were modeled as a graph (Fig 1A). Nodes in the graph represented either an HIV test or the result of a specific HIV test, such as an HIV resistance test or a positive HIV screen, respectively. Edges in the graph connected tests recommended to be performed in sequence, with edges pointing from one test in the sequence to the next. For example, an edge connects a positive HIV screen node with a positive HIV confirmation node. The graphical model of the guidelines balanced the intent of the guidelines with modifications to simplify the measurement of adherence.
[Figure omitted. See PDF.]
A. Modeling of HIV Diagnostic Testing Guidelines as a Graph. Arrows connect HIV tests that should occur chronologically according to guidelines. The arrow points from the first test to the second. B. Nonadherence to HIV Diagnostic Testing Guidelines as a Graph. Solid lines denote observed adherence to guidelines (See Fig 1A). Dashed lines denote observed nonadherence to guidelines. The graph shows the 11 nonadherent edges with at least 1,000 observations, which comprise 84% (54,149/64,405) of the total nonadherent observations. Line thickness of dashed lines denotes the number of nonadherent tests. -, the test result is negative; +, the test result is positive; Screen, HIV-1/2 antigen/antibody combination immunoassay; confirm, HIV-1/HIV-2 antibody differentiation immunoassay; NAT, HIV-1 nucleic acid test (NAT); Resistance, HIV resistance test.
The graph intentionally contains a start node and an end node. The edge from the start node to the HIV screen node conveys the guideline’s recommendation to perform this test first. Similarly, the edges to the end node denote which tests the guideline considers appropriate to stop the diagnostic workup. After a negative HIV screen, for example, it is appropriate to stop the HIV workup, so an edge exists between the negative HIV screen node and the end node. In contrast, a patient lost to follow-up after a positive HIV screen would not adhere to the guidelines because they would benefit from an HIV confirmation test. The graph conveys this by the absence of an edge between a positive HIV screen and the end node. The inclusion of the start and end nodes allows the graph to contain important edges used later in the analysis.
HIV resistance tests are not diagnostic for HIV, but they are still included in the graph. After an HIV diagnosis is confirmed, the guidelines recommend the use of HIV resistance tests [17]. This explains the edge from a positive HIV confirmation to an HIV resistance test. Resistance tests are not a recommended part of HIV diagnosis, but researchers have documented their inappropriate utilization [14]. To differentiate between appropriate and inappropriate utilization of the HIV resistance test, we included the test, and the edges denoting adherence to the guidelines, in the graph.
The graph permits repeated HIV diagnostic workups. For example, a patient with a negative HIV screen may, perhaps after a potential exposure, undergo a second HIV screen, also with a negative result. The graph models this scenario by a “loop”, a special type of edge that points to the same node from where it originated. A loop can be found at the negative HIV screen node in the graph (Fig 1A).
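Concretely, a guideline graph like Fig 1A can be stored as a set of directed edges and queried for membership. The sketch below is a minimal Python illustration under our own shorthand node labels (e.g., "screen-" for a negative HIV screen); it is not the study's C#/T-SQL implementation, and the edge set is abridged relative to the full figure.

```python
# Hypothetical shorthand labels for the nodes of Fig 1A.
# Each tuple (a, b) is a directed edge: test/result a may be followed by b.
# (Abridged: the full Fig 1A model distinguishes more result nodes and edges.)
GUIDELINE_EDGES = {
    ("start", "screen-"), ("start", "screen+"),        # workup begins with a screen
    ("screen-", "end"),                                 # negative screen may end the workup
    ("screen-", "screen-"), ("screen-", "screen+"),     # loop: repeat screening after exposure
    ("screen+", "confirm+"), ("screen+", "confirm-"),   # positive screen requires confirmation
    ("confirm-", "nat"), ("nat", "end"),                # negative/indeterminate confirm -> NAT
    ("confirm+", "resistance"),                         # confirmed HIV -> resistance testing
    ("confirm+", "end"), ("resistance", "end"),
}

def is_adherent(edge):
    """An observed edge adheres to the guidelines iff it appears in the model."""
    return edge in GUIDELINE_EDGES
```

Adherence of any observed edge then reduces to set membership; note that the deliberate absence of ("screen+", "end") encodes that a workup should not stop after a positive screen.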
Modeling of patient HIV testing as a graph
Data source.
Study data originates from the Veterans Health Administration’s (VA) Corporate Data Warehouse (CDW), a relational database that aggregates medical data, including laboratory results, from 130 separate healthcare facilities [18]. These healthcare facilities are located across the continental United States, Alaska, Hawaii, and the Philippines. Nearly all facilities contributed identifiable data for the full duration of the study. We identified HIV laboratory tests and standardized their results, including checks for manual data entry errors, with previously published methods [19, 20].
The VA healthcare system maintains an HIV registry that contains a list of patients with known HIV, including their date of diagnosis. The HIV registry helped define the study population.
Study population.
The study population consists of patients who underwent HIV diagnostic testing at VA facilities between January 2015 and January 2019, inclusive (Fig 2). We excluded patients with known HIV because diagnostic testing guidelines did not apply to them. For example, a patient with known HIV may appropriately receive a test for HIV resistance at their first visit to our healthcare system.
[Figure omitted. See PDF.]
Comparison of patient HIV testing to guidelines
Data analysis.
To evaluate the adherence of HIV testing to guidelines, we assembled a directed graph to summarize the sequential HIV testing performed on patients within our healthcare system. We began by arranging the HIV tests performed for each patient in chronological order. Next, the chronologically arranged HIV tests were converted to edges, where the edge began at the first test and pointed to the next. Each patient had an edge from the start node to the first HIV test they received. Likewise, each patient had an edge from the last HIV test they received to the end node. The list of edges was aggregated to count the number of occurrences of each edge (e.g., an edge from the start node to negative HIV screen node occurred 10 times). The output, a table of edges and a count of their occurrence, was converted to a directed graph.
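The steps above — pad each patient's chronological test list with the start and end nodes, form consecutive pairs as edges, and count each edge across patients — can be sketched in Python (a simplified stand-in for the study's C#/T-SQL pipeline; the node labels are our shorthand):

```python
from collections import Counter

def patient_edges(tests):
    """Convert one patient's chronologically ordered tests into directed edges,
    padding the sequence with the synthetic start and end nodes."""
    path = ["start", *tests, "end"]
    return list(zip(path, path[1:]))

def aggregate(patients):
    """Count the occurrences of each edge across all patient-level graphs."""
    counts = Counter()
    for tests in patients:
        counts.update(patient_edges(tests))
    return counts
```

For example, `aggregate([["screen-"], ["screen-"]])` yields a count of 2 for the edge `("start", "screen-")` — the tabular form of edges and counts that was then converted to a directed graph.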
To determine adherence to the HIV Diagnostic Testing Guidelines, each edge was classified as adherent or nonadherent to the guidelines. We denoted the adherence of an edge to the guidelines by line style. An edge drawn with a solid line represented adherence, while an edge drawn with a dashed line represented nonadherence.
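Given the aggregated edge counts and a set of guideline edges, this classification is a partition by set membership. The following is a hedged Python sketch, not the original implementation:

```python
def classify(edge_counts, guideline_edges):
    """Partition observed edges into adherent (drawn with a solid line) and
    nonadherent (drawn with a dashed line) according to the guideline graph."""
    adherent = {e: n for e, n in edge_counts.items() if e in guideline_edges}
    nonadherent = {e: n for e, n in edge_counts.items() if e not in guideline_edges}
    return adherent, nonadherent
```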
The source code (i.e., C#, T-SQL) to conduct the analysis is available online [16]. Gephi and Inkscape were used to draw the graphs. S7 Fig in S1 File shows the data flow and manipulation.
Determination of significant findings.
To verify examples of nonadherence to the guidelines discovered through the graphical model, we manually reviewed patient charts. Fifty patients were reviewed for each type of nonadherence. Disagreements were adjudicated by HIV subject matter expert consensus. Inter-annotator agreement was reported by Cohen’s kappa.
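Cohen's kappa corrects raw percent agreement for the agreement expected by chance. A generic two-rater implementation might look like the following sketch; the labels in the test are hypothetical and are not the study's review data.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e is the agreement expected by chance."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    if p_e == 1:
        return 1.0  # both raters constant and identical; chance agreement is total
    return (p_o - p_e) / (1 - p_e)
```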
Consideration of absolute time
To construct a clinically meaningful model of guideline adherence with a directed graph, an important consideration is absolute time, in addition to the chronological sequence of tests (e.g., test 1 → test 2). By absolute time we mean the elapsed time measured in, for example, seconds or minutes, between one test and another. We modified the graphical model of guideline adherence to account for absolute time and improve its clinical interpretation. These considerations are detailed in the S1 File.
Cost estimation
The Centers for Medicare & Medicaid Services’ (CMS) 2018 Clinical Laboratory Fee Schedule provides costs for HIV tests [21]. The cost of nonadherent tests was determined by counting the number of individual tests which the graph identified as nonadherent to the guidelines.
Results
Study population description
Over 3.855 million patients underwent HIV testing in our healthcare system between 1999 and 2019 (Fig 2). The initial years (1999–2014) were used to identify patients with HIV prior to the study period as part of the HIV patient registry. The later years (2015–2019) contained 1.643 million patients who underwent HIV diagnostic testing after our healthcare system implemented the 2014 CDC guidelines. The demographics of these patients are included in Table 1. These patients received care, including 8.790 million HIV diagnostic test results, at 130 facilities.
[Figure omitted. See PDF.]
The population totaled 1,643,149 patients.
Assessment of adherence to HIV testing guidelines
The graphical analysis of the study population’s HIV testing produced 331 unique edges (14 adherent, 317 nonadherent). Many of the edges occurred infrequently with only 14 edges (3 adherent, 11 nonadherent) having over 1,000 occurrences (Fig 1B). On review of the nonadherent edges by test (e.g., HIV NAT), we found three recurring scenarios: (1) HIV NAT with or without an HIV screen, (2) HIV resistance testing used in place of an HIV screen, and (3) the performance of a confirmation test after a negative HIV screen. Cohen’s kappa for interrater agreement of adherence to guidelines was 0.78 (97% agreement; 97/100).
Scenario 1: HIV nucleic acid tests (NAT).
On manual chart review of the nonadherent edges involving HIV NATs, we found the majority (86%, 43/50) represented true nonadherence because the HIV NAT was used in combination with, or in lieu of, the HIV screen. Specifically, we did not find evidence sufficient to explain the utilization of HIV NATs, such as (1) the appropriate use of HIV NAT to diagnose acute HIV, (2) HIV NAT as a follow-up to a negative HIV confirmation test, or (3) HIV NAT performed in a patient with existing HIV. Of the 11 nonadherent edges with over 1,000 observations in the graph, 7 edges involved the HIV NAT. These edges had 23,728 occurrences from orders placed by 9,927 clinicians representing all 130 facilities.
Scenario 2: HIV resistance test used in place of an HIV screen.
On review of the nonadherent edges involving HIV resistance tests, we also found the majority (100%, 50/50) represented true nonadherence. The patients reviewed did not have existing HIV, the condition under which the guidelines recommend resistance testing. The one nonadherent edge with over 1,000 observations in the graph pointed from the start node to the HIV resistance node. This indicates an HIV resistance test was the first HIV test performed in these 1,644 patients. A total of 56% (73/130) of facilities had at least one observation of this edge. A few facilities accounted for 61% (1002/1644) of the observations, and clinicians who placed these orders could be identified within these facilities. When these clinicians were contacted by phone, they erroneously believed the HIV resistance test was the HIV screen and agreed to modify their HIV test ordering practices.
Scenario 3: combined HIV screen and confirmation tests.
Review of the medical charts of patients with an HIV screen and confirmation test performed together revealed many of these patients had received a 5th generation HIV screen. This test, in contrast to the 4th generation test, combines the HIV screen and HIV confirmation into a single test. The analysis classified this scenario as nonadherent because we did not anticipate it prior to our review of patient charts, but after review, we believe it represents adherence to guidelines.
The cost of guideline nonadherence
We identified the total number of nonadherent HIV tests: 16,567 (86% of 19,264) HIV NAT and 3,007 (100% of 3,007) HIV resistance tests. The CMS cost per test was $94.55 for the HIV NAT and $286.05 for the HIV resistance test. In total, the estimated cost of nonadherent testing was $2.427 million in 2018 United States dollars.
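As an arithmetic check, the $2.427 million figure follows directly from the counts and per-test fees above:

```python
NAT_COUNT, NAT_COST = 16_567, 94.55   # nonadherent HIV NATs, CMS 2018 fee per test
RES_COUNT, RES_COST = 3_007, 286.05   # nonadherent HIV resistance tests, CMS 2018 fee

total = NAT_COUNT * NAT_COST + RES_COUNT * RES_COST
print(f"${total:,.2f}")  # $2,426,562.20, i.e., ~$2.427 million
```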
Discussion
This graphical model uses a directed graph to model guidelines, an idea shared with the GuideLine Interchange Format (GLIF) and others [9, 22]. It also relies heavily on temporal relationships, which other authors have studied in depth in a medical context [23]. Unlike previous graphical methods, our graphical model (1) evaluates guideline adherence within a population instead of at the individual level, (2) assesses nonadherent clinical practice, in addition to adherent clinical practice, and (3) quantifies the impact of the observed nonadherence while identifying targets for intervention.
The examples of nonadherence from our analysis convey important lessons for reviews of guideline adherence. First, the strategy to reverse nonadherence may originate from the review itself. Nonadherence limited to relatively few facilities or clinicians suggests a simple intervention (e.g., phone call) to reverse course, such as with nonadherent HIV resistance tests (Figs 3 and 4). Nonadherence to HIV NAT affected a larger proportion of the health system and required a more intensive intervention (e.g., systemwide campaign). Second, the availability of structured data does not obviate the need for manual chart review. Through manual chart review, we identified facilities utilizing the 5th generation HIV test, which we did not consider in the model.
[Figure omitted. See PDF.]
Total tests performed at outlier facilities before and after an intervention (gray; continuous piecewise linear spline). The red vertical line represents the timing of the intervention.
[Figure omitted. See PDF.]
The most common shared edges are shown before (A) and after (B) the intervention.
Strengths of modeling guideline adherence as a graph
Modeling guideline adherence as a graph has multiple strengths. First, the graph is easy to understand. The method does not involve advanced mathematics (e.g., algebra, calculus) [24]. Clinicians generally appreciate the graph as a model of the expected (Fig 1A) and observed (Fig 1B) pattern of diagnostic testing for HIV after a brief orientation.
Second, although simple to understand, the graph can represent a complex process. Non-recommended diagnostic tests (e.g., HIV Western blot), in addition to the recommended tests, are included in the graph of expected HIV diagnostic testing. It also conveys inappropriate tests to start and stop a diagnostic workup. For example, an HIV diagnostic workup should not end after a positive HIV screen; an HIV confirmation is needed. Finally, the graph models repeated workups, such that a patient could become HIV positive after a negative HIV screen because an arrow points to a positive HIV screen from a negative HIV screen (Fig 1A).
Third, the process scales easily to large populations. Our healthcare system is the largest integrated healthcare system in the United States, and we conducted this analysis on a commodity desktop PC. Most applications will not require expensive computer hardware and may be repeatedly run as part of a plan-do-check-act (PDCA) cycle to support iterative quality improvement. Even as it scales to large populations, it remains highly specific, identifying individual clinicians who performed inappropriate HIV tests in a healthcare system with over 10,000 clinicians.
Limitations of modeling guideline adherence as a graph
We encountered certain difficulties when modeling guideline adherence as a graph. First, we had to balance graph accuracy with interpretability. For example, the consideration of absolute time created additional nodes and edges, increasing the complexity of the graph. (See S1 File–Consideration of Absolute Time.) The incorporation of absolute time increased the model’s accuracy, so we tolerated its increased complexity. As a second example, we chose to exclude indeterminate HIV confirmation results from the model because they happened too rarely to provide a benefit.
Second, the development of graphical guideline adherence models may require iterative revision. On review of the current model, we became aware of 5th generation HIV testing, which performs the HIV screen and confirmation in a single test. The current HIV diagnostic guidelines describe the sequential performance of an HIV confirmation only after a positive HIV screen. To distinguish between 5th generation HIV testing and the incorrect performance of an HIV confirmation after a negative HIV screen, the model would require a revision (i.e., a new node to represent a 5th generation HIV test).
In the future, we plan to build an interface between this data analysis method and a standardized observational data model [25].
Conclusion
We developed a graphical model to determine if complex medical scenarios adhered to established guidelines. The model applies to patient populations, rather than individuals. With an observational database as the input, we demonstrated the method via an electronic, retrospective review of HIV diagnostic testing in over one million patients. The method identified >20,000 occurrences of inappropriate utilization of the HIV NAT and HIV resistance tests, which cost an estimated $2.427 million. This led to systemwide changes in policy to reduce nonadherent orders and enhance detection of HIV. This approach is in no way specific to HIV and may be applied to diverse medical scenarios.
Supporting information
S1 File. Graphical analysis of guideline adherence to detect systemwide anomalies in HIV diagnostic testing.
https://doi.org/10.1371/journal.pone.0270394.s001
(DOCX)
Acknowledgments
We would like to acknowledge Joanna Moran, who helped us obtain data related to the HIV registry.
Citation: Hauser RG, Bhargava A, Brandt CA, Chartier M, Maier MM (2022) Graphical analysis of guideline adherence to detect systemwide anomalies in HIV diagnostic testing. PLoS ONE 17(7): e0270394. https://doi.org/10.1371/journal.pone.0270394
About the Authors:
Ronald George Hauser
Roles: Conceptualization, Supervision, Writing – original draft, Writing – review & editing
E-mail: [email protected]
Affiliations Veterans Affairs Connecticut Healthcare System, West Haven, CT, United States of America, Department of Laboratory Medicine, Yale University School of Medicine, New Haven, CT, United States of America
https://orcid.org/0000-0002-7361-7162
Ankur Bhargava
Roles: Visualization, Writing – original draft, Writing – review & editing
Affiliations Veterans Affairs Connecticut Healthcare System, West Haven, CT, United States of America, Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States of America
https://orcid.org/0000-0002-9382-1355
Cynthia A. Brandt
Roles: Project administration, Writing – review & editing
Affiliations Veterans Affairs Connecticut Healthcare System, West Haven, CT, United States of America, Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States of America
Maggie Chartier
Roles: Resources, Writing – original draft, Writing – review & editing
Affiliation: Office of Specialty Care Services, Veterans Health Administration, Washington, DC, United States of America
Marissa M. Maier
Roles: Data curation, Resources, Writing – original draft, Writing – review & editing
Affiliations Veterans Affairs Portland Health Care System, Portland, OR, United States of America, Division of Infectious Diseases, Oregon Health and Sciences University, Portland, OR, United States of America
1. Centers for Disease Control and Prevention. National HIV Testing Day and new testing recommendations. MMWR Morb Mortal Wkly Rep. 2014;63(25):537.
2. Amukele TK, Baird GS, Chandler WL. Reducing the use of coagulation test panels. Blood Coagul Fibrinolysis. 2011;22(8):688–695. pmid:21934488
3. Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. J Am Med Inform Assoc. 2009;16(6):869–873. pmid:19717797
4. Erstad TL. Analyzing computer based patient records: a review of literature. J Healthc Inf Manag. 2003;17(4):51–57. pmid:14558372
5. Hanauer DA, Englesbe MJ, Cowan JA Jr., Campbell DA. Informatics and the American College of Surgeons National Surgical Quality Improvement Program: automated processes could replace manual record review. J Am Coll Surg. 2009;208(1):37–41. pmid:19228500
6. Hauser RG, Shirts BH. Do we now know what inappropriate laboratory utilization is? An expanded systematic review of laboratory clinical audits. Am J Clin Pathol. 2014;141(6):774–783. pmid:24838320
7. Patterson OV, Freiberg MS, Skanderson M, S JF, Brandt CA, DuVall SL. Unlocking echocardiogram measurements for heart disease research through natural language processing. BMC Cardiovasc Disord. 2017;17(1):151. pmid:28606104
8. Friedman C, Hripcsak G. Natural language processing and its future in medicine. Acad Med. 1999;74(8):890–895. pmid:10495728
9. Ohno-Machado L, Gennari JH, Murphy SN, et al. The guideline interchange format: a model for representing guidelines. J Am Med Inform Assoc. 1998;5(4):357–372. pmid:9670133
10. Shiffman RN, Karras BT, Agrawal A, Chen R, Marenco L, Nath S. GEM: a proposal for a more comprehensive guideline document model using XML. J Am Med Inform Assoc. 2000;7(5):488–498. pmid:10984468
11. Sordo M, Ogunyemi O, Boxwala AA, Greenes RA. GELLO: an object-oriented query and expression language for clinical decision support. AMIA Annu Symp Proc. 2003;2003:1012.
12. Kalra D, Beale T, Heard S. The openEHR foundation. Studies in health technology and informatics. 2005;115:153–173. pmid:16160223
13. Branson BM, Owen SM, Wesolowski LG, et al. Laboratory testing for the diagnosis of HIV infection: updated recommendations. Center for Disease Control and Prevention (CDC). 2014.
14. Cane PA, Kaye S, Smit E, et al. Genotypic antiretroviral drug resistance testing at low viral loads in the UK. HIV Med. 2008;9(8):673–676. pmid:18557948
15. Koller D, Friedman N. Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA: MIT Press; 2009.
16. Hauser RG. Graphical Guidelines Source Code. https://github.com/hauserrg/GraphicalGuidelines. Published 2019. Updated 04/29/2019. Accessed 04/29/2019.
17. AIDSinfo (U.S. Department of Health and Human Services). Guidelines for the Use of Antiretroviral Agents in Adults and Adolescents with HIV. https://aidsinfo.nih.gov/guidelines/html/1/adult-and-adolescent-arv/0. Updated 10/25/2018. Accessed 05/17/2019.
18. Pham R. VistA Internals, CPRS, and The VA Corporate Data Warehouse. https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/2296-notes.pdf. Accessed February 28, 2019.
19. Hauser RG. Making Laboratory Data Accessible: Pitfalls and Solutions in working with CDW Data. VPR (Using Data and Information Systems in Partnered Research) Web site. http://www.hsrd.va.gov/for_researchers/cyber_seminars/archives/video_archive.cfm?SessionID=2375. Published 2017. Updated 17 Oct 2017. Accessed 02/08/2019.
20. Hauser RG, Quine DB, Ryder A. LabRS: A Rosetta stone for retrospective standardization of clinical laboratory test results. J Am Med Inform Assoc. 2018;25(2):121–126. pmid:28505339
21. U.S. Centers for Medicare & Medicaid Services. Clinical Laboratory Fee Schedule. https://www.cms.gov/Medicare/Medicare-fee-for-service-Payment/clinicallabfeesched/index.html. Published 2019. Accessed 04/29/2019.
22. Weiss SM, Kulikowski CA, Amarel S, Safir A. A model-based method for computer-aided medical decision-making. Artificial Intelligence. 1978;11(1):145–172.
23. Combi C, Shahar Y. Temporal reasoning and temporal data maintenance in medicine: issues and challenges. Comput Biol Med. 1997;27(5):353–368. pmid:9397339
24. Hauser RG, Jackson BR, Shirts BH. A bayesian approach to laboratory utilization management. J Pathol Inform. 2015;6:10. pmid:25774321
25. Stang PE, Ryan PB, Racoosin JA, et al. Advancing the science for active surveillance: rationale and design for the Observational Medical Outcomes Partnership. Ann Intern Med. 2010;153(9):600–606. pmid:21041580
This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication: https://creativecommons.org/publicdomain/zero/1.0/.
Abstract
Background
Analyses of electronic medical databases often compare clinical practice to guideline recommendations. These analyses have a limited ability to simultaneously evaluate many interconnected medical decisions. We aimed to overcome this limitation with an alternative method and apply it to the diagnostic workup of HIV, where misuse can contribute to HIV transmission, delay care, and incur unnecessary costs.
Methods
We used graph theory to assess patterns of HIV diagnostic testing in a national healthcare system. We modeled the HIV diagnostic testing guidelines as a directed graph. Each node in the graph represented a test, and the edges pointed from one test to the next in chronological order. We then graphed each patient’s HIV testing. This set of patient-level graphs was aggregated into a single graph. Finally, we compared the two graphs, the first representing the recommended approach to HIV diagnostic testing and the second representing the observed patterns of HIV testing, to assess for clinical practice deviations.
Results
The HIV diagnostic testing of 1.643 million patients provided 8.790 million HIV diagnostic test results for analysis. Significant deviations from recommended practice were found, including the use of HIV resistance tests (n = 3,007) and HIV nucleic acid tests (n = 16,567) instead of the recommended HIV screen.
Conclusions
We developed a method that modeled a complex medical scenario as a directed graph. When applied to HIV diagnostic testing, we identified deviations in clinical practice from guideline recommendations. The model enabled the identification of intervention targets and prompted systemwide policy changes to enhance HIV detection.