Building a Breast Cancer Learning Health System

Introduction: Data generated through day-to-day clinical practice hold valuable insights into the impact of treatments on patient outcomes. This was recognized by the Institute of Medicine in its description of the Learning Healthcare System (LHS): a continuous feedback loop in which scientific evidence informs clinical practice, while data gathered from clinical practice and administrative sources inform care and scientific investigation. In reality, the latter part of the loop is often missing, and such information is not readily available. Clinical notes are a potential treasure trove of health information, but the data are often siloed and/or unstructured, making them unsuitable for analysis with conventional statistical techniques. Existing methods of capturing patient experience and outcome data (e.g., manual retrospective chart review) are time-consuming and limited in the scale and quality of data they can capture. This research aims to develop a data platform that enables an LHS by applying Artificial Intelligence (AI) to clinical documents to characterize the clinical course of breast cancer patients.

Methods: This study was conducted at the Hamilton Health Sciences (HHS) Juravinski Cancer Centre (JCC), which provides treatment and patient support services to a population of 2.5 million people in southern Ontario. Breast cancer patients seen at the JCC between 2014 and 2018 with at least two years of follow-up were included in the study cohort. The data platform comprised structured and unstructured data extracted using Microsoft SSIS from five repositories: Meditech (imaging results, laboratory results, hospital notes, and discharge summaries), Mosaic® (JCC outpatient clinic notes, radiation prescriptions), the Hamilton Regional Laboratory Medicine Program (pathology), the Edmonton Symptom Assessment Scale (toxicity and quality of life), and OPIS (chemotherapy). An AI engine, DARWEN™, was deployed within the JCC to automate data abstraction from unstructured clinical patient documents. DARWEN™ is an AI extraction system in which extraction models are tuned and tested against a reference standard based on manual chart review, with clinical consultation to resolve semantic issues and clinical complexities. Data abstracted by the AI engine were populated alongside the structured data extracts into a longitudinal, patient-oriented data warehouse, updated on a nightly basis. We used two strategies for AI quality assurance: 1) for variables where structured data were not available for any patients, AI output was compared to manual chart abstraction by two cancer surgeons; 2) for variables where structured data were available for some patients but not all, AI was used to fill in the gaps. For example, for the tumor biomarkers estrogen receptor (ER), progesterone receptor (PR), and HER2, structured data were available via synoptic report for a portion of patients, and AI was used to fill in for patients for whom a synoptic report was not available; for these variables, AI output was compared to the synoptic report.
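
The sketch below is a minimal illustration of the biomarker gap-filling and QA comparison described above, not the actual DARWEN™ or SSIS implementation; the record layout, field names, and functions are hypothetical assumptions made for clarity.

```python
# Hypothetical sketch of the biomarker gap-filling and QA strategy described
# in the Methods; field names and data layout are assumptions, not the
# actual DARWEN(TM) or SSIS pipeline.
from dataclasses import dataclass
from typing import Optional


@dataclass
class BiomarkerRecord:
    patient_id: str
    synoptic_er: Optional[str]  # ER status from the synoptic pathology report, if available
    ai_er: Optional[str]        # ER status extracted by the AI engine from free-text documents


def consolidate_er(record: BiomarkerRecord) -> Optional[str]:
    """Prefer the structured synoptic value; fall back to the AI extraction."""
    return record.synoptic_er if record.synoptic_er is not None else record.ai_er


def qa_agreement(records: list[BiomarkerRecord]) -> float:
    """Agreement between AI output and the synoptic report where both exist."""
    paired = [(r.synoptic_er, r.ai_er) for r in records
              if r.synoptic_er is not None and r.ai_er is not None]
    if not paired:
        return float("nan")
    matches = sum(1 for truth, pred in paired if truth == pred)
    return matches / len(paired)


# Toy example: one patient lacks a synoptic report, so the AI value fills the gap.
records = [
    BiomarkerRecord("p1", synoptic_er="positive", ai_er="positive"),
    BiomarkerRecord("p2", synoptic_er=None, ai_er="negative"),
    BiomarkerRecord("p3", synoptic_er="negative", ai_er="negative"),
]
warehouse_values = {r.patient_id: consolidate_er(r) for r in records}
print(warehouse_values)       # {'p1': 'positive', 'p2': 'negative', 'p3': 'negative'}
print(qa_agreement(records))  # 1.0 (agreement on the two patients with both sources)
```

In a nightly warehouse refresh, a consolidation step of this kind would run per variable, keeping the structured source authoritative and using AI output only where structured data are missing.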

Results: The cohort consisted of 2339 patients, who were primarily female (n=2320), with a median age of 61 (range: 24-97). Of these patients, 1502 (63.2%) had breast conservation surgery, 840 (35.2%) had a mastectomy, and 527 (22.2%) underwent modified radical mastectomy. In addition, 1175 (49.4%) patients underwent either an axillary node dissection or sentinel node biopsy, of whom 436 (37.1%) were node-positive (320 (27.2%) with 1-3 positive nodes and 116 (9.9%) with ≥4 positive nodes). AI accuracy varied somewhat by data type. For example, quality assurance of AI demonstrated an F1 score of 0.95 for ER status (n=1094), 0.92 for PR status (n=1094), and 0.83 for HER2 status (n=946). For the QA done by surgeons, the manual chart abstractions took a mean of 20 minutes per chart, suggesting manual review of the entire cohort would take ~800 hours. In contrast, once the initial development of the AI models was complete, AI processing of all 2339 patient records was completed in ~8 hours on a 4-core Intel Gold 6248 CPU @ 2.50 GHz server. Upfront time savings using AI were relatively modest due to the time required for model development and tuning, but the ongoing savings are considerable: a subsequent data extraction for the 3,464 new patients seen at the JCC between 2019 and June 2022 was completed in ~12 hours.
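
For context on the reported figures, the snippet below reproduces the arithmetic behind the ~800-hour manual-review estimate and shows how a per-variable F1 score can be computed against a reference standard; it uses scikit-learn's f1_score with toy labels and is an illustrative assumption, not the study's actual QA code.

```python
# Illustrative reproduction of the time estimate and per-variable F1 computation;
# the labels below are toy data, not study results.
from sklearn.metrics import f1_score

# Manual review estimate: ~20 minutes per chart across 2339 patients.
n_patients = 2339
minutes_per_chart = 20
total_hours = n_patients * minutes_per_chart / 60
print(f"Estimated manual review time: {total_hours:.0f} hours")  # ~780 hours (rounded to ~800)

# Per-variable QA: compare AI-extracted labels to the reference standard.
reference = ["positive", "positive", "negative", "negative", "positive"]
ai_output = ["positive", "negative", "negative", "negative", "positive"]
print(f1_score(reference, ai_output, pos_label="positive"))  # 0.8 on this toy example
```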

Conclusion: This study has demonstrated that it is possible to automate the integration of AI-extracted clinical data derived from documents across patient records to support a functional LHS. This comprehensive system will empower clinicians to leverage high-quality real-world data to supplement clinical decision-making and research efforts.
