
Electronic Medical Records for Genetic Research: Results of the eMERGE Consortium


Science Translational Medicine  20 Apr 2011:
Vol. 3, Issue 79, pp. 79re1
DOI: 10.1126/scitranslmed.3001807

Abstract

Clinical data in electronic medical records (EMRs) are a potential source of longitudinal clinical data for research. The Electronic Medical Records and Genomics Network (eMERGE) investigates whether data captured through routine clinical care using EMRs can identify disease phenotypes with sufficient positive and negative predictive values for use in genome-wide association studies (GWAS). Using data from five different sets of EMRs, we have identified five disease phenotypes with positive predictive values of 73 to 98% and negative predictive values of 98 to 100%. Most EMRs captured key information (diagnoses, medications, laboratory tests) used to define phenotypes in a structured format. We identified natural language processing as an important tool to improve case identification rates. Efforts and incentives to increase the implementation of interoperable EMRs will markedly improve the availability of clinical data for genomics research.

Introduction

Electronic medical records (EMRs) have been promoted as essential to improving healthcare quality (1–4). Although current adoption rates remain low, recent government efforts may markedly increase the use of EMRs in clinical settings (5–9). The U.S. Centers for Medicare and Medicaid Services recently finalized a definition for “meaningful use” of EMRs, which defines standards for the recording and use of data in EMRs to promote quality care (10, 11). This standard, coupled with significant financial incentives and penalties, is intended to promote widespread adoption of EMRs within the U.S. healthcare system.

Understanding the strengths and limitations of current EMR data capture is crucial for identifying linkages between disease susceptibility and clinical presentation. In clinical care, EMRs serve to document clinical observations and patient-provider interactions and generate billing documentation. Clinical data captured in EMRs may have a secondary application in the research setting. In parallel with increasing EMR adoption, high-throughput DNA sequencing has made available millions of DNA sequence reads for genetic investigations (12). Understanding the current feasibility of linking clinical data captured in EMRs and genome sequencing data has important implications for genetics research and the promise of personalized medicine (13–15).

Genome-wide association studies (GWAS) require accurate classification of disease phenotypes to maintain adequate statistical power (16, 17). The Electronic Medical Records and Genomics Network (eMERGE) (18) aims to determine whether data captured through routine clinical care using EMRs can identify disease phenotypes with sufficient positive and negative predictive values (PPVs and NPVs) for application in GWAS. If successful, identification of disease phenotypes using EMR data may enable efficient and rapidly scalable genetic research. Specifically, greater efficiency may be gained by undertaking genome-wide single-nucleotide polymorphism (SNP) genotyping only once. EMR data may be used to determine whether the individual associated with each DNA sample is used as a case, a control, or neither for multiple phenotypes, facilitating a GWAS for each phenotype. The marginal cost of each GWAS after the initial genotyping expense is then limited to the costs involved with establishing and validating the operational EMR-based phenotype definition and the costs of performing the association analyses. Indeed, as genotyping costs continue to rapidly decline, efficient and cost-effective means to identify phenotypic data from EMRs takes on increasing importance (19).

However, it is unclear whether current EMR implementation captures clinical data adequately to identify patients for research aimed at identifying the genetic basis of disease susceptibility. The eMERGE consortium has a unique opportunity to evaluate the utility of current EMRs for genomic research and to identify key areas for improvement. Here, we determine whether data recorded in EMRs for routine clinical care at five U.S. study sites can be used to define phenotypes for genomic research, and discuss the challenges and lessons learned in using data extracted from existing EMRs for GWAS.

Results

We analyzed EMR data collected from five eMERGE study sites to identify cases with one of five different disease phenotypes: dementia, cataracts, peripheral arterial disease, type 2 diabetes, and cardiac conduction defects. Table 1 lists the primary phenotypes, biorepository description, and EMR characteristics for each study site. Three sites used an internally developed EMR system for both inpatient and outpatient care; the remaining two sites used commercial EMR systems. One site used different EMR systems for inpatient and outpatient care. Some EMR systems captured data primarily from free-text documents (unstructured data), and others from a mix of structured data collection and free-text notes. Three sites used robust but different natural language processing (NLP) tools to extract structured data from free-text reports (20–25). Each study site had a separate DNA biorepository (linked to the EMR through a unique research identifier) to house biological samples for genotyping (26–29). With a single exception, all sites used an opt-in consent model to recruit participants into the biorepository. For our purposes, we analyzed only patients with records in both the institution’s biorepository and EMR.

Table 1

Comparison of electronic medical records (EMRs) and biorepositories at five eMERGE institutions. GHC, Group Health Cooperative; NLP, natural language processing.


Across study sites, defining the clinical gold standard for the five selected disease phenotypes most commonly required only one category of data (for example, diabetes could be defined by laboratory tests alone, and peripheral arterial disease by a single radiological test), with one condition requiring two categories (Table 2). However, algorithms to identify the same phenotypes using EMR data drew on one to four categories of data (for example, diagnostic information, medications, and laboratory tests), with additional data categories required to identify covariates and exclusion criteria. In the example of type 2 diabetes, the EMR-derived phenotype required diagnoses, laboratory tests, and medications to identify likely type 2 diabetes cases and used diagnoses to specifically exclude cases of type 1 diabetes. All sites used demographic, diagnosis, and medication data in their phenotype definitions.
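To make the structure of such a multi-category definition concrete, the sketch below expresses a type 2 diabetes rule in Python. It is an illustration only, not any site’s actual algorithm; the record fields, diagnosis codes, drug names, and laboratory thresholds are assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PatientRecord:
    """Minimal stand-in for data extracted from an EMR (hypothetical fields)."""
    diagnosis_codes: List[str] = field(default_factory=list)  # e.g., ICD-9 codes
    medications: List[str] = field(default_factory=list)      # normalized drug names
    max_glucose_mg_dl: float = 0.0                             # highest recorded plasma glucose
    max_hba1c_pct: float = 0.0                                 # highest recorded hemoglobin A1c

T2DM_CODES = {"250.00", "250.02"}        # illustrative type 2 diabetes ICD-9 codes
T1DM_CODES = {"250.01", "250.03"}        # illustrative type 1 diabetes ICD-9 codes
T2DM_DRUGS = {"metformin", "glipizide"}  # illustrative oral hypoglycemic agents

def is_probable_t2dm_case(p: PatientRecord) -> bool:
    """Combine diagnoses, medications, and laboratory data; exclude type 1 diabetes."""
    if any(code in T1DM_CODES for code in p.diagnosis_codes):
        return False  # exclusion criterion: evidence of type 1 diabetes
    has_dx = any(code in T2DM_CODES for code in p.diagnosis_codes)
    has_rx = any(drug in T2DM_DRUGS for drug in p.medications)
    has_lab = p.max_hba1c_pct >= 6.5 or p.max_glucose_mg_dl >= 200.0
    # Require corroboration from at least two independent data categories.
    return sum([has_dx, has_rx, has_lab]) >= 2

# Example: a record with a diagnosis code and a qualifying laboratory value.
example = PatientRecord(diagnosis_codes=["250.00"], max_hba1c_pct=7.1)
print(is_probable_t2dm_case(example))  # -> True
```

In practice, each site tailored which categories were combined, and how, to its own EMR and validated the resulting algorithm against chart review.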

Table 2

Data categories for defining a clinical gold standard, an EMR-derived phenotype, and covariates and exclusion criteria.


The three study sites using internally developed, text-based EMRs required significant NLP efforts to extract concepts from free-text documents, with each using a different NLP platform. At these study sites, NLP tools enabled disease phenotype definitions using data stored in unstructured clinical notes (for example, ophthalmological examinations) and text-based reports [for example, radiology test results and electrocardiogram (ECG) reports]. Sites without NLP tools or experience limited their phenotype definitions to data available in a structured format, and therefore readily extractable from the EMR.
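The consortium sites relied on full-featured NLP platforms; as a deliberately simplified illustration of extracting a structured value from a text-based report, the snippet below pulls a QRS duration out of a free-text ECG report with a regular expression. The report wording and the pattern are assumptions made for the example, not part of any site’s pipeline.

```python
import re
from typing import Optional

# Illustrative pattern only: real ECG reports vary widely in wording and layout,
# which is why the study sites relied on dedicated NLP platforms.
QRS_PATTERN = re.compile(r"QRS\s*(?:duration)?\s*[:=]?\s*(\d{2,3})\s*ms", re.IGNORECASE)

def extract_qrs_duration_ms(report_text: str) -> Optional[int]:
    """Return the QRS duration (in ms) mentioned in a free-text ECG report, if any."""
    match = QRS_PATTERN.search(report_text)
    return int(match.group(1)) if match else None

# Example usage with a made-up report sentence.
report = "Normal sinus rhythm. QRS duration: 96 ms. No acute ST changes."
print(extract_qrs_duration_ms(report))  # -> 96
```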

Across all five study sites, the percent of data captured and stored in a structured format consistently met or exceeded the current “meaningful use” final rule requirements (that is, goals for structured data capture and use defined by the Office of the National Coordinator to promote quality improvement using EMRs), with the notable exception of allergies and smoking status. Height, weight, and race/ethnicity, although satisfying the meaningful use requirements, demonstrated varying capture rates across sites. Only one institution with a vendor-based EMR had any data on family history stored in a structured format. At other sites, family history information was stored only in clinician notes and this information could not be extracted readily even with NLP. To define study phenotypes, no site required data categories with low rates of capture in the EMRs (allergies, family history).

Despite variations in categories and completeness of data capture across sites (Table 3), four of the five study sites achieved PPVs of close to 100% for use of EMR data alone to identify their primary disease phenotype (Table 4). One site achieved a lower PPV of 73% using EMR data to identify cases with dementia. Absolute numbers of cases identified by EMR ranged from 747 to 2950 cases. Of sites with unselected noncohort biorepositories, rates of case identification ranged from 3.6 to 13.4% of the total eligible population. Sites using disease-specific biorepositories had case identification rates of 26.8 to 50.3% of the total population, which, after excluding known controls, represented 71 and 90% of the cases identified through prospective cohort collection. NPVs ranged from 98 to 100% for the three sites generating control cases using electronic algorithms.

Table 3

Data completeness and type by common clinical categories. Meaningful use goal for % data recorded in EMR and type listed for comparison. S, structured data; U, unstructured data; M, mixture of structured and unstructured data; GHC, Group Health Cooperative; MCRF, Marshfield Clinic Research Foundation; Mayo, Mayo Clinic; NU, Northwestern University; VU, Vanderbilt University.

Table 4

Performance of algorithms to identify cases and controls from EMRs for five primary phenotypes. GHC, Group Health Cooperative; MCRF, Marshfield Clinic Research Foundation; Mayo, Mayo Clinic; NU, Northwestern University; VU, Vanderbilt University.


To assess the additional benefit of NLP, we compared, at one site (Vanderbilt University), the number of cases identified using structured data alone with the number identified using both structured data and NLP. At this site, the use of NLP tools identified 129% more cases for the QRS duration phenotype (2950 versus 1288) than did structured data and string matching alone, while maintaining a PPV of 97%.

Discussion

In our study, data captured in EMRs for routine clinical care proved adequate to define five disease phenotypes across five different study sites with robust PPV and NPV. Encouragingly, several recent reports (30–32) demonstrate that GWAS based on EMR-derived phenotypes successfully replicated identification of genetic sequences associated with increased disease risk. Although we could achieve high PPVs using case identification algorithms based on data captured through routine clinical care, we note some attrition in the number of cases identified by this approach compared with disease-focused prospective case identification. In our study, electronic algorithms identified 71 and 90% of the possible cases within two prospectively collected disease cohorts. Reduction in case identification rates may be compensated for by the efficiency and scalability of electronic algorithms across EMRs.

Across the five unique EMRs, diagnosis codes, medications, and laboratory tests were readily extracted to identify phenotypes for GWAS. Race/ethnicity, family history, exposure history (for example, smoking), and environmental exposures were documented less frequently across all EMRs and, where present, often were captured in free-text form (for example, clinician notes) and without consistent or standard nomenclature. Capturing interpreted test results that are typically not recorded as structured data elements (for example, arterial Doppler and ECG data) and clinician diagnoses (such as found on a problem list) generally required NLP. As a result, significant informatics efforts were required to tailor algorithms to each institution’s EMR to accurately identify each phenotype.

Both “home-grown” and commercial EMRs demonstrated high PPV rates across the primary phenotypes. Given the far wider population using commercial EMRs in routine clinical care, this finding suggests potential for broad dissemination of our approach to identify cases and controls for genetic analyses to achieve well-powered studies, although the impact of differences among commercial EMR systems is unclear. Regardless of EMR type, study sites leveraged strengths in EMR data quality and site-specific data extraction methods to optimize phenotyping algorithms, often using data categories with a high proportion of structured data at sites without NLP capacity.

Historically, institutions with significant free-text documentation in their EMRs developed or adapted robust NLP tools to extract data for further analysis (20, 33, 34). NLP enabled sites to improve case finding by searching across a wider range of EMR data categories. The observation that NLP tools identified 129% more cases than structured data and string matching alone emphasizes the value of information captured in free text and is consistent with previous studies (35–37). As a consortium, eMERGE identified use of NLP to extract data from text documents as a critical tool to improve data quality for phenotyping. Sites with NLP experience shared best practices with other consortium sites to develop NLP capacity at all sites. However, in our study, even sites without NLP tools successfully identified their primary phenotype, and one site successfully replicated previously identified genotype-phenotype associations for five diseases, including type 2 diabetes (31). Certain phenotype identification algorithms, such as those for type 2 diabetes, were implemented without use of sophisticated NLP; other algorithms, such as those for identifying cardiac conduction problems, were implemented with a combination of NLP and structured data extraction. This variation reflected institutional informatics capacity and a bias toward selection of phenotypes using data captured in structured formats at sites without NLP capacity. Sites without NLP capacity may be limited to identifying phenotypes using only data categories captured in structured fields. Approaches using only structured data could still achieve comparable PPVs, but would have lower case identification rates. However, efficient access to data across the entire spectrum of clinical EMRs can compensate for lower identification rates to identify adequate numbers for genetic studies.

Some data categories consistently reflected low rates of structured data capture (Table 3). The EMRs in this study used Office of Management and Budget categories for race/ethnicity (38). Here, low rates of documentation of race and ethnicity in the EMRs are consistent with previous studies of routine physician practice (39). However, lower rates of race and ethnicity documentation in EMRs may not significantly affect subsequent genetic studies. For genetic studies, ancestry estimates derived from genotype data are often used in primary association analyses rather than self-reported race/ethnicity, although the latter adds important sociocultural information independent of genetic ancestry that may be useful in more refined analyses (40). Similarly, in our study, family history was primarily documented in clinician notes and was not readily extracted even with NLP tools. One site with a vendor-based EMR featured a family history section enabling a mixture of structured and unstructured data capture, but rates of physician documentation in it were low. Our findings are consistent with previous studies, although current efforts are under way to promote standardized collection of key elements of family history within EMRs (41–43).

Environmental exposures play a significant role in expression of disease in genetically susceptible populations (44–47). Unfortunately, environmental factors, such as exposure to environmental toxins or contaminants, are rarely captured in existing EMRs, with the notable exception of smoking status. Substantial improvements in methods to collect and link environmental data to clinical data in EMRs may enable future studies of the association between disease and environment (48).

In our chart review, we identified a number of common data quality issues. Foremost, the absence of information may not reflect the absence of the condition. Depending on the institution, significant care might be rendered at outside institutions and therefore would not appear in the study site’s EMR. To address this limitation, we defined minimum data requirements (for example, two documented clinical visits) to enhance the opportunity for clinical documentation beyond a single visit. We encountered instances of structured results violating acceptable ranges of possibility (for example, a weight of 1000 kg and a height of 15 cm), requiring post-extraction censoring of impossible values. Lack of data equivalency posed challenges in merging data within a single EMR and across EMRs. Often, data are imprecisely labeled such that different measures might be inappropriately mixed together. For example, laboratory tests with similar names (such as glucose) might represent different tests (blood glucose concentration versus urine glucose concentration). Similarly, diagnostic certainty differed depending on whether the diagnoses were entered in clinical notes or for billing purposes and differed across sites due to local billing practices (49). We identified use of data standards for EMR documentation as a necessary foundation to improve data quality and achieve data equivalence across sites. As a consortium, we used the federally endorsed Consolidated Health Informatics (CHI) standards (LOINC, ICD9/SNOMED, and RxNorm) to promote data equivalency and facilitate data sharing between sites (50–52). Phenotyping algorithms most commonly included diagnosis codes, medications, and laboratory tests, which are well covered by the CHI standards ICD9, RxNorm, and LOINC, respectively.
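As a minimal sketch of the post-extraction cleaning described above, the snippet below censors physiologically impossible values and maps site-local laboratory labels to a shared code before data are merged. The plausibility limits and mapping entries are illustrative assumptions, not the consortium’s actual rules.

```python
from typing import Optional

# Illustrative plausibility limits used to censor impossible values after extraction.
PLAUSIBLE_RANGES = {
    "weight_kg": (1.0, 500.0),
    "height_cm": (30.0, 250.0),
}

def censor_implausible(field_name: str, value: float) -> Optional[float]:
    """Return the value if it lies inside the plausible range, otherwise None."""
    low, high = PLAUSIBLE_RANGES[field_name]
    return value if low <= value <= high else None

# Illustrative mapping of site-local laboratory labels to a shared code so that
# similarly named but different tests are not merged across sites.
LOCAL_LAB_TO_LOINC = {
    "glucose, serum": "2345-7",              # LOINC code for glucose in serum or plasma
    "glucose, urine": "URINE-GLUCOSE-CODE",  # placeholder: substitute the site's actual code
}

print(censor_implausible("weight_kg", 1000.0))  # -> None (impossible value is censored)
print(LOCAL_LAB_TO_LOINC["glucose, serum"])     # -> 2345-7
```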

Our study sites represented academic medical centers or institutions with significant research programs and may have a greater focus on rigorous data collection for potential future research, limiting the generalizability of our findings to non–research-oriented clinical care settings. However, recent national initiatives may promote more complete and standardized data collection across EMR-enabled clinical care settings. Greater adherence to standardized data collection may facilitate the role of EMRs in research and enable the sharing of phenotype definitions across EMR systems. The Centers for Medicare and Medicaid Services and the Office of the National Coordinator have written regulations defining meaningful use of EMRs that promote the recording of structured data and define coding standards for data categories such as diagnoses, laboratory tests, and medications. Clear documentation in EMRs is a necessary goal to achieve meaningful use and enables measurement and improvement in quality of care. Achieving this goal likewise improves the quality and volume of data available for research. Significant financial incentives for achieving meaningful use of an EMR (up to $63,750 per provider over 4 years) may increase the future availability of structured and standardized data from EMRs. Although EMR data may not capture the nuance of the human-human interaction between patient and provider, accurate and structured capture of diagnosis, laboratory test, and medication data, supplemented with text mining tools, has proved useful for identifying disease phenotypes for GWAS within the eMERGE network.

Widespread adoption of EMRs creates the potential for a quantum shift forward in the availability of longitudinal, real-world clinical data for genetics research. Our study suggests that current EMRs used for routine clinical care can be used to identify phenotypes for genetic studies. Future investment in the dissemination, standardization, and comprehensive capture of phenotypic and environmental data in EMRs will help to achieve rapidly scalable phenotyping efforts to match the proliferation of genomics data.

Materials and Methods

Using data from their EMR, each member of the eMERGE consortium selected a primary study phenotype and developed algorithms to identify the phenotype. We characterized EMRs as either internally or commercially developed and quantified the historical extent of data collection and primary methods and tools available to define phenotypes from the EMR (Table 1). We identified the primary consent model, recruitment numbers, and demographics of each biorepository. All sites received approval from their institutional review board for the conduct of this study.

We identified categories of EMR data used to define the five primary phenotypes (Table 2). At four of the five sites, as part of biorepository enrollment, additional data were collected on patients through an enrollment questionnaire (that is, additional data collection outside of the clinical EMR); the fifth site (Vanderbilt University) used an opt-out, de-identified collection model that precluded collection of biorepository-specific information.

For each data category, we generated a measure of data completeness, defined as percent of the cohort with at least one recorded entry within the EMR for each data category. We classified the type of data in each category as structured, unstructured (predominantly free text), or mixed. We defined structured data as numeric data or text data captured and stored in a predefined format as consistent with the current meaningful use definition. Unstructured data refer to data fields (for example, clinical notes) that typically require subsequent processing to be useful for phenotype identification algorithms. To identify a comparable cohort in each EMR, we defined study patients as those enrolled within the site’s biorepository who had at least two in-person visits to the healthcare institution documented within the EMR. For the analyses presented here, study patients were not limited to those with one of the primary phenotypes.
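A small sketch of the completeness measure as defined above: for each data category, the percent of the study cohort with at least one recorded entry. The patient record structure shown is a simplifying assumption for illustration.

```python
from typing import Dict, List

def completeness_by_category(cohort: List[Dict[str, list]],
                             categories: List[str]) -> Dict[str, float]:
    """Percent of the cohort with at least one recorded entry for each data category."""
    n = len(cohort)
    return {
        category: 100.0 * sum(1 for patient in cohort if patient.get(category)) / n
        for category in categories
    }

# Example with three hypothetical patient records.
cohort = [
    {"medications": ["metformin"], "smoking_status": []},
    {"medications": [], "smoking_status": ["never smoker"]},
    {"medications": ["lisinopril"], "smoking_status": []},
]
print(completeness_by_category(cohort, ["medications", "smoking_status"]))
# -> medications: about 66.7%, smoking_status: about 33.3%
```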

To determine the accuracy of defining phenotypes using EMR data alone, we reviewed 100 clinical charts from the EMR at each site. Three sites used clinician chart review as the standard to confirm the primary phenotype from the records. One site used the clinical gold standard for their primary phenotype. The remaining site used trained EMR chart abstractors to confirm the primary phenotype. We measured the PPV of EMR data to correctly identify cases for the primary phenotype compared with chart review (the standard). For three of the five phenotypes, we measured the NPV of EMR data to correctly identify control cases for the primary phenotype compared with the chart review standard. One of the study sites measured a quantitative trait (QRS duration, a measure of cardiac conduction) precluding measurement of an NPV. For the remaining phenotype—dementia—sufficient research quality control subjects were available from an ongoing prospective cohort study, and there was concern that reliable identification of controls from EMR data would be prohibitively difficult (53, 54).
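For reference, the PPV and NPV reported here follow the standard definitions, computed against the chart review standard; the brief sketch below uses made-up counts.

```python
def predictive_values(true_pos: int, false_pos: int,
                      true_neg: int, false_neg: int) -> tuple:
    """PPV = TP / (TP + FP); NPV = TN / (TN + FN), with chart review as the reference."""
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical counts from review of algorithm-identified cases and controls.
ppv, npv = predictive_values(true_pos=97, false_pos=3, true_neg=99, false_neg=1)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # -> PPV = 97%, NPV = 99%
```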

Footnotes

  • Citation: A. N. Kho, J. A. Pacheco, P. L. Peissig, L. Rasmussen, K. M. Newton, N. Weston, P. K. Crane, J. Pathak, C. G. Chute, S. J. Bielinski, I. J. Kullo, R. Li, T. A. Manolio, R. L. Chisholm, J. C. Denny, Electronic Medical Records for Genetic Research: Results of the eMERGE Consortium. Sci. Transl. Med. 3, 79re1 (2011).

References and Notes

  1. Funding: The eMERGE Network was initiated and funded by the National Human Genome Research Institute, with additional funding from the National Institute of General Medical Sciences through grants U01-HG-004610 (Group Health Cooperative), U01-HG-004608 (Marshfield Clinic), U01-HG-04599 (Mayo Clinic), U01HG004609 (Northwestern University), and U01-HG-04603 (Vanderbilt University, also serving as the Coordinating Center), and the State of Washington Life Sciences Discovery Fund award to the Northwest Institute of Medical Genetics. The Vanderbilt BioVU and the Synthetic Derivative were supported in part by Clinical and Translational Research Award grant 1 UL1 RR024975 from the National Center for Research Resources, NIH. Funding for the Northwestern Enterprise Data Warehouse (EDW) was supported in part by Clinical and Translational Research grant UL1RR025741 from the National Center for Research Resources, NIH. Author contributions: All authors participated in the design and interpretation of the experiments and results. A.N.K., J.A.P., P.L.P., L.R., K.M.N., N.W., P.K.C., J.P., C.G.C., S.J.B., R.L.C., and J.C.D. participated in the acquisition and analysis of data. A.N.K., J.A.P., P.L.P., C.G.C., K.M.N., N.W., I.J.K., J.C.D., and P.K.C. performed statistical analysis. A.N.K., P.L.P., K.M.N., J.C.D., and C.G.C. led data collection and validation from each participating site. All authors contributed toward writing and editing the manuscript. Competing interests: The authors declare that they have no competing interests.