Advancing Interoperability: Data Quality Challenges in the Automation of Medication Reconciliation

Best-practice medication reconciliation compares medication information from at least two different sources to determine what medications a patient is actually taking versus what they should be taking. It is well established that medication reconciliation is critical to successful transitions of care and has been demonstrated to reduce readmissions and improve outcomes. However, automating this process is challenging because the sources of medication data come from different stakeholders and systems.
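
At its core, that comparison is a diff between two coded medication lists. The sketch below is a deliberately minimal Python illustration, using placeholder codes rather than real RxCUIs, of how two sources can be compared once both are keyed on a common identifier; real reconciliation must also account for dose, route, sig, and therapeutic duplication.

    # Hypothetical example: two medication lists keyed on a shared code.
    # The code values below are placeholders, not real RxCUIs.
    def reconcile(source_a: dict[str, str], source_b: dict[str, str]) -> dict[str, list[str]]:
        """Each source maps a medication code to a human-readable description."""
        return {
            "only_in_a": sorted(set(source_a) - set(source_b)),  # e.g., documented but never filled
            "only_in_b": sorted(set(source_b) - set(source_a)),  # e.g., filled but not documented
            "in_both": sorted(set(source_a) & set(source_b)),
        }

    ehr_meds = {"111111": "Drug A 10 MG Oral Tablet", "222222": "Drug B 500 MG Oral Tablet"}
    claims_meds = {"111111": "Drug A 10 MG Oral Tablet", "333333": "Drug C 200 MG Oral Tablet"}
    print(reconcile(ehr_meds, claims_meds))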

To drive efficiencies in the medication reconciliation process, it is critical to leverage every available data interoperability opportunity, because gathering data and manually entering it into a solution can be the most labor-intensive part of the process. Automated ingestion of a variety of data sources can pre-populate a medication reconciliation solution with as much data as possible, streamline workflow, enable automated processes, facilitate risk review, and support data analytics. Increased regulatory pressure and industry support for interoperability initiatives, the growing adoption of data standards, and a significant increase in the availability of APIs and interface programs have all improved access to data. However, the data ingested by any smart system must be high quality and consistent, or attempts to automate workflow processes or leverage that data for comprehensive analytics will quickly break down.
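
As one illustration of automated ingestion, the following sketch assumes a FHIR R4 server that exposes the standard MedicationRequest search; the base URL and patient ID are placeholders, and a production integration would also handle SMART on FHIR authorization, Bundle paging, and error handling.

    import requests

    FHIR_BASE = "https://fhir.example.org/r4"   # placeholder endpoint
    PATIENT_ID = "12345"                        # placeholder patient identifier

    def fetch_active_medication_requests(base: str, patient_id: str) -> list[dict]:
        """Pull active MedicationRequest resources for one patient via FHIR search."""
        resp = requests.get(
            f"{base}/MedicationRequest",
            params={"patient": patient_id, "status": "active"},
            headers={"Accept": "application/fhir+json"},
            timeout=30,
        )
        resp.raise_for_status()
        bundle = resp.json()
        # Each returned resource can pre-populate one row of a reconciliation worklist.
        return [entry["resource"] for entry in bundle.get("entry", [])]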

Specific challenges to automation and analysis directly related to data quality and completeness include:

  • Inconsistencies or gaps in patient identification or matching criteria in the process of populating a record (a simple matching sketch follows this list)
  • Failed correlation or lookup of clinical data to patient records
  • Gaps in matching or harmonizing medication data from different sources
  • Insufficient data resolution to support comprehensive evaluation and risk assessment
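
As an illustration of the first two gaps, the sketch below builds a deterministic match key from a normalized name and date of birth; the attribute choices and code are illustrative only, not a production matching algorithm. Small upstream inconsistencies in exactly these attributes are what break automated linkage, and records missing a required attribute fall out for manual review.

    import unicodedata
    from datetime import date

    def normalize_name(name: str) -> str:
        """Strip accents, punctuation, and case so variant spellings compare equal."""
        ascii_only = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
        return "".join(ch for ch in ascii_only.lower() if ch.isalpha())

    def match_key(last: str, first: str, dob: date | None) -> str | None:
        """Deterministic key for record linkage; None means route to human review."""
        if not last or not first or dob is None:
            return None  # required attribute missing; automated matching is unsafe
        return f"{normalize_name(last)}|{normalize_name(first)}|{dob.isoformat()}"

    # Two feeds describing the same person match only after normalization.
    print(match_key("O'Brien", "Anne Marie", date(1950, 3, 1)))
    print(match_key("OBRIEN", "ANNE-MARIE", date(1950, 3, 1)))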

Even a resilient system, with the ability to ingest data across different schemas and the flexibility to support a variety of user workflows, will require in-depth assessment and testing of data quality at the time any new data set is implemented.

Consequences of poor data quality include but are not limited to:

  • Splitting of records 
    • Example: The same patient may have multiple distinct records due to inconsistencies in upstream data entry or changes in a patient's program eligibility over time; these records must be merged to prevent data silos.
  • Missing key attributes
    • Example: A recent implementation that ingested the C-CDA output of a popular EHR revealed that the available C-CDA included descriptive medication information in textual form but no coded values such as NDC, RxNorm, or GPI (one mitigation is sketched after this list).
  • Stranding of data in external sources
    • Example: When data from an external source is pulled via a lookup query, the number of key reference attributes required for a minimum patient match to that source may strand data. For example, a lookup that requires ZIP code as a patient match attribute may strand data for patients with unstable addresses, a characteristic frequently found in high-risk populations.
  • Versioning of data and standards
    • Example: Old data, or the use of old standards, may lead to failed record matches against the most current reference database. Obsolete NDCs in a medication record, for instance, may be unmatchable to a reference source, stranding the medication from automated processing (the sketch after this list also includes an NDC status check).
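
Two of the gaps above, text-only medication entries and obsolete NDCs, can often be narrowed with terminology services. The sketch below assumes the public RxNav REST API (its approximateTerm and ndcstatus endpoints); the response handling is simplified, and field names should be verified against the current RxNav documentation before being relied upon.

    import requests

    RXNAV = "https://rxnav.nlm.nih.gov/REST"

    def rxcui_from_text(description: str) -> str | None:
        """Best-effort mapping of a free-text medication string to an RxCUI."""
        resp = requests.get(f"{RXNAV}/approximateTerm.json",
                            params={"term": description, "maxEntries": 1}, timeout=30)
        resp.raise_for_status()
        candidates = resp.json().get("approximateGroup", {}).get("candidate", [])
        return candidates[0]["rxcui"] if candidates else None

    def ndc_is_active(ndc: str) -> bool:
        """True when the NDC is still current; obsolete codes need remapping or review."""
        resp = requests.get(f"{RXNAV}/ndcstatus.json", params={"ndc": ndc}, timeout=30)
        resp.raise_for_status()
        status = resp.json().get("ndcStatus", {}).get("status", "")
        return status.upper() == "ACTIVE"

    # Entries that return no RxCUI or a non-active NDC are flagged for human review
    # rather than silently dropped from the reconciliation worklist.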

While algorithmic approaches can be devised to address many of the challenges described here, they are not always complete or even possible. In these instances, a mechanism for human workaround can be devised to bridge “last mile” data issues and provide a comprehensive data landscape to support clinical workflows.
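
One minimal sketch of such a workaround, with illustrative class and field names only: anything the automated steps cannot resolve is queued for human review, together with the reason automation failed, rather than being silently dropped.

    from dataclasses import dataclass, field

    @dataclass
    class ReviewQueue:
        """Holds records the automated pipeline could not resolve, with a reason."""
        items: list[dict] = field(default_factory=list)

        def add(self, record: dict, reason: str) -> None:
            self.items.append({"record": record, "reason": reason})

    queue = ReviewQueue()
    med = {"source": "C-CDA", "text": "metformin 500 mg tablet", "rxcui": None}
    if med["rxcui"] is None:
        queue.add(med, reason="no coded value and no confident approximate match")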

Anne Marie Biernacki, Co-founder and Chief Technology Officer, ActualMeds Corporation. @AnneMBiernacki, https://www.linkedin.com/in/annemariebiernacki/