Interoperability: Failure To Launch
by Walter Sujansky, Monday, February 2, 2015
On April 27, 2004, President George W. Bush proclaimed a bold goal for the nation: "Within the next 10 years, electronic health records will ensure that complete health care information is available for most Americans at the time and place of care, no matter where it originates. ... These electronic health records will be designed to share information privately and securely among and between health care providers when authorized by the patient."
More than 10 years later, most observers within the U.S. health care system and the health IT industry would agree that this goal of interoperability has not yet been achieved, and may not even be within clear sight. Although pockets of interoperability exist, the EHR systems used by inpatient, outpatient and ancillary providers generally cannot exchange patient data electronically, sometimes not even among systems developed by the same EHR vendor. Although myriad interoperability standards exist on paper, real-world connectivity between individual EHR systems still requires extensive custom interface development, attended by large and often prohibitive costs.
Last week, the Office of the National Coordinator for Health IT released a new draft 10-year nationwide interoperability roadmap, in which it aims to achieve basic electronic health data interoperability by 2017 and a full array of interoperable health IT products and services by 2024. Will we reach these objectives?
On May 25, 1961, President John F. Kennedy announced another ambitious technological undertaking for the U.S.: "I believe that this nation should commit itself to the goal, before this decade is out, of landing a man on the moon and returning him safely to earth." At the time, the U.S. had not even launched a man into earth orbit, but within eight years and two months, two Americans set foot on the moon and returned home safely. How had the U.S. achieved this audacious technological goal in less than a decade?
Although space travel and EHR interoperability are clearly very different endeavors, both are highly complex technical undertakings that require the coordination and collaboration of a great many people and parts. Is there anything we can learn from the successful 1960s Apollo program to help guide us in our efforts towards EHR interoperability today? Based on a detailed historical account of the Apollo program, I believe the answer is yes.
Clarity of Purpose
The Apollo program was blessed with a very specific goal: "Man, moon, decade." Although the task was fiendishly complex, its goal, criteria for success and time frame were straightforward. All design, prioritization and planning decisions could be judged against this goal, and any non-essential elements discarded. This allowed the project teams to do everything as simply as possible, reducing the number of ways things could go wrong while still adhering to a well-understood objective.
Health IT's quest for interoperability suffers from a notable absence of such clarity. On the one hand, too many interoperability use cases have been defined, ranging from lab-result reporting to transitions of care to quality measurement to research, and more. On the other hand, few of the use cases have included precise criteria for success against which technical design decisions can be weighed and later evaluated. The result has been technical standards that are, in general, excessively complex yet insufficiently constrained to achieve the level of interoperability that is widely (but vaguely) envisioned. Fewer and more precisely defined interoperability goals are needed to guide and focus our efforts.
Strong Central Technical Leadership
The development of the machines and processes needed for lunar missions entailed dozens of independent organizations (including many private-sector contractors), hundreds of engineers and thousands of pages of technical designs. However, a strong central core of managers, all very technically knowledgeable, made the key design decisions.
For example, initial designs called for a single "command module" craft to travel to the moon, descend to its surface, and then bring the astronauts home. It soon became apparent that a better approach was to have a separate "lunar module" spacecraft that accompanied the command module, separated from it to bring two astronauts to the moon's surface and back, and then remained in lunar orbit as the command module returned to earth.
At first, the large contractor that developed the command module objected to this idea, knowing that a separate company would likely be hired to build the very different lunar module. These objections were quickly and convincingly overcome by the program's managers, however, which allowed the more effective two-spacecraft design to proceed. On other occasions, NASA's own brilliant young engineers lobbied for more intricate or advanced designs that they had proudly conceived, but were overruled by technical managers laser-focused on the simplest design needed to achieve the objective. It was not a democracy.
The process for developing health IT interoperability standards is very different. Typically, communities of all interested parties are convened to design the standard specification through a consensus process. This collective design process is guided by a technically knowledgeable but minimally empowered moderator.
The breadth of perspective in these communities is valuable, and many very knowledgeable professionals with the best of intentions participate in them. However, the process is also subject to the whims of a less informed, less focused or more biased minority of participants.
Combined with an absence of clear objectives, this dynamic can lead, in the worst case, to a very persistent or vocal minority dictating a sub-optimal design. In the best case, the process suffers from much unproductive discussion and inefficiency.
A development process governed by more empowered technical leadership would likely produce more effective interoperability standards with less effort. Whether such leadership is drawn from the ranks of industry, government or academia is less important than that it be impartial, highly competent, experienced in system implementation and singularly focused.
Extensive Real-World Testing
The most important lesson of the Apollo program is the degree of real-world testing and iterative design required to engineer complex systems. At every critical step of development, the program took time to conduct extensive field testing, with meticulous observations and documentation of results, followed by thorough analysis and remediation of failure points.
Although all the spacecraft's individual components were brilliantly engineered and carefully evaluated while still on the ground, it was not until the components were fully assembled and put to their intended uses that certain critical shortcomings appeared. For example, an early unmanned launch of the giant new Saturn V rocket experienced a previously unanticipated engine failure late in its ascent because a design flaw in the fuel line did not manifest itself until the machinery was operating in the vacuum of space. Many other flaws, large and small, were also detected, diagnosed and addressed because of the carefully monitored real-world test missions that the process included.
The development of interoperability standards for health IT rarely involves such rigorous real-world testing. Pilot testing and trial use of newly developed standards do occur, but these phases are typically informal or hurried exercises that lack systematic planning, as well as careful observation and documentation of results. Most importantly, they lack rigorous analysis of failures and subsequent revisions to and retesting of the standards before they are finalized (or enshrined in formal certification criteria).
The "Connectathons" conducted by Integrating the Healthcare Enterprise are very well conceived and come closest to the testing rigor employed by the Apollo program. However, even these events take place in a controlled laboratory environment only. They demonstrate that systems conforming to a standard can be made to interoperate (sometimes after considerable ad hoc programming), rather than evaluating whether conformant systems can readily interoperate in the real world and, when they cannot, feeding this information back into a self-critical and iterative standards-development process.
As with the Apollo program, one of the most effective uses of government resources would be to fund the implementation of candidate interoperability standards in production settings by a variety of vendors and the independent assessment of these standards' performance and shortcomings, followed by iterative modifications and re-testing before the standards are announced as complete.
Reasons for real-world failure of interoperability standards include not only patent errors in design, but also under-specification (such that fully compliant systems still cannot interoperate) or excessive complexity, inadequate documentation, and/or incomplete testing resources (such that fully compliant systems are very difficult to build). Laboratory testing alone, like ground testing of spacecraft, cannot detect all of these shortcomings.
Leveraging Lessons Learned
One might reasonably argue that the space program received much greater government funding than health IT, and this enabled the clear direction, strong technical leadership and rigorous testing that (among other things) enabled the moon shot to succeed. Indeed, between the announcement of the goal in 1961 and the first moon landing in 1969, the federal government spent $88 billion on the Apollo program (in 2009 dollars). This is well in excess of the roughly $30 billion allocated by the 2009 HITECH Act for the development and widespread adoption of interoperable EHRs.
However, given that the Apollo program entailed the physical construction of massive rockets, intricate spacecraft, worldwide tracking stations, and giant launch facilities and control centers, the financial demands of achieving interoperability among health care software applications are perhaps more modest.
If we can adjust our basic approach based on practices from another complex but ultimately successful technological challenge, there is reason to be optimistic that health IT interoperability is also within our grasp.
Source: iHealthBeat, Monday, February 2, 2015