Leslie Lamport was famously quoted as saying: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.”
This truism has been highlighted over the last few years by often unforeseen incidents and the consequences of failures in highly complex, loosely integrated systems, especially when feeder systems run through distant, unrelated and often multiple applications. While the advantages of federated systems are substantial, they are sadly highly susceptible to failure in any one part of the network.
To manage such chaos, highly resilient systems and redundancy are essential, as the loss of both intended and unintended connections between systems can seriously affect patient care, resource allocation, workforce, data quality and collection, as well as invite the inevitable political and media stressors. External agents can also create chaos, as Sony is now so clearly aware, and health is just as susceptible to such attacks, if not more so.
The solution is not simply good design and architecture (essential though they are) but also expensive mirroring, security, cloud use, redundant storage systems and well-planned paper backup solutions. Achieving 100%, 24×7 system availability should be the holy grail of ehealth systems, but it is very, very expensive. Many non-clinical architects, analysts and developers downplay the need for such costs or support systems, but as ehealth becomes ever more integrated into the day-to-day clinical management of patients, these costs must be seriously considered, and rejected or watered down only after thoughtful consideration of the tragic impacts that may follow.
As ehealth delivers massive gains in clinical safety, effectiveness and efficiency, the marginal gains begin to shrink relative to the growing, insidious potential harm of major system failures.
Comedian Bill Murray said, “Don’t think about your errors or failures; otherwise, you’ll never do a thing,” and while there is a modicum of wisdom in his words, it is critical that we understand and learn from our errors and failures when another’s health is at risk. Sadly, if one looks at the last 20 years in ehealth, we do learn, but ever so slowly and only after the same outcomes repeat over and over. Fortunately, we do learn!
As we move rather rapidly down the integration pathway, we should keep asking our clinicians to consider the impacts of sudden or silent failure, and ensure that solid, well-practiced backup processes with auditing are in place, not left to a rushed search for a notebook and pen when the inevitable happens.
Isaac Asimov understood this when he said: “It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”
Our work continues!