
Preparing for the future of connected health: The case for interoperability 

January 27, 2021 Bill Betten

Since its arrival in early 2020, COVID-19 has dramatically impacted virtually every aspect of life worldwide. One area that rose to prominence in response to pandemic-driven challenges was the increased use of connected devices, as telemedicine services were rapidly put in place to replace or supplement in-person visits and minimize infection risk.

However, for connected health to continue to expand beyond Zoom-style calls with medical professionals and fully realize its promise, fundamental challenges regarding the integration and interoperability of various medical systems must be addressed.

 

Disjointed data is slowing down progress

The amount of patient health data created every year is staggering, and it continues to grow exponentially. Current estimates suggest a single patient generates 80 megabytes of data a year; that’s approximately 608 petabytes in total globally. 
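As a rough sanity check on those numbers (assuming, purely for illustration, a global population of about 7.6 billion people), the per-patient figure scales up to the global total like this:

```python
# Back-of-envelope check of the global data estimate (figures are illustrative).
mb_per_patient_per_year = 80          # ~80 MB of data generated per patient per year
global_population = 7.6e9             # assumed world population of roughly 7.6 billion

total_bytes = mb_per_patient_per_year * 1e6 * global_population
total_petabytes = total_bytes / 1e15  # 1 PB = 10^15 bytes
print(f"{total_petabytes:.0f} PB per year")  # -> 608 PB per year
```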

More data is not a bad thing: the increased use of data has a range of benefits for all stakeholders in the industry. But at present, individual patients – and the data they produce – are rarely contained within a single health system. Data is often split across multiple sources, systems, and constituencies. I have experience of this myself, having recently been asked to carry imaging disks between two different sites when undergoing brain surgery. 

At best, this is an inconvenience. At worst, it could see patients receive delayed or even inappropriate care: co-morbidities and potential drug interactions may not be readily apparent if the data on hand in a given health system is limited.

However severe the impact, this disjointed data is a significant barrier to successful connected health. Not only does inadequate data interchange make it slow and complicated to move information, but working from partial data sets also makes it difficult to conduct rigorous, holistic analysis and harder to identify new solutions.

Taking steps to rectify this issue and integrate disparate data sets is a crucial part of preparing for the future of connected health.

 

The case for interoperability 

Given the way that healthcare systems segregate medicine and medical practitioners into different categories and specializations, it’s not surprising that silos naturally develop. But since our bodies are a system with many interconnected components, we can’t allow patient care to fall into niches and obscure the greater picture.

For truly connected healthcare to succeed, the industry needs to establish interoperability across the entire length and breadth of the ‘connected health continuum’. That means starting with devices at the bottom, reaching up to the cloud level, and ultimately providing integrated information to the various constituencies at the top.

 

Device-level

At the device level, interoperability needs to follow a hierarchy of communication: that is, not every device will need every possible feature, but all need to be able to share information at the right level for their purposes. Depending on the case, that might be a one-way transmission of information to a hub device – with a connected pulse oximeter, say – or it might mean sharing information back and forth. 
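As a purely hypothetical sketch of that hierarchy (the class names and command below are invented for illustration and don't reflect any specific product or standard), a hub might accept one-way readings from a simple sensor such as a pulse oximeter while exchanging configuration back and forth with a more capable device:

```python
# Minimal sketch of a device-level communication hierarchy (all names are hypothetical).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Reading:
    device_id: str
    metric: str     # e.g. "spo2" or "heart_rate"
    value: float
    timestamp: str  # e.g. "2021-01-27T10:15:00Z"


class TwoWayDevice:
    """A device that both reports readings and accepts configuration in return."""

    def __init__(self, device_id: str) -> None:
        self.device_id = device_id
        self.last_command: Optional[str] = None

    def handle_command(self, command: str) -> None:
        self.last_command = command


class Hub:
    """Collects readings from every device; talks back only to devices that support it."""

    def __init__(self) -> None:
        self.readings: List[Reading] = []

    def receive(self, reading: Reading) -> None:
        # One-way path: e.g. a connected pulse oximeter pushing a single reading.
        self.readings.append(reading)

    def send_command(self, device: TwoWayDevice, command: str) -> None:
        # Two-way path: e.g. adjusting the sampling interval on a capable device.
        device.handle_command(command)
```

The point of the hierarchy is that the simple sensor only needs the one-way path, while richer devices also use the command channel; neither needs every feature to participate.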

Whatever the individual situation, integrating information from different devices allows us to gain a more holistic view of a patient and their condition. It also captures the context in which that data was collected, giving us more evidence to treat illness comprehensively and to better predict and prevent illness in the future.

 

Cloud-level

The next stage of interoperability is when information from individual devices enters the cloud. Relevant data streams must be brought together in the cloud, and they must be processed, normalized, and combined so meaningful conclusions can be extracted. To reference IBM’s Four Vs of Big Data: understanding the volume, variety, velocity, and veracity of the data at our disposal is key to drawing useful and accurate insights. 

Exactly how much data is shared will vary – it could be bytes, or it could be tens of gigabytes – so it’s important that the right infrastructure is in place to manage the appropriate amount of data. It’s worth noting that more data doesn’t necessarily mean better results. No device manufacturer needs all your ECG data for an entire year, for example: they only want what’s necessary to identify the need for intervention.
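To illustrate both points in one simplified sketch (the payload fields, vendor formats, and the 92% threshold are all invented for the example), a cloud pipeline might normalize readings arriving in different formats into a common schema and forward only the events that suggest a need for intervention:

```python
# Sketch: normalize readings from two hypothetical vendor formats, then keep
# only the events that cross an intervention threshold (values are illustrative).

def normalize(raw: dict) -> dict:
    """Map vendor-specific payloads onto one common schema."""
    if "spo2_pct" in raw:               # hypothetical vendor A format
        return {"metric": "spo2", "value": raw["spo2_pct"], "ts": raw["time"]}
    if "oxygen_saturation" in raw:      # hypothetical vendor B format
        return {"metric": "spo2", "value": raw["oxygen_saturation"], "ts": raw["timestamp"]}
    raise ValueError("unknown payload format")


def needs_intervention(event: dict, threshold: float = 92.0) -> bool:
    """Flag only readings that may warrant clinical attention."""
    return event["metric"] == "spo2" and event["value"] < threshold


raw_stream = [
    {"spo2_pct": 97, "time": "2021-01-27T10:00:00Z"},
    {"oxygen_saturation": 90, "timestamp": "2021-01-27T10:05:00Z"},
]

events = [normalize(r) for r in raw_stream]
alerts = [e for e in events if needs_intervention(e)]
print(alerts)  # only the low reading is forwarded for follow-up
```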

This is where computational power may help. Tools like AI can identify what's truly important within a given data set. The ability to work with huge data sets when necessary also provides a far greater chance of identifying trends that may lead to better treatments for patients, or help predict and prevent future illnesses.

For example, the Wyss Center’s implantable brain monitoring device, Epios, tracks and analyzes long-term monitoring data to accurately forecast the risk of seizures in epilepsy sufferers, alerting patients so they can plan their lives according to the likelihood of having a seizure. 

To make such extensive analysis possible, however, we need a mutually intelligible way of storing and sharing data to facilitate better communication between different doctors, hospitals, and specialties.

This is even more important given the rise of remote monitoring in healthcare. A desire for convenience and a need for more virtually facilitated care as a result of the coronavirus pandemic are changing the way patients want to be treated. Whether it's simple tools that identify irregular heartbeats, continuous glucose monitors for diabetics, or tools that manage home dialysis treatment, patients are far more comfortable when monitored and treated at home.

And with a shortage of doctors and medical practitioners looming – especially in rural areas of the US – better, data-powered remote monitoring could be a lifeline for many. Cloud-level interoperability would mean data could follow patients wherever they receive care, enabling seamless healthcare provision and providing more comprehensive information for clinicians.

 

Constituency-level

In today’s environment, however, we’re still a long way from having the ability to seamlessly interconnect our information and data paths.

The ultimate end goal would be to create devices that interface into one central system, with all stakeholders having access to medical information in one place. The UK NHS Patient Access system is a partial solution, enabling patients and clinicians to access health data anywhere, on any device (although there is, as yet, no direct link with medical devices).

Such a system would simplify the healthcare journey for patients, make collaboration easier for medical staff, and enable medical device companies to build something that works for everyone. Devices could be sold to multiple constituencies, those constituencies would have a broader pool of insights to work from, and patients would be able to manage their existing conditions more easily.

Almost ten years ago, however, a staggering $38bn was spent to introduce electronic health records to hospitals and care facilities across the US through the incentive programs established by the HITECH Act. While ambitious, the project wasn't broad enough in scope and provided little incentive for other constituencies to cooperate in sharing data. It also failed to meet clinician needs, leading to increased workload, spiraling costs, and reduced efficiency.

It’s still true that creating successful interoperability requires finding a common language between all relevant stakeholders. But this needs to be done in a way that is efficient, beneficial, and financially viable for all parties involved – not least the payors – to ensure it is effective and provides a suitable incentive to participate.

 

The future of interoperability

True interoperability hasn’t reached the medical device space yet. While the use of commercially available communication technologies such as Bluetooth, Wi-Fi, and cellular has enabled devices to connect on one level, on another they may as well be speaking different languages.

But the unprecedented levels of cooperation across the entire industry during the pandemic – the sharing of ventilator specs, real-time experience, and outcome data, or vaccine collaboration, for example – give hope that interoperability is, in fact, possible beyond times of crisis.

Only small steps are needed to make additional progress in that direction. Setting communication protocol standards should be an early priority. Medical device companies and healthcare companies also need to start designing with interoperability in mind, and they should be very clear on their goals: where the information gathered will go, what it will be used for, and how it will benefit all those involved.

And finally, while technology is key to achieving interoperability, transforming infrastructure and changing mindsets around the path of value in the industry will have a huge role to play, too. While the pandemic spurred a dramatic increase in the use of telemedicine, primarily as a replacement for in-person visits, the real enabler in the US was the change in reimbursement rules to pay for such visits. If we as an industry can address the technical, commercial, and infrastructural barriers, then connected devices, and truly connected healthcare, will become a reality.

 

Bill Betten

Director of Solutions

S3 Connected Health

 


 
For more information on the future of connected devices, read our guest blog with Mark Wehde, Section Head of Technology Development and Assistant Professor of Biomedical Engineering at Mayo Clinic.