Can Data Provide the Trust we Need in Health Care?
Collecting actionable data is a challenge for today's data tools
One of the problems dragging down the US health care system is that nobody trusts one another. Most of us, as individuals, place faith in our personal health care providers, which may or may not be warranted. But on a larger scale we’re all suspicious of each other:
- Doctors don’t trust patients, who aren’t forthcoming with all the bad habits they indulge in and often fail to follow the most basic instructions, such as to take their medications.
- The payers–which include insurers, many government agencies, and increasingly the whole patient population as our deductibles and other out-of-pocket expenses rise–don't trust the doctors, who waste an estimated 20% or more of all health expenditures, including some $30 billion or more in fraud each year.
- The public distrusts the pharmaceutical companies (although we still follow their advice on advertisements and ask our doctors for the latest pill) and is starting to distrust clinical researchers as we hear about conflicts of interest and difficulties replicating results.
- Nobody trusts the federal government, which pursues two (contradictory) goals of lowering health care costs and stimulating employment.
Yet everyone has beneficent goals and good ideas for improving health care. Doctors want to feel effective, patients want to stay well (even if that desire doesn't always translate into action), the Department of Health and Human Services champions very lofty goals for data exchange and quality improvement, clinical researchers put their work above family and comfort, and even private insurance companies are moving to "fee for value" programs that ensure coordinated patient care.
What can we do to stop pulling in different directions and put our best ideas into practice? Data is often the basis for trust. If we collect data on the most important activities in health and use it wisely, we may–perhaps–be able to set up a system in which everyone can place their trust.
So let's look at four key areas of health care reform–"fee for value" cost containment, patient engagement, clinical research, and quality improvements–to see how data can interact with new ways of working to fix the problem of trust.
“Fee for value” cost containment
Software companies have learned not to pay programmers by the amount of code they write, and corporations are learning not to pay lawyers by billable hours. Medicare and private insurers are trying hard to move similarly from paying doctors for the number of procedures performed to paying them to actually cure the patient.
The key to paying doctors fairly is risk stratification, which places each patient in a stratum based on how hard he or she is to cure. If I have high blood pressure, it makes my heart disease harder to cure, and if I have high blood pressure along with diabetes and obesity, it makes the job even harder. Fee-for-value pays doctors a different amount if the patient has contributing problems (appealingly called “comorbidities”), and thus forces them to consider all the factors instead of just treating one condition in isolation.
But how much should each patient cost? Here is where data becomes critical. We need to know how much care was needed by a large set of patients who suffer from high blood pressure, diabetes, obesity, and heart disease. Throw in tobacco use and other comorbidities and you see how complicated risk stratification is.
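To make the arithmetic concrete, here is a minimal sketch (not any payer's actual model) of how claims data could be grouped into strata by comorbidity combination, with an average cost computed for each stratum. The patient records, condition names, and costs are invented for illustration.

```python
from collections import defaultdict

# Hypothetical patient records: each has a set of diagnosed conditions
# and the total cost of care over some period.
patients = [
    {"id": 1, "conditions": {"hypertension", "heart disease"}, "cost": 14_000},
    {"id": 2, "conditions": {"hypertension", "diabetes", "obesity", "heart disease"}, "cost": 31_000},
    {"id": 3, "conditions": {"hypertension", "heart disease"}, "cost": 12_500},
    {"id": 4, "conditions": {"heart disease"}, "cost": 9_000},
]

# A stratum is defined here simply as the sorted tuple of comorbidities.
strata = defaultdict(list)
for p in patients:
    key = tuple(sorted(p["conditions"]))
    strata[key].append(p["cost"])

# Average cost per stratum: the kind of benchmark a payer would need,
# but only trustworthy with large, properly coded samples.
for key, costs in sorted(strata.items()):
    print(f"{' + '.join(key)}: n={len(costs)}, mean cost=${sum(costs)/len(costs):,.0f}")
```

With thousands of properly coded patients per stratum, averages like these could anchor fair payments; with the handful shown here they mean nothing, which is exactly the data problem described below.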
To get straight to the point: we can’t figure all that out now. We just don’t have the data. To do risk stratification right:
- All the information on these patients has to be coded properly in electronic records. Each of the relevant fields has to be properly entered, and changes such as relapses have to be noted. But most electronic records offer little more structure than paper (we'll see an exception later in this article). The records don't allow unambiguous coding, and when they do, the staff are often untrained or too busy to do it right.
- Records must be collected from large numbers of patients in each stratum in order to gather statistically significant samples. This requires spanning multiple providers. But providers rarely share data, and when they do, they usually just send continuity-of-care summaries that lack the detail needed for stratification.
- Clinical and financial information need to be looked at together.
- Analysis must find the actual costs of treatment. But until we get the data, we can’t refine the analytics to develop trustworthy estimates of costs. It’s a highly complex art, requiring software to
- Recognize "episodes of care" that combine many interventions by many clinicians to solve a single problem such as an infection or heart attack (a rough grouping sketch follows this list).
- Look for patterns such as readmissions and the interactions among comorbidities.
- Follow longitudinal changes in patients over the years.
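As a rough illustration of the first task, the sketch below groups one patient's claims into episodes of care using a single simplifying assumption: claims separated by more than 30 days start a new episode. Real episode groupers use diagnosis codes and clinical logic; the claims and the 30-day gap here are invented.

```python
from datetime import date, timedelta

# Hypothetical claims for one patient: (patient_id, service_date, description, cost)
claims = [
    ("p1", date(2014, 3, 1), "ER visit, chest pain", 2_400),
    ("p1", date(2014, 3, 3), "cardiac catheterization", 11_000),
    ("p1", date(2014, 3, 20), "cardiology follow-up", 250),
    ("p1", date(2014, 9, 2), "flu visit", 120),
]

GAP = timedelta(days=30)  # simplifying assumption: a 30-day gap closes an episode

def group_episodes(claims):
    """Group a single patient's claims into episodes by date gaps."""
    episodes, current = [], []
    for claim in sorted(claims, key=lambda c: c[1]):
        if current and claim[1] - current[-1][1] > GAP:
            episodes.append(current)
            current = []
        current.append(claim)
    if current:
        episodes.append(current)
    return episodes

for i, episode in enumerate(group_episodes(claims), 1):
    total = sum(c[3] for c in episode)
    print(f"Episode {i}: {len(episode)} claims, total cost ${total:,}")
```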
I think the institutions driving fee-for-value (Centers for Medicare & Medicaid Services, and private insurers such as Blue Cross Blue Shield of Massachusetts) have to bite the bullet and accept that we are not ready for risk stratification on a scale that will put fee-for-value on a valid foundation. When we factor in health provider quality–which I’ll cover later–the hill becomes even harder to climb.
Before I look for solutions to this dilemma, I’ll turn to the other issues of trust.
Patient engagement
None of us ever wants to go to the doctor–almost everyone would rather take care of problems ourselves. Now huge investments are going into the realization of this ideal, by giving patients tools to manage their lives more healthily.
This goal has come within reach thanks to advances in electronics and software, allowing small, cheap, robust consumer devices to measure the vital signs for which we used to need expensive equipment manned by experts. And our doctors can stay in the loop by tuning in to the devices and keeping track of us. If a patient is gaining or losing weight, if her mood takes a turn for the worse, or if she just stops taking her medication–technology can notify doctors even of that–the health provider can react right away. No need to wait three months for the patient to come in for an appointment and try to remember what she did during that whole time.
In short, device data is precise (if the device works right), verifiable, and tied to a particular place and time. As explained by developer Shahid Shah (see slides 3 through 6 of this presentation), devices are the best sources of data for a scientific approach to health care. In other words, they can lead a doctor to trust data collected in the field.
If the health care provider works in sympathy with the patient and bases effective, encouraging interventions on the data (which could be as simple as a phone call from a nurse when the patient is slipping), devices can also help the patient's trust in the provider blossom.
Reliable device data may also support fee-for-value reimbursements and even cut fraud. Imagine if a doctor had to back up diagnoses with information from devices (as doctors do now with test results), and to show improvement in the patient through device-reported results over the months of treatment.
OK, enough boosterism–where do the flies reside in the ointment? Turns out there are plenty:
- Rachel Kalmar, who spoke at an O’Reilly Strata Rx conference, reports that devices are unreliable. She tests multiple fitness devices by wearing them simultaneously, and has found big differences in what they report. The relative change on a single device is meaningful, but the absolute value is not to be trusted. Medical devices that are marketed to diagnose or treat disease are another story–these are regulated by the FDA, along with devices that intermediate between the instruments and electronic health records. Industry fears that the FDA will extend regulation to other devices and software have led to various attempts in Congress to rein the FDA in. But we need some mechanism to provide trust in all our devices. Wristbands that measure your pulse or oxygen use don’t have the heightened safety concerns of X-Ray machines and other regulated devices, but an independent rating system would enable clinics and hospital systems to justify their use, and insurers to pay for them.
- To produce data we trust, devices should be trustworthy, in the sense that they're resistant to intrusion or external manipulation. Medical devices–even those responsible for keeping us alive, such as cardioverter-defibrillators–are insecure. They share this vulnerability with many other sensors in cars, electrical meters, and other elements of the Internet of Things. In addition to malicious intruders, we may have to start worrying about patients or doctors altering devices to produce output that promotes some agenda. This unfortunately can become a temptation to individuals if they enter employee reward programs for healthy activity, and to doctors if they need to present information from devices to get reimbursements. It is probably time to add encryption and authentication to devices (a small signing sketch follows this list), but these require more processing power, take up time during calculations, and increase the size of data transfers. This in turn can raise the cost of devices, add complexity (a source of error in itself), soak up more power, and put more of a burden on the network. So securing the Internet of Things, including the Internet of our bodies, presents problems.
- We're not done. In addition to trusting devices, we need to integrate their data. FDA-regulated devices have trustworthy results, but neither these devices nor the consumer equipment is good at communicating those results. The data is just displayed on a screen, or perhaps uploaded to a vendor site. Patients can't even get the data that pacemakers collect. In both the home and the hospital, devices occupy tiny worlds all their own, emitting chirps and beeps but little actionable data. It's as if the manufacturers got carried away with their technology and produced gadgets rather than tools for meaningful health care. A successful device strategy must retrieve data from devices in standard units and a standard digital format, and store it in a central record for each patient (a small normalization sketch also follows this list). Then the analysis can begin.
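On the security point, here is a minimal sketch of one common safeguard: the device signs each reading with a keyed hash (HMAC) so the receiver can detect tampering. The key handling is deliberately naive (real devices need secure key provisioning and storage), and the extra computation and bytes are exactly the overhead described above.

```python
import hmac, hashlib, json

SHARED_KEY = b"device-specific secret"  # in practice, provisioned and stored securely per device

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC so the receiver can verify the reading wasn't altered in transit."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "hmac": tag}

def verify_reading(message: dict) -> bool:
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

msg = sign_reading({"device_id": "bp-42", "systolic": 138, "diastolic": 87, "ts": "2014-06-01T09:30:00"})
print(verify_reading(msg))            # True
msg["reading"]["systolic"] = 118      # tampering is detected
print(verify_reading(msg))            # False
```

On the integration point, here is a minimal sketch of normalization: vendor-specific messages are translated into one standard representation (kilograms, in this case) before being appended to a central record per patient. The vendor names, field names, and formats are hypothetical.

```python
# Hypothetical raw messages from two different device vendors.
raw_messages = [
    {"vendor": "acme", "pid": "p1", "wt_lb": 183.0, "time": "2014-06-01T08:00"},
    {"vendor": "zenith", "patient": "p1", "weight_kg": 82.8, "timestamp": "2014-06-02T08:05"},
]

def normalize(msg):
    """Translate a vendor-specific message into one standard observation (kilograms)."""
    if msg["vendor"] == "acme":
        return {"patient_id": msg["pid"], "type": "body_weight",
                "value": round(msg["wt_lb"] * 0.453592, 1), "unit": "kg", "time": msg["time"]}
    if msg["vendor"] == "zenith":
        return {"patient_id": msg["patient"], "type": "body_weight",
                "value": msg["weight_kg"], "unit": "kg", "time": msg["timestamp"]}
    raise ValueError(f"unknown vendor: {msg['vendor']}")

central_record = {}  # patient_id -> list of normalized observations
for msg in raw_messages:
    obs = normalize(msg)
    central_record.setdefault(obs["patient_id"], []).append(obs)

print(central_record["p1"])
```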
A modern approach
A modern approach to data capture and analysis can be found in Capsule, a company that makes equipment for monitoring and using data from devices. Recognizing the importance of analysis, they are evolving into a cloud-based software company as well. I had the opportunity to talk to Stuart Long, Chief Marketing & Sales Officer.
Their current systems work in hospital wards that may have tens of thousands of devices monitoring their patients. Just deciding when to turn these on and off, checking their batteries, and determining how often each should report an update is daunting, let alone deriving life-saving information from the collective knowledge represented by these devices.
Capsule has recently released an “active monitoring device” in FDA terminology. Its first instantiation, over four years ago, sat on top of spot check monitors, which report information collected by the devices connected to poles and bedside equipment. Eventually, Capsule learned to connect directly to the sensors themselves, taking the other monitors out of the picture and collapsing a variety of technology into a unified mobile platform as part of their Medical Device Information System. Their electronics are designed to enable quick replacements in case of failures, without having to open a device. They also hope their platform will let new sensor technologies flourish and learn to work together.
Their product, the SmartLinx Medical Device Information System, acts as a layer between the devices and the electronic health records, interpreting the information from the device and translating it into a format the EHR can understand.
But collecting data is just the start. Capsule began to incorporate analytics into their product, which now includes machine learning, natural language processing, and semantic analysis. These tools find critical information in the combined data from devices and in any data residing anywhere in the patient's medical record, including all structured, unstructured, and real-time physiologic data.
The leading clinical use–and the first in a planned series of clinical surveillance apps–for their SmartLinx platform is the detection of sepsis, which can develop and kill a patient within hours. Clinical Vigilance for Sepsis (powered by Amara Health Analytics) tracks current and previous vital statistics in real time, and can warn the clinician of sepsis before it becomes life-threatening. Current published data demonstrates a sensitivity greater than 99%. Next on tap for clinical surveillance applications will be further analysis once sepsis is ruled out: an all-conditions patient deterioration solution can provide clinical decision support to help the physician determine what the real problem is.
Finally, after sepsis is treated, SmartLinx Clinical Vigilance continues to monitor progress and can alert clinicians if sepsis progresses to a more critical acuity, such as severe sepsis or septic shock, when a drug is not working. Given the high rate of antibiotic-resistant infections in hospitals, this is particularly valuable. Capsule calls the combination of their systems–which connects and integrates all medical devices, actively monitors patients, and provides real-time surveillance for patient deterioration at the point of care–healthcare's first "medical device information system."
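Capsule and Amara do not publish their algorithms here, so the following is only a toy illustration of what real-time vital-sign surveillance looks like in general: a stream of readings is checked against a simplistic SIRS-style screening rule (two or more abnormal vital signs) and an alert is raised. The thresholds and readings are illustrative, not clinical guidance, and certainly not the vendors' method.

```python
# Toy screening rule loosely based on SIRS criteria; NOT a clinical algorithm.
def abnormal_count(vitals):
    flags = [
        vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0,
        vitals["heart_rate"] > 90,
        vitals["resp_rate"] > 20,
    ]
    return sum(flags)

def surveil(stream):
    """Yield an alert for any reading where two or more vital signs are abnormal."""
    for reading in stream:
        if abnormal_count(reading) >= 2:
            yield f"ALERT {reading['time']}: possible deterioration, notify clinician"

readings = [
    {"time": "10:00", "temp_c": 37.1, "heart_rate": 82, "resp_rate": 16},
    {"time": "11:00", "temp_c": 38.4, "heart_rate": 96, "resp_rate": 22},
]
for alert in surveil(readings):
    print(alert)
```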
Capsule is now planning to expand into long-term care facilities such as nursing homes and rehab centers–which are the major source of hospital readmissions, creating billions of dollars in costs–and even into the home. Hospitals and long-term care facilities may be able to pay for SmartLinx machines (which are priced fairly low at $1,600) by removing the devices they render obsolete. Long said that Capsule estimated a ten-fold ROI on the machine.
But patients are unlikely to pay for such equipment in the home, and Capsule does not rely on either hospitals or insurers to cover the cost. That’s where their move to software comes in. Much of the data they need can be collected on consumer devices.
Because the vendors usually collect the data and provide APIs for access to it, Capsule can hook up with the vendor site and take in data on the cheap. It can then report situations calling for intervention to providers through HIPAA-compliant secure messaging.
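Here is a minimal sketch of that pattern, using the third-party requests library against a hypothetical vendor endpoint; the URL, token, JSON shape, and the 2 kg threshold are all invented. The real work lies in each vendor's particular API and in the HIPAA-compliant messaging on the provider side.

```python
import requests

# Hypothetical vendor API; every real vendor has its own endpoints and auth scheme.
VENDOR_URL = "https://api.example-device-vendor.com/v1/patients/{pid}/weights"
TOKEN = "oauth-token-obtained-elsewhere"

def fetch_weights(patient_id):
    """Pull a patient's weight readings from the (hypothetical) vendor site."""
    resp = requests.get(VENDOR_URL.format(pid=patient_id),
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed to be a list of {"date": ..., "kg": ...} entries

def needs_intervention(weights, threshold_kg=2.0):
    """Flag a rapid weight gain (e.g., fluid retention in heart failure) for follow-up."""
    if len(weights) < 2:
        return False
    return weights[-1]["kg"] - weights[-2]["kg"] > threshold_kg
```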
The Capsule approach can be the foundation for analytics, and ultimately for both new research and for action to improve the quality of care.
Clinical research
The microscope is a world-changing invention (it led to the germ theory of disease, for instance) but it shows only a tiny sliver of the world at one time. That’s how clinical research is today in general. Researchers collect data from a few dozen patients and try to draw conclusions that can apply to everybody suffering from a condition.
Researchers need to enhance the microscopic view with more of a surveyor's grasp of the landscape. The key is data sharing, and many experiments in it are underway. Researchers have a lot of work to do to get the benefits of big data:
- Choosing data formats that facilitate sharing, as well as standards for collecting data
- Exchanging data (which can reach gigabytes in size) as well as the computer programs that process it
- Documenting the provenance of the data, such as who collected it and when (a small example follows this list)
- Preserving patient privacy
- Working out attribution and rewards for sharing, cleaning up, and handling the data–activities that go totally unrecognized in the scholarly journals that are academics’ meal tickets
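As a small example of the provenance point, a shared data file can travel with a manifest recording who collected it, when, and a checksum so downstream researchers can verify they are analyzing exactly the same bytes. The fields and file name here are illustrative.

```python
import hashlib
from datetime import datetime, timezone

def make_manifest(path, collected_by, collection_date, notes=""):
    """Build a provenance manifest for a shared data file, including a checksum."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,               # lets recipients verify the exact bytes
        "collected_by": collected_by,
        "collection_date": collection_date,
        "shared_on": datetime.now(timezone.utc).isoformat(),
        "notes": notes,
    }

# Example usage (the file name is hypothetical):
# manifest = make_manifest("trial_42_vitals.csv", "Example Clinic", "2014-05-01",
#                          notes="Deidentified; units documented in the codebook")
# print(manifest)
```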
We also need the results of all clinical trials, including those that failed. With well-curated data and results of clinical trials, the medical field can speed up the rate of discovery. Replication also becomes easier, and we can start to trust what researchers tell us again.
Quality improvements
Detecting the decline of a patient in the hospital is a fine use of analytics, but most health care takes place over a longer time period. Trends can be teased out of data taken from massive groups of patients over many years. So when can we start?
Patients can now view Medicare ratings of health providers. But we just don't have enough data to judge how well doctors do, and to pay them accordingly.
Modernizing Medicine is one of the companies at the forefront of making sense out of medical records. WIRED reports that Practice Fusion is taking on a similar task. Both are Software-as-a-Service (SaaS) companies that offer EHR services with lots of analytics layered on. I'll concentrate on Modernizing Medicine here, because I talked recently to its CEO Daniel Cane and CMO Michael Sherling, MD.
The first challenge they address is collecting data in a structured fashion. Structured data, the salvation of analytics in many industries, has turned into a term of opprobrium, if not a swear word, among doctors. Meaningful Use, directed at collecting statistics that are critical for quality improvement, has only added to the burden of data entry. But Cane and Sherling claim to have solved this problem in their product, Electronic Medical Assistant (EMA).
The reason most doctors hate structured data is twofold: the interfaces for entering it (pull-down menus, for instance) are clunky, and the same information must be entered repeatedly to satisfy different reporting requirements from different organizations, such as Meaningful Use.
But done properly, structured data entry can be quick and intuitive. Part of the solution lies in exploiting the advances in current devices (EMA runs on iPads) and software to minimize typing. EMA can also run in a web browser on a desktop system.
But a big part of the key is system intelligence: knowing what's in the database. For instance, when the EHR knows the possible values for a field, it can use predictive analytics, as EMA does, to propose completions for partially entered data. Cane reports that doctors enter data much faster into EMA than onto paper, the reverse of most EHRs. Furthermore, because it is aware of all the reporting requirements, EMA eliminates double entry of data and fills in fields automatically.
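EMA's internals aren't public, so the following is only a generic sketch of the idea: when the system knows a field's possible values and how often each has been chosen, a few typed characters are enough to propose the most likely completions. The diagnoses and counts are invented.

```python
from collections import Counter

# Possible values for a field, weighted by how often this practice has used them.
past_entries = Counter({
    "psoriasis vulgaris": 120,
    "psoriatic arthritis": 35,
    "pityriasis rosea": 12,
})

def suggest(prefix, values, limit=3):
    """Return the most frequently used values matching what the doctor has typed so far."""
    matches = [(v, n) for v, n in values.items() if v.startswith(prefix.lower())]
    return [v for v, _ in sorted(matches, key=lambda x: -x[1])[:limit]]

print(suggest("ps", past_entries))  # ['psoriasis vulgaris', 'psoriatic arthritis']
```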
Having captured all the clinical data in useful formats, EMA can really get going. It takes deidentified patient data from many providers and looks for patterns, as done by other sites ranging from the Practice Fusion and Athenahealth EHR services to the PatientsLikeMe patient alliance. Cane pointed out that, because they concentrate on half a dozen disciplines (dermatology, ophthalmology, etc.), Modernizing Medicine has access to unprecedented numbers of patients in narrowly defined categories, and therefore becomes a treasure trove for researching orphan diseases (rare conditions) in those categories. For that reason, they are popular with pharmaceutical companies on the lookout for participants in clinical trials.
The disciplines where Cane and Sherling have chosen to offer their product tend to treat well-defined diseases without many complications arising outside the discipline. In other disciplines, many patients demonstrate more intricate combinations of problems such as diabetes, heart disease, and lung disease. So the Modernizing Medicine approach is one stage in an evolution of analytics that has to learn to cross the borders between disciplines.
Cane said EMA exchanges data with other major EMRs in order to facilitate this next stage. But the health care system will take years to master the complexity of human illness.
Privacy is going to be a major concern as data gets shared. We need to stay ahead of the data miners who use sophisticated tools to re-identify patients. As much as we all favor open data, the health care field will need contractual agreements and the vetting of organizations that use data, in order to protect patients.
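One standard safeguard, shown here only as a simple illustration: before data is shared, check that every combination of quasi-identifiers (age band, ZIP prefix, and so on) appears at least k times, so that no record is distinctive enough to be linked back to a person. The rows and the choice of k are illustrative.

```python
from collections import Counter

def violates_k_anonymity(records, quasi_identifiers, k=5):
    """Return the quasi-identifier combinations shared by fewer than k records."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return [combo for combo, count in combos.items() if count < k]

# Hypothetical deidentified rows that still carry quasi-identifiers.
rows = [
    {"age_band": "60-69", "zip3": "021", "dx": "heart disease"},
    {"age_band": "60-69", "zip3": "021", "dx": "diabetes"},
    {"age_band": "30-39", "zip3": "945", "dx": "asthma"},   # unique: re-identification risk
]
print(violates_k_anonymity(rows, ["age_band", "zip3"], k=2))
```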
What, then, do I think the Department of Health and Human Services should do? I want them to take a look at the foundations of quality improvement and be realistic about what needs to be in place to make it happen.
On the positive side, they are laying the groundwork for data analysis by placing some tough demands on EHRs and their users. Under the current Stage 2 of Meaningful Use, they must support meaningful standards, such as the Consolidated CDA (C-CDA) for data exchange and the Direct protocol, and must prove that patients are actually downloading their records. The upcoming Stage 3 may (and should, in my opinion) add the ability to accept data from patients, such as the results from the devices discussed earlier. But these requirements may not be met.
The wealth of data captured by Capsule and Modernizing Medicine goes far beyond what Meaningful Use requires. And EHRs designed years ago as data silos will not easily convert to open systems. So far, certification has not produced systems that meet the needs of doctors, and we don't yet know whether they'll meet public health needs.
Dr. David Blumenthal, who served as the National Coordinator when his department began rolling out the Meaningful Use requirements, explicitly decided to push for the adoption of electronic records first, and require their interoperability later. This was expedient because existing records could be upgraded to meet Stage 1 of Meaningful Use in the fast-paced time frame set by law for Meaningful Use adoption. But it may turn into a pact with the devil for which we pay dearly.
HHS is also forcing providers to adopt a more specific standard for classifying illness, ICD-10, which will reduce some of the muddle in current health data.
But we still haven’t solved the problem of making sure we’re getting good data. Errors are rife in patient records. Combined with lack of structure, the dirtiness of patient data will postpone quality assessments, which in turn are needed to make fee-for-value meaningful.
Luckily, analytics have something to offer here. Statisticians usually have to handle data that is less than ideal, and are used to throwing out suspect items and compensating for inexact values. If we can just get the data–messy as it may be–we can start to find real patterns.
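A small illustration of that point: implausible values are dropped against simple physiologic bounds and the rest summarized with a median, which resists the outliers that typos and unit mix-ups introduce. The weights and bounds are invented.

```python
from statistics import median

# Recorded adult weights in kilograms; some entries are clearly data-entry errors.
recorded_weights = [82.0, 81.5, 8.2, 83.1, 815.0, 82.4]

PLAUSIBLE_KG = (30, 300)  # assumed physiologic bounds for an adult

cleaned = [w for w in recorded_weights if PLAUSIBLE_KG[0] <= w <= PLAUSIBLE_KG[1]]
dropped = len(recorded_weights) - len(cleaned)

print(f"kept {len(cleaned)} values, dropped {dropped} implausible ones")
print(f"median weight: {median(cleaned):.1f} kg")  # robust to any outliers that slip through
```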
So we need data in order to engender trust. Standards and data exchange, which arguably are the easiest of the many requirements in this article to meet, should be the health field’s top priority. Adopting structured data can come next. And when we start to uncover trends that make a difference to patients, that prove which treatments are effective, and that provide the basis for fair reimbursements, we can create a health care system everybody trusts.
This article was written by Andy Oram and first published in O'Reilly Strata. It is being republished by Open Health News under the terms of the Creative Commons License. The original copy of the article can be found here.