When I first joined a healthcare consulting practice in 2010, our team leaders were incredibly excited that health data would usher in a new era of improved care and clinical innovation. Data, they felt, was the key to unlocking a healthcare revolution. Data would turn mere patients into smart healthcare consumers. It would help move the industry from an inefficient, antiquated fee-for-service model towards a cost-effective, outcomes-based model. The focus on outcomes would promote competition, drive innovation, and improve the quality of care.
I left healthcare in 2013, but I am now back in the industry, working for a database company, no less. What am I hearing about healthcare data today? That it is going to enable a new era of improved clinical outcomes. That it is going to usher in a new era of smart healthcare consumers. That it will ultimately be used to support outcomes-based payments. This blog is the first of a two-part series looking at why data-driven healthcare is so hard to attain.
I can almost hear longtime healthcare people laughing at my naiveté. You expected substantive change in six years? This is healthcare! And indeed, when I recently spoke to the Chief Medical Officer of my new company, he told me he has been hearing these same themes since the early 1990s, when he first joined the managed-care movement. Data-driven clinical improvements. Smart healthcare consumers. Payments based on quality of outcomes. And so on.
“When you say it’s going to happen now
When exactly do you mean
See I’ve already waited too long
And all my hope is gone”
(The Smiths, “How Soon Is Now?”)
This anecdotal history is not meant to suggest that healthcare isn’t edging towards those very laudable goals. On the contrary, that is the point: healthcare is edging – about as slowly as it can under the force of great legislative and regulatory pressure. Despite some measurable signs of change (greater numbers of insured, increased EHR adoption, record levels of merger activity), the overall pace is painfully slow.
Figure 1. The many point-to-point integrations slow progress.
Those of us working in or around the industry often feel like we are walking through a dense fog. We continually see a small clearing in front of us, but when we reach that clearing, there is just a new clearing the same distance away, and still thick fog all around. Some minor details of the scenery change, but the overall territory remains mysterious and forbidding. Where are the smart healthcare consumers? Where is the pay for performance? There are rumors, anecdotes, perhaps even a close-at-hand example. But the fog never lifts and the allegedly changed landscape is never revealed.
Trying to understand the healthcare industry’s intense resistance to change has brought many brilliant people to grief. It is a complex problem of alignment, incentive, entrenched interests, and ingrained institutional culture. The only thing history reliably teaches about sweeping institutional change is that it finally comes when no one expects it, due to the sudden removal of barriers that only a few saw or understood. We can’t determine in advance when change will happen, but we can nonetheless ask: what might the critical barrier be?
This brings us to the topic of relational data. I believe that when the history of healthcare change is written in the future, relational data will turn out to be one of those semi-invisible technological barriers that few saw and that fewer still understood the importance of. I believe that the relational model is what keeps healthcare data locked up, isolated and unused, forestalling the promised healthcare revolution. There may be other factors blocking data sharing, like privacy concerns and organizational rivalries. But these factors pale next to the basic difficulty of exchanging and combining data that is stored in relational databases.
Figure 2. Extract, Transform, and Load (ETL) operations.
Over the course of 40 years, the relational database has become so common, so ubiquitous, that we no longer think about the trade-offs associated with using it. Relational technology was designed to solve some very specific problems of 1970s computing. First, it provided a uniform method for accessing data. Before the RDBMS, most applications were tightly coupled to the databases they used; accessing data meant integrating with the application that created or stored it. Relational data provided a single, direct method of access instead. Second, the RDBMS helped eliminate redundant and unnecessary storage. Storage in the 1970s was approximately three million times more expensive than it is now, meaning that every redundant piece of stored data had a significant cost. The relational model let organizations precisely model and normalize every data field, eliminating such redundancy.
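To make the normalization trade-off concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names (a hypothetical payer/claims pair) are illustrative, not drawn from any real system; the point is simply that the normalized form stores a repeated value exactly once.

```python
import sqlite3

# Illustrative sketch: normalization stores each fact once.
# The denormalized table repeats the payer name on every claim row;
# the normalized version stores it once and references it by id.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: the payer name is duplicated on every claim
cur.execute("CREATE TABLE claims_flat (claim_id INTEGER, payer_name TEXT)")
cur.executemany("INSERT INTO claims_flat VALUES (?, ?)",
                [(1, "Acme Health"), (2, "Acme Health"), (3, "Acme Health")])

# Normalized: the payer is stored once; claims hold only a foreign key
cur.execute("CREATE TABLE payers (payer_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE claims (claim_id INTEGER, "
            "payer_id INTEGER REFERENCES payers)")
cur.execute("INSERT INTO payers VALUES (1, 'Acme Health')")
cur.executemany("INSERT INTO claims VALUES (?, 1)", [(1,), (2,), (3,)])

# The repeated text now exists exactly once in storage
count = cur.execute(
    "SELECT COUNT(*) FROM payers WHERE name = 'Acme Health'").fetchone()[0]
print(count)  # 1
```

In the 1970s, when every duplicated string carried a real storage cost, this design discipline paid for itself; the question the rest of this post raises is whether it still does.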
Fast-forward to 2016, where we no longer have our original 1970s constraints. Applications are object-oriented and have multi-tier architectures with loose coupling between user interfaces, application services and data services. They communicate with each other via service interfaces, both within and across organizations. Storage has become mind-bogglingly cheap. And of course, the Internet and mobile have combined to create ever-greater quantities of every imaginable kind of data.
In this atmosphere of accelerating, modernizing, streamlining technology, the relational database stands out like a retro polyester tuxedo. It fits badly with everything around it, including today’s improved agile development practices. Today’s agile teams have learned to “deliver early, deliver often” in order to reduce uncertainty and risk. But they have had to work around relational databases to do so. The RDBMS almost dictates a waterfall approach insofar as it requires teams to model up-front and know every conceivable question that will be asked of the data in advance. If the questions change, the data must change – and changes equate to time, expense and missed opportunity.
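The up-front-modeling problem can be seen in miniature with sqlite3. The scenario below (a hypothetical patients table gaining a smoking-status field mid-project) is invented for illustration: a question the schema was not modeled for fails outright until a migration is written and deployed.

```python
import sqlite3

# Illustrative sketch: a relational schema answers only the questions
# it was modeled for up front. A new requirement means a schema
# migration first -- the cost agile teams repeatedly pay.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO patients VALUES (1, 'Jane Doe')")

# New requirement: track smoking status. The rigid schema rejects it...
try:
    cur.execute("UPDATE patients SET smoking_status = 'never' WHERE id = 1")
except sqlite3.OperationalError as e:
    print("schema change needed:", e)

# ...until a migration is written, reviewed, and deployed.
cur.execute("ALTER TABLE patients ADD COLUMN smoking_status TEXT")
cur.execute("UPDATE patients SET smoking_status = 'never' WHERE id = 1")
status = cur.execute(
    "SELECT smoking_status FROM patients WHERE id = 1").fetchone()[0]
print(status)  # never
```

A one-line ALTER TABLE is trivial here; in a production system the same change means migration scripts, downtime windows, and regression testing across every application that touches the table.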
Using an RDBMS also means doing lots and lots of extract, transform, and load (ETL). A recent study by TDWI estimated that ETL operations (Figure 2) account for 60-80 percent of the cost of a data warehousing project. In other words, most of the money is spent simply converting data and moving it around.
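A single ETL hop can be sketched in a few lines of Python. The field names and the CSV layout below are invented for illustration; the pattern, extracting rows from one system's export, renaming and retyping them for the target schema, and loading them elsewhere, is what real warehouse projects repeat dozens of times.

```python
import csv
import io
import sqlite3

# Illustrative sketch of one ETL hop: extract rows from a source
# system's CSV export, transform field names and types to the target
# schema, and load them into another database. Warehouses chain many
# such hops, which is where the cost accumulates.
source_csv = io.StringIO("PAT_ID,DOB\n123,1980-01-02\n456,1975-06-30\n")

# Extract: read the source export
rows = list(csv.DictReader(source_csv))

# Transform: rename and retype fields to the warehouse's conventions
transformed = [{"patient_id": int(r["PAT_ID"]), "birth_date": r["DOB"]}
               for r in rows]

# Load: insert into the target store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_patient (patient_id INTEGER, birth_date TEXT)")
conn.executemany("INSERT INTO dim_patient VALUES (:patient_id, :birth_date)",
                 transformed)
loaded = conn.execute("SELECT COUNT(*) FROM dim_patient").fetchone()[0]
print(loaded)  # 2
```

None of this code answers a clinical or business question; it only converts data and moves it around, which is exactly where the TDWI study says most of the budget goes.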
You’d think business and clinical sponsors would be up in arms about expensive data-moving projects that never produce business or clinical results. And often they are. But they don’t have any idea whom or what to blame. Often they simply blame their IT department, or the application vendor, or whomever else is in range. Very few ever think to blame their organization’s databases. In Part II of this post, we’ll see why they perhaps should.