Anyone who has wrestled with data architecture and information management on an enterprise scale understands the problem: things take a long time!
Any time a new data shape or source appears, changes will be required. Any time there is a new interpretation of a particular data element, changes will be required. Any time an action is required because of an interpretation, changes will be required.
No one change is inherently difficult. Coordinating many changes across many touchpoints is very difficult.
So, it takes a long time and is very expensive, not unlike rewiring your house – but having to do it on a perpetual basis.
Not to sound trite, but in the digital era, constant change is the new normal. It demands agile responses to new data, new interpretations, and new actions. And while many organizations might be agile in some respects, few are agile around facts and what they mean.
So, what does good look like? It’s being agile with data and its meaning.
Data agility is the ability to make simple, powerful, and immediate changes to any aspect of how information is interpreted and acted on.
Data agility is important in any digital business model. Much as a physical business wants to be agile with regard to supply chain, manufacturing, and distribution, so does any digital one.
The reason is simple: facts and what they mean can change, often very quickly. Examples abound of competent organizations that were caught flat-footed when facts and meanings changed quickly.
For example, the concept of “Russian investments” means something quite different now than it did a short while ago.
Anytime you are managing risk, you want data agility.
Life sciences may need to pivot their focus quickly in light of new information and health care concerns, such as a global pandemic.
Intelligence agencies may need to quickly reposition their information security posture after a new development, such as a public leak.
Once you get past the headlines, there is a vast universe of more pragmatic concerns: getting new products to market faster, taking exceptional care of customers, managing risk in better ways, and so on. All demand data agility.
Without a better way to store, share, manage and protect data along with everything we know about it, data agility will remain an elusive goal for many.
To cope, many organizations invest in three areas.
The first is a data layer, usually a mix of internal systems. This is how things get done on a granular level.
The second is an integration layer that tries to make the disparate pieces work more as a whole, from both a workflow and a reporting perspective.
The third is an interpretive layer of knowledge, guidelines, dictionaries, ontologies, knowledge graphs, and other artifacts that help people interpret what the data might mean in context.
The challenge is simple: these are disconnected investments. Facts live in one place, while their meaning lives somewhere else, or perhaps in people’s heads.
It will never, ever be agile when done this way.
By deeply integrating data (digital facts) with what we know about the data (encoded metadata) along with what the facts mean (semantic interpretations), data agility is created.
Data agility is created by connecting active data, active metadata, and active meaning.
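To make that concrete, here is a minimal sketch in plain Python – a toy triple store, not any vendor's API, and every name in it (txn42, AcmeBank, SanctionedExposure, and so on) is invented for illustration. The point is that the fact, its metadata, and its meaning live in one connected structure, so changing an interpretation is one new triple rather than a schema migration:

```python
# Toy illustration: facts, metadata, and meaning as triples in one store.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# Active data: digital facts
add("txn42", "type", "Transaction")
add("txn42", "counterparty", "AcmeBank")

# Active metadata: provenance and sensitivity live beside the fact
add("txn42", "source", "trading-system-A")
add("txn42", "classification", "confidential")

# Active meaning: interpretations are also just triples
add("AcmeBank", "type", "RussianInvestment")
add("RussianInvestment", "subClassOf", "SanctionedExposure")

def classes_of(entity):
    """All classes an entity belongs to, following subClassOf links."""
    seen = set()
    frontier = {o for s, p, o in triples if s == entity and p == "type"}
    while frontier:
        seen |= frontier
        frontier = {o for s, p, o in triples
                    if s in frontier and p == "subClassOf"} - seen
    return seen

def exposed_transactions():
    """Transactions whose counterparty currently means 'sanctioned exposure'."""
    return sorted(
        s for s, p, o in triples
        if p == "counterparty" and "SanctionedExposure" in classes_of(o)
    )

print(exposed_transactions())  # → ['txn42']
```

Note what a change of meaning costs here: adding the single triple `("OtherBank", "type", "RussianInvestment")` would immediately pull every transaction with that counterparty into the exposure report, with no pipeline or schema changes.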
Depending on which aspects of the enterprise data challenge one has personally wrestled with, the results are – well – magical. It would be inaccurate to describe them differently, unless you prefer words like transformational, revolutionary, game-changing, and so on.
By removing substantial friction, new things are now possible.
The ideal candidates are use cases that involve (a) smart people making decisions of consequence by interpreting complex data, (b) important organizational knowledge that needs to be used everywhere, kept updated, and kept compliant, or (c) combinations of both.
If one were to take horizontal slices – that is, problems that everyone has – one could start with any aspect of information security.
With data agility, any infosec policy you can think of can be implemented – immediately and verifiably. More broadly, good infosec demands being able to interpret the meaning of facts quickly and authoritatively, and to act on them just as fast.
Next up, the vast landscape of data warehouses, marts, shares, reporting systems, and similar. The facts are there, but what do they mean? Data agility creates the capability to quickly construct individualized lenses on shared facts, and what they mean to each user.
Perhaps the next best industry target would be the burgeoning investment in analytical tools and platforms. Again, there is no shortage of facts for people to analyze – but is there a shared and trusted understanding of what the facts mean in context?
Shifting to specific industries, once again we have a deep well of fascinating use cases.
Use cases where incoming information must be immediately interpreted and acted on are always strong candidates.
So are those where organizational knowledge itself underpins the product or service.
And we often see both together.
Think of intelligence agencies, security, fraud, and other threat-detection functions; or financial services, life sciences, aerospace, and other areas where innovation and risk must be balanced.
There is no shortage of examples in the private and public sectors that want to get much better at interpreting facts and what they mean. They want to be able to make simple and powerful changes to how information is interpreted and acted on.
They want data agility.
If you’ve ever worked in a larger organization, you realize that there are things that might get done, and things that can’t get done.
You want to spend your time working on the former, not the latter.
Someone sees that there might be a better way to do things. But enormous friction makes it fall into the category of “can’t get done” – at least for now.
Perhaps someone will try to integrate a “solution” around the existing pillars, and that fails to reduce friction as well.
The historical evidence is that the new model – a semantic data platform – is almost always introduced into a compelling use case where all others have tried and failed. By keeping active data, metadata and meaning together, it delivers outsized results in a surprisingly short amount of time.
People notice, and are impressed. Another use case follows, and then another. After a while, a footprint is created of different functions and groups who are assembling and reusing organizational knowledge to create new business processes. As a result, they can easily share digital facts and what they mean amongst themselves.
New patterns of information management and knowledge sharing start quickly replacing old ones. New things are now possible, as substantial friction has been removed.
Some might think this is a technology argument – that there is now a better way to manage both data and its various interpretations, and that would be correct. But it also is a leadership argument – that any organization needs a better way to manage facts and what they mean.
If you are on the front lines of helping to shape modern digital business models, you will want to know that there are new ways of doing things.
We’ve talked about the importance of facts and what they mean. Connecting active data, active metadata, and active meaning creates data agility. Data agility is the ability to make simple, powerful, and immediate changes to any aspect of how information is interpreted and acted on.
Data agility is important literally everywhere you look, with the only variable being “just how important is it?” There are powerful adoption patterns both horizontally and vertically, as shared knowledge about data can be a useful thing.
As a result, the early adopters are now managing information in a new and very different way than their peers.
This new way could be described as knowledge-centric vs. data-centric. In their eyes, what is known about the information becomes perhaps more important than the information itself.
Going forward, we’ll spend some time looking at the inner workings of a semantic data platform, and how it is substantially different from familiar data, metadata, and semantic tools. You’ll see familiar concepts, just used in a new way.
We also want to spend some time on organizational impact, and lessons learned. Anytime a new way of doing things is introduced into an organization, heavy lifting is required.
Finally, there are wonderful implications for current and future digital business models, which are very much worth exploring.
Download our white paper, Data Agility with a Semantic Knowledge Graph.