
How Do We Know What We Know?

We all believe there are things we know, and things we don’t. But how do we decide between the two?

One theory is that we see patterns that are then reinforced using feedback: the more a “pattern” of knowledge works to explain things, the more we’re inclined to “know” it and want to apply it. Predictive and explanatory patterns get reinforced, unproductive patterns are re-evaluated in a search for better ones.

Viewed this way, it’s a pretty good mechanism, but not a perfect one. Indeed, the literature is full of ways that we as humans carry pre-wired biases that make this harder in practice than it might sound. For example, about 15-20 years ago we started to realize that data scientists might have a better view of facts and interpretations than the HiPPO in the room: the Highest Paid Person’s Opinion. It turned out that facts and interpretations matter.

Shared Organizational Knowledge

Organizational knowledge has many of the same challenges. Our organizations “know” things, but exactly how do we decide between “what is known” and everything else? When presented with new information, how might our explanatory models change, avoiding as many human imperfections as we can?

There is wide consensus that sharing, improving, and reusing organizational knowledge is best done using a semantic knowledge graph, or SKG. This data structure encodes the facts, relationships, meanings, and interpretations that collectively represent knowledge.
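To make the idea concrete: at its simplest, an SKG can be pictured as a set of subject-predicate-object statements that can be queried by pattern. The sketch below is purely illustrative and assumes nothing about any particular product’s API; all class and fact names are invented.

```python
# A minimal sketch of a semantic knowledge graph: facts stored as
# (subject, predicate, object) triples, queried by pattern matching.
class KnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        """Record a fact, e.g. ('LDL', 'is_a', 'lab_test')."""
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return every triple matching the fields that are not None."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = KnowledgeGraph()
kg.add("LDL", "is_a", "lab_test")
kg.add("LDL", "measures", "cholesterol")
kg.add("cholesterol", "relates_to", "heart_health")

# Ask: what do we know about LDL?
facts_about_ldl = kg.query(subject="LDL")
```

Real SKGs add shared vocabularies, inference rules, and provenance on top of this triple structure, which is what turns isolated facts into reusable knowledge.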

Any organization that is built on a common substrate of shared organizational knowledge – along with the data used to create that knowledge – is in an enviable position, competitively speaking. They can learn and apply new insights faster, and remember past experiences more effectively.

These organizations tend to succeed for many of the same reasons really smart people tend to succeed – they’re intellectually agile.

Although SKGs are certainly becoming more popular, many of the proposed approaches are essentially untethered from data reality. Clumps of abstract, codified knowledge get created, evolve, and are eventually deprecated for one primary reason: they are not well grounded in the observed data and known facts that were used to create the knowledge they attempt to encode.

It’s like reading someone’s opinion of my lab results vs. the lab results themselves. How do you know what you know?

In the case of my latest lab results, I can hop on to various websites, and figure out the context and what it might mean. I also bring considerable context – knowledge of my personal situation – in evaluating those lab results.

Maybe I agree with your generic lab work assessment, and maybe I don’t. To the extent you are offering your assessment – and can show your work – it’s helpful. I might even follow your advice.

Not to dwell on interpretations of ordinary lab results, but – it’s fair to point out that the medical community’s viewpoint on which tests, why they matter, what they mean – evolves considerably over time. New knowledge is created from old. In my situation, this is somewhat important, as there seem to be two different approaches to the whole lab work thing in my community, and I want to make an informed choice.

Feedback Loops

Science is mostly about feedback loops: propose a theory, test it, measure results, draw conclusions. A popular ad line for shampoo used to be “lather, rinse, repeat”. If you do science well, you can show how you got there: evidence in the form of data, and the reasoning applied to it.

Encoding knowledge in this way makes it highly reusable, and thus appealing.

Pragmatic aspects aside, I don’t understand why we would want anything less rigorous for our continually learning organizations. Useful insights should be well-documented and explainable to anyone at any time, along with the data used to create them.

“Black boxes” – functional mysteries, undocumented assumptions, “here be dragons”, etc. – should stand out like a sore thumb.

Back to science: it is done against a shared, reusable context of what is already known and largely agreed upon. That shared knowledge can change and evolve – that’s the point – but there is a well-established baseline of facts and applied reasoning that serves as a starting point.

Without that, any scientific activity is largely fruitless.

I’ll go as far as stating the obvious: without a shared, grounded facts+interpretations foundation, meaningful organizational learning at scale is near-impossible. There’s nothing really to “learn” from, in the sense of testable assertions and the facts used to create them. As a result, opinions – by themselves – tend to be treated as guilty until proven innocent.

What knowledge there might be usually hasn’t been well-codified, or is logically separated from the data and facts used to create it. Maybe it’s being done in organizational pockets, far from the mainstream. Maybe it’s not in reusable form.

Put that way, it’s not a great way to make progress.

The Human Angle

We like to know things, and get uncomfortable when we don’t. Think back to the last important meeting when someone asked a really big question, and no one had an answer.

Because we feel better when we think we know things, we naturally tend to defend that state of affairs. We instinctively challenge any new data that doesn’t fit our model well, and very often the motivations of the people who are presenting it to us. But unless we collectively work towards a new, shared explanatory model, progress will be difficult.

Something interesting happens when the shared goal shifts to encoding and reusing organizational knowledge. It’s not “who’s right, who’s wrong” anymore.

It quickly reorients to “what do we know, how do we know it – and where can we use it?”

Answering that question leads you to wanting a semantic knowledge graph and the data that created it. Being able to rewind a complete history of past inferences is also very helpful, almost like a Wikipedia page’s edit history describing how you got here. Back to feedback loops: it becomes easy to understand why you thought you knew what you knew at the time.
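The rewindable history idea can be sketched in a few lines: log each inference along with the facts it relied on and when it was made, so you can later replay why you believed something at the time. This is an illustrative sketch, not a description of any real product; the names and example facts are invented.

```python
# Sketch of a provenance log for inferences: each conclusion is recorded
# with its supporting facts and a timestamp, and can be replayed later.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Inference:
    conclusion: str
    supporting_facts: list
    made_on: date

@dataclass
class InferenceLog:
    entries: list = field(default_factory=list)

    def record(self, conclusion, supporting_facts, made_on):
        self.entries.append(Inference(conclusion, supporting_facts, made_on))

    def history_of(self, conclusion):
        """Replay every time we reached this conclusion, and what it rested on."""
        return [e for e in self.entries if e.conclusion == conclusion]

log = InferenceLog()
log.record("churn risk is rising",
           ["Q1 renewals down 8%", "support tickets up 20%"],
           date(2021, 4, 1))
log.record("churn risk is rising",
           ["Q2 renewals down 12%"],
           date(2021, 7, 1))

history = log.history_of("churn risk is rising")
```

In a real SKG this provenance would be attached to the graph itself, but the principle is the same: every assertion carries the evidence and the moment it was made.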

“Things that don’t fit” drive meaningful feedback, as anyone who’s sat through a Six Sigma class will understand. Outliers should spark curiosity, not frustration.

Scaling Shared Knowledge

In my enterprise IT vendor world, larger organizations are justifiably wary of smaller, more agile ones eating their lunch. Having worked for both smaller and larger ones, I can say that smaller ones definitely have an advantage in interpreting on-the-ground facts, building shared knowledge, and acting quickly on it.

They know what they know, and how they know it. No formalization is needed, as it’s a small team. That being said, their small size inherently limits their ability to make a big industry impact.

Larger IT vendor organizations can certainly move the needle, as they are big. Where they struggle is in gathering on-the-ground facts, interpreting them, making assertions, acting on them, measuring results, making changes, and so on.

They are much less sure about what they know, or how they know it. The larger the organization, the more endemic this situation becomes. Hopefully there is strong, visionary, and charismatic leadership in place that can show the path through the wilderness. If not, bad things may result.

At least for the industry segment I’ve always worked in, I would argue that scaled organizations demand scaled knowledge.

What about yours?

Chuck joined the MarkLogic team in 2021, coming from Oracle as SVP Portfolio Management. Prior to Oracle, he was at VMware working on virtual storage. Chuck came to VMware after almost 20 years at EMC, working in a variety of field, product, and alliance leadership roles.

Chuck lives in Vero Beach, Florida with his wife and three dogs. He enjoys discussing the big ideas that are shaping the IT industry.
