
Model Thinking

Why content modeling matters in the age of AI ... plus, fluid v. crystallized intelligence


Issue 12

Structure

Quick thoughts about how content lives in systems

In past issues, I’ve talked about different ways to think about content models (such as blobs versus chunks, the fidelities of content models, the power of content relationships, or granularity).

However, I haven’t talked much about why content modeling matters. Let’s fix that.

Here are a few reasons why content models are important:

  • All content has structure—some content more than others.
  • Content models make it possible to put content into a content management system.
  • They create consistency, which UX designers and developers need and users expect.
  • They make it possible to scale content.
  • Models enable repeatable processes.
  • Content models let content strategists bring new content strategies to life without cramming new content into content types that are meant for something else.
  • They start to encode meaning into content, supporting richer content experiences.
  • They lay the groundwork for artificial intelligence (AI) and a more knowledge-centric discipline.

Let’s unpack that last one.

Content models lay the groundwork for powerful AI.

Imagine we’re working in a file system. We’re in a folder with seven text files. (That’s a layer of structure to the content already.) Each text file contains a small amount of content. Here are the contents of the seven files:

  • John Collins
  • 7-11
  • Dairy
  • 100 Main Street, Anytown, USA
  • 200 Broadway Boulevard, Anytown, USA
  • Convenience store
  • Milk

Looking at those, we don’t really know why that content belongs in the same folder. We see some things that might group together, but we don’t know the meaning. Now, if a “robot” (how I used to talk about AI before we all talked about AI) were looking at that folder, it might also see some groupings. But it’s burning up processor cycles and still guessing.

Now, imagine we’re working in a content management system (CMS). The same content is in the CMS. But since a CMS is usually a relational database deep down inside, there are links between some of the seven entries.

  • John Collins connects somehow to milk and to 100 Main Street and 200 Broadway Boulevard.
  • 7-11 connects to convenience store, to dairy, and to 200 Broadway Boulevard.
  • Milk connects to dairy.

Better, right?

We start to get a better sense of how the pieces relate to each other, and we might start to understand their meaning a little better. Our bot friend, likewise, appreciates the connections. It doesn’t understand why the link exists, but it sees it. The bot is happier, and it’s using less computing power.
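To make the idea concrete, here is a minimal sketch of what the bot sees at this stage. The names and data structures are illustrative, not an actual CMS schema; a real CMS would store the entries as rows and the links as foreign keys. The key point the sketch shows: the links say *that* two entries are related, but not *why*.

```python
# The seven entries from the folder example, now living in a CMS.
entries = [
    "John Collins", "7-11", "Dairy",
    "100 Main Street, Anytown, USA",
    "200 Broadway Boulevard, Anytown, USA",
    "Convenience store", "Milk",
]

# Untyped links between entries: the relationship exists, but carries no meaning.
links = {
    ("John Collins", "Milk"),
    ("John Collins", "100 Main Street, Anytown, USA"),
    ("John Collins", "200 Broadway Boulevard, Anytown, USA"),
    ("7-11", "Convenience store"),
    ("7-11", "Dairy"),
    ("7-11", "200 Broadway Boulevard, Anytown, USA"),
    ("Milk", "Dairy"),
}

def related(entry):
    """Everything linked to an entry, in either direction, with no sense of why."""
    return sorted({b for a, b in links if a == entry} |
                  {a for a, b in links if b == entry})

print(related("Milk"))  # -> ['Dairy', 'John Collins']
```

The bot can now traverse neighbors cheaply instead of guessing at groupings, but it still can’t say whether “Milk” relates to “Dairy” as a category, a supplier, or something else entirely.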

This is what we’re doing when we content model: creating structure and relationships.

I like to view it as the foundation upon which the next level of sophistication comes, and it’s where content is going and needs to go in the age of artificial intelligence.

The emerging alternative to file systems and database-based systems is the graph. For content folks, the kind to focus on is the knowledge graph. To greatly simplify: in a graph, you have objects (content types and entries) connected by explicitly defined relationships, forming a grammatical-sounding subject-predicate-object statement, known as a triple.

As a graph, our example shows us this:

  • John Collins needs milk.
  • John Collins is located at 100 Main Street.
  • John Collins is near 200 Broadway Boulevard.
  • Milk is a type of dairy product.
  • 7-11 is a convenience store.
  • Convenience stores sell dairy products.
  • 7-11 is located at 200 Broadway Boulevard.

Now, both the humans and the AI bots know a lot more and can apply logic. So when John Collins asks “Where can I buy milk near me?”, the system can answer “Check 7-11 at 200 Broadway Boulevard.” The graph structure encodes content, knowledge, and data, so the AI can “reason” rather than guess or hallucinate, and with less computing effort.
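The reasoning above can be sketched in a few lines. This is a toy illustration, not a real triple store or query language; the predicate strings and the `where_to_buy` chain are assumptions made for the example. It shows how explicit predicates let the system chain relationships: item to category, category to store type, store type to store, store to address.

```python
# The example content expressed as subject-predicate-object triples.
triples = [
    ("John Collins", "needs", "Milk"),
    ("John Collins", "is located at", "100 Main Street"),
    ("John Collins", "is near", "200 Broadway Boulevard"),
    ("Milk", "is a type of", "Dairy product"),
    ("7-11", "is a", "Convenience store"),
    ("Convenience store", "sells", "Dairy product"),
    ("7-11", "is located at", "200 Broadway Boulevard"),
]

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def subjects(predicate, obj):
    """All subjects s such that (s, predicate, obj) is in the graph."""
    return [s for s, p, o in triples if p == predicate and o == obj]

def where_to_buy(person, item):
    """Chain relationships: item -> category -> store type -> store -> address."""
    for category in objects(item, "is a type of"):        # Milk -> Dairy product
        for store_type in subjects("sells", category):    # Convenience store
            for store in subjects("is a", store_type):    # 7-11
                for address in objects(store, "is located at"):
                    if address in objects(person, "is near"):
                        return f"Check {store} at {address}"
    return "No nearby store found"

print(where_to_buy("John Collins", "Milk"))
# -> Check 7-11 at 200 Broadway Boulevard
```

Notice that no single triple says “John Collins can buy milk at 7-11.” That answer emerges from following the typed relationships, which is exactly what the untyped CMS links couldn’t support.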

This example is modeled on the search functionality behind Google Maps. Initially, that graph was built by hand with a lot of human verification. Now, it is built heavily by AI.

Content professionals are well suited to understanding meaning and structure, and to building that foundation for AI.

Strong content models make strong knowledge graphs. Strong graphs lead to better, more explainable AI.

The only way out of this dilemma is a fundamental reengineering of the underlying architecture involved, which includes knowledge graphs as a prerequisite to calculate not only rules, but also corresponding explanations.


The Knowledge Graph Cookbook: Recipes that Work (Kindle/paperback) by Andreas Blumauer and Helmut Nagy, on the need for explainability to trust AI

Scuttlebutt

News from the UX design, content strategy, and content management communities

The digital experience platform (DXP) Sitecore recently named its fourth CEO since 2017 and second in a year. Originally founded as a CMS, the company has been trying to navigate industry shifts that include “headless” (where the display of content is separate from the content itself) and “composable” (where customers piece together the tooling they prefer instead of buying a single solution) as well as AI. CMS industry experts in the article and on LinkedIn suggest that the broader DXP focus for the company may be causing difficulty.

Top of mind

Things that are bouncing around in my head as I synthesize a range of ideas

Several years ago, a friend and colleague sent me a link to a podcast (which I’ve been unable to track down since) that has really stuck with me. I keep coming back to it as I think about where I want to go with my career and my business. The podcast discussed the ideas of fluid intelligence and crystallized intelligence.

My simple explanation of the two:

  • Fluid intelligence: your basic in-the-moment problem-solving
  • Crystallized intelligence: knowledge you gain over the years of applying fluid intelligence

Apparently, fluid intelligence peaks around age 27. Crystallized intelligence increases slowly, stays pretty stable, and then starts to decrease around age 65.

The podcast gave the example of lawyers. A young lawyer actively works cases, applying fluid intelligence in the courtroom. But as they age and their career advances, they become a partner in their firm and spend much less time in the courtroom. Instead, they start mentoring younger trial lawyers.

I’m well beyond that peak of 27 😱. I love a good problem, and in fact, I am very interested in some deep problems (like content management). But I wonder:

  • Am I leaning more on crystallized intelligence in my problem solving?
  • Should I be leaning into my crystallized intelligence more as I try to build a business?
  • What does it look like for me, with my background, to lean into crystallized intelligence in my next step, whether working for myself or for someone else?

John Collins

Thanks for reading!

Did someone forward you this email? Subscribe here

If you’re already a subscriber and you found value in something here, tell your friends and colleagues to subscribe!

Welcome to the 12 new subscribers who have joined us since the last issue of Model Thinking.

(As an Amazon Associate, I earn from qualifying purchases via affiliate links.)

Model Thinking

Whether you’re an executive who wants a content management system that enables business growth or a content professional looking to improve your content strategy and content modeling skills and grow your career, Model Thinking will help you learn, connect some dots, think differently, and get actionable tips.
