Model Thinking

AI is changing collaboration (and not for the better)


Issue 36

We’ve all seen articles about problems with AI hallucinations, the angst about job impacts, and the hustle-bros preaching to automate everything with AI. There’s another problem looming, though, and far fewer people see it.

I’m thankful for former colleagues D. Keith Robinson and Kevan Lin who have voiced this concern that I was also starting to talk about with folks in 1:1 calls and small groups.

AI is taking away our ability to collaborate.

Level setting

I’m not anti-AI. I’m anti-AI-allthethings-without-thinking-through-unintended-consequences.

Years ago, I made small contributions to a natural language processing (NLP) project, long before people were talking about large language models (LLMs). I’ve worked with machine translation (MT), and I’ve structured content to be both human and machine readable for many, many years.

As a solopreneur, AI has been a huge part of my journey. I use it for automating, ideating, outlining, and reviewing my work. I’ve used it for building tools. My business ventures would not be where they are without AI.

For clarity: I do not use AI to write consulting deliverables, and I do not let LLMs write Model Thinking. This is bespoke, hand-rolled thinking. (I do use LLMs for help with subject lines, preview text, and SEO descriptions, and image generation.)

What does collaboration look like in AI?

For most of the last decade in the software industry, collaboration has taken the form of standing at a whiteboard with people, maybe with loads of sticky notes and Sharpies. Or a Zoom call or a Miro board. Maybe a Confluence page with lots of comments.

Collaboration meant lots of people contributing lots of inputs.

Now with AI, everything shifts to outputs. You remix something that someone else generated. You get code out of a Git repository.

It’s no longer collaborative; it’s iterative.

You might point out that ChatGPT offers group chats. Honestly, I haven’t used a ChatGPT group chat, but I struggle with the chat-based interface of many LLM tools as a solo user. The responses spool up so many threads with outputs and different follow-up questions that quickly get buried as the chat continues. I imagine a group-based setting would be even harder to navigate.

I worked with Kevan Lin several years ago. I was a content designer and Lin was a UX designer. He recently posted, asking about the artifacts of product work.

“What’s the layer that we’re collaborating on?” Lin asked. Speaking of vibe-coded prototypes, he dialed in, asking “What happens to these prototypes after [they’re] done?”

D. Keith Robinson wrote about this also, calling it “the straight-up de-teamification and de-personification of work.”

The echo chamber problem

While I think part of the problem is that the tooling doesn’t make collaboration possible, Robinson goes another direction that is equally valid. “People, IMO, are doing more to damage our ability to work together than AI is,” he said.

The tech world is full of layoffs and job security angst as AI rides the hype cycle. I’ve seen first-hand a shift from teams tackling gnarly problems together to individual contributors needing to justify their existence.

Which I think feeds into what Robinson is calling out.

“I feel like all I see anymore is *I* did this. Look at *me* and what *I* did.
For me. Me. Me. Me.”

That “me, me, me” mentality leads directly into a second-order problem that I don’t see much talk about. I call it the echo chamber problem.

An echo chamber is an environment filled with things that reinforce existing positions.

If we look at social media as an example, we can clearly see the dangers of sophisticated tech-enabled echo chambers. When lots of like-minded people connect on social media, their algorithm feeds them content that it knows they like seeing. It feeds their confirmation bias.

That social media echo chamber limits robust debate about the pros and cons of complex topics. We know that the internet is not a great medium for nuanced communication and that flame wars often ensue. But at least there’s (theoretically) the check-and-balance of knowing that one’s social media posts will be seen by others.

And this is where the chat-based AI interface becomes a multi-dimensional echo chamber.

What was once a team with some level of diversity has increasingly become individuals, breaking off on their own to do work. We’ve reduced the diversity of experience and thought that comes as teams work to understand root problems and then begin building solutions.

The echo chamber is even worse, though, because not only have we isolated the worker and removed that diversity, but we’ve replaced it with an inhuman response that is predisposed to affirm our ideas and direction.

The future of collaboration

I often see both sides of issues, and this is no different.

My pessimistic side thinks that there is some sort of AI-induced collaboration collapse coming.

Maybe the collapse takes the form of a glut of self-built tools for narrow use cases (that may or may not work). Maybe it looks like LLM-generated busywork that reaches a comedic equilibrium of uselessness (like the Translation Party of days gone by). Or maybe it’s a dearth of enterprise tools that solve real-life needs. Maybe it’s a society that is even more challenged to have nuanced debate.

My optimistic side sees this challenge as a market opportunity for the AI tools to tackle. How might we build in collaboration prior to the input stage of using AI tools? I certainly don’t have the answer, but if anyone wants to talk about this, I’d be keen to hear from you.

I think people are better than all of this, and we’re not doomed to a dystopian future. Let’s talk more, let’s chop up ideas, let’s defeat the algo.

D. Keith Robinson on LinkedIn

Don’t make a six-figure mistake!

Before you embark on an AI transformation or a CMS replatforming, make sure you have the fundamentals in place. My readiness assessment tool looks at six dimensions to identify blind spots and put you on the path to shoring up those weaknesses before you spend money on the wrong tool—or a tool that won’t fix your problem.

Try it today at www.collinscontent.com/cms-assessment!

Model Thinking

Model Thinking is for people who work where content, systems, and design meet. Each issue connects ideas across content strategy, content modeling, and content management system design with a focus on what actually works in practice.
