"We're doing AI anyway - how much measurement is too much?"

I heard this comment just today, during a transformation planning session.

It came from a smart operator who's both visionary and tactical. The sentiment was this:

"Why create more work when we're already committed to moving fast?"

Fair question. We love thoughtfulness. We love pragmatism and resourcefulness.

Background: the menu of key performance indicators on the table for measuring AI's impact across different value domains seemed like overkill, so we were discussing:

What's too much?

What's too little?

Especially when building them into a dashboard could mean a significant lift.

Here's what I told them.

(And why Gartner's new CMO data validates this uncomfortable truth):

Measurement capability, not initial deployment, determines AI scaling success in the long run.

Fresh Gartner research, which I'll link below, surfaced this stat:

94% of CMOs deploy GenAI.

Yet 87% still fail at basic marketing fundamentals.

The high performers (top 19%) don't win with shinier AI tools.

They win because they solved the boring problems first:

→ Integrated governance frameworks

→ Business-aligned measurement systems

→ Cross-functional stakeholder alignment

→ Systematic capability assessment

My recommendation was intentionally minimal...

Pick one thing.

Pick one metric that accurately measures where GenAI is having impact.

Keep the lift light. Spend a day building a simple BI dashboard. We've got to build that muscle.
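If it helps to picture what a "light lift" could look like, here's a minimal Python sketch. The file name (genai_usage.csv), its columns, and the hours-saved metric are all hypothetical placeholders, not a prescription; the point is one agreed-upon metric, rolled up weekly, feeding whatever BI tool you already have.

```python
# Minimal one-metric tracker, assuming a hypothetical usage log
# (genai_usage.csv) with columns: date, team, hours_saved.
# The file, columns, and metric are illustrative placeholders only.
import pandas as pd

usage = pd.read_csv("genai_usage.csv", parse_dates=["date"])

# One number per week: estimated hours saved by GenAI-assisted work.
weekly = (
    usage.set_index("date")
         .resample("W")["hours_saved"]
         .sum()
)

print(weekly.tail(8))                      # the last two months at a glance
weekly.to_csv("weekly_genai_impact.csv")   # feed this into any light BI tool
```

Swap in whichever single metric your team agrees actually reflects impact; the mechanics stay this simple.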

Anything you can't do forever is, by definition, unsustainable.

- David Attenborough

Because one of the biggest failures at scale is not measuring value.

That's a capability we must build for the future.

Even if stepwise.

Even if we don't base decisions on those measures just yet.

Here's the strategic reality...

AI amplifies existing capabilities.

Without solid foundations, you're just scaling disorganization.

This pattern extends far beyond marketing.

Organizations rushing AI deployment while bypassing foundational measurement capabilities are creating technical debt that compounds at scale.

Sometimes the most courageous thing a leader can do is slow down.

Acknowledge what needs fixing.

Build one foundational piece at a time.

Master the basics. Then scale.

If you're a marketing leader, I'm curious: What's your POV on measurement infrastructure in your AI initiatives? What are you testing and learning? How would you approach it if building measurement infrastructure took time away from your actual initiatives?

P.S. I'm linking the latest Gartner report here. As an AI Inner Circle member, you can always access my public-facing report repository.

P.P.S. The Enterprise AI Method launches in 42 days. Pre-order the digital version in exchange for a free gift.
