Companies Deployed AI Without Checking If It Works

Here’s something that stuck with me: 89 percent of enterprises have adopted AI tools, but only about 23 percent are getting measurable value from them.

That leaves roughly four out of five whose AI projects are kind of… working. Or not. Nobody really knows.

How did this happen? How do four out of five companies deploy AI without actually checking if it’s working?

I think it’s because AI adoption happened too fast. The hype was so strong that companies just… started using AI. They didn’t go through the usual decision-making process.

Normally when companies adopt a new technology:

  1. Identify a problem
  2. Research solutions
  3. Pilot test
  4. Measure results
  5. Scale if it works

With AI, companies did:

  1. Heard about AI
  2. Thought “we need to do AI”
  3. Started using it
  4. ??? (we’re still here)

The measurement step, and the “does this actually solve our problem” question, got skipped. Or it got buried somewhere in the middle of the project.

This is creating a weird dynamic. Companies are investing heavily in AI. They’re hiring consultants. They’re buying licenses. They’re experimenting.

But they’re not systematically asking: “Is this better than what we were doing before?”

I think this is actually the biggest opportunity in the AI market right now. Not for AI vendors, but for AI auditors: companies that come in and help you measure whether your AI initiatives are actually working.

Right now, companies are flying blind. Someone who can give them clarity—“Your chatbot is working. Kill that recommendation engine project”—would be incredibly valuable.

The measurement problem is going to be more valuable than the implementation problem for the next few years.