(Part 3) Moving AI Forward in 2026 Without Boiling the Ocean

Jan 29, 2026

End of January is coming up, so… happy new year! That's the last time I'll write that until next year, since I'm probably well past the statute of limitations.

So, by now, a pattern should be clear.

In Part 1, we reframed AI as a tool to accomplish specific jobs-to-be-done, not a goal in itself. 

In Part 2, we talked about the foundational work required to turn raw data into context–so AI can actually help rather than confuse.

The natural next question I hear from leaders is:

“This all makes sense. But what do we do now?”

Not next year. Not after a massive re-platforming. Not after hiring an entire AI team. Now.

Progress beats perfection in operations

One of the biggest traps I see organizations fall into is waiting for the “right” AI moment. The perfect data. The perfect architecture. The perfect agent. The perfect roadmap.

Operations don’t work that way. Knowledge doesn’t work that way. 

Asset portfolios don’t pause while strategies mature. Equipment keeps failing. Weather keeps changing. Customers keep asking questions. Teams keep making decisions every day, whether the system is ready or not.

Progress in 2026 will not come from bold AI declarations. It will come from reducing friction in the decisions teams are already making.

Start where the work already exists

The fastest way to stall an AI initiative is to start abstract. The fastest way to move forward is to start with the work your teams are already doing today. 

Examples I hear constantly:

  • “What sites actually need attention this morning?”

  • “Explain why this site is underperforming without digging through five tools.”

  • “Prepare a clear update for the customer.”

  • “Decide whether this issue is worth another truck roll, or whether we should just replace the equipment.”

These are not AI problems. They are jobs-to-be-done. And they already have owners, workflows, and pain points.

If you can make one of these jobs meaningfully easier, you’re moving AI forward–even if no one calls it AI yet.
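To make that concrete, here's a deliberately simple sketch of the first job on that list. Every name, field, and threshold below is hypothetical–the point is that version one of “what sites actually need attention this morning?” can be a plain scoring pass over data you already have:

```python
# A deliberately simple sketch of the "what needs attention this morning?" job.
# All field names and weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    expected_kwh: float   # modeled production for yesterday
    actual_kwh: float     # metered production for yesterday
    open_alerts: int      # unresolved alerts on the site

def attention_score(site: Site) -> float:
    """Higher score = more worth a human's look this morning."""
    if site.expected_kwh <= 0:
        return site.open_alerts * 5.0
    shortfall = max(0.0, 1.0 - site.actual_kwh / site.expected_kwh)
    return shortfall * 100 + site.open_alerts * 5.0

sites = [
    Site("Maple Ridge", expected_kwh=1200, actual_kwh=700, open_alerts=2),
    Site("Cedar Flats", expected_kwh=900, actual_kwh=880, open_alerts=0),
]

# Rank the portfolio so the morning review starts at the top.
for site in sorted(sites, key=attention_score, reverse=True):
    print(f"{site.name}: score {attention_score(site):.1f}")
```

No model, no agent–just a ranked list that saves someone an hour of tab-hopping. The AI can come later, once the job and its data are pinned down.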

Human-in-the-loop is not a failure mode

There’s a lot of talk about autonomy when it comes to AI agents. In practice, autonomy is not where most organizations should start–and that’s okay.

In managing a portfolio or fleet of assets:

  • Exceptions are common.

  • Context changes.

  • Constraints matter.

  • Accountability still sits with people.

Human-in-the-loop isn’t a temporary crutch. It’s how trust gets built.

When teams can see why a system is surfacing an issue, ranking a site, or suggesting an action, they engage with it. They correct it. They improve it. Over time, the system gets better because it’s learning from real decisions—not hypothetical ones.
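Here’s a minimal sketch of that pattern (the names and fields are mine, not a prescription): the system proposes, shows its reasoning, and records the human’s call so later suggestions can learn from real decisions.

```python
# A minimal human-in-the-loop sketch: the system never acts on its own.
# It proposes, explains why, and records what the human decided.
# All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Suggestion:
    site: str
    action: str
    rationale: str   # the "why" a reviewer sees before deciding

@dataclass
class Decision:
    suggestion: Suggestion
    accepted: bool
    reviewer_note: str = ""

decision_log: list[Decision] = []

def review(suggestion: Suggestion, accepted: bool, note: str = "") -> None:
    """Capture the human call alongside the suggestion that prompted it."""
    decision_log.append(Decision(suggestion, accepted, note))

s = Suggestion(
    site="Maple Ridge",
    action="dispatch technician",
    rationale="Inverter 3 output down 40% vs. peers for 2 days; no curtailment logged.",
)
# The human disagrees -- and that correction is exactly the signal worth keeping.
review(s, accepted=False, note="Known comms issue; data is stale, not the inverter.")
```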

Where AI tends to help first (realistically)

In 2026, the most effective uses of AI I’m seeing are not fully autonomous agents running the show. They’re systems that help people think faster and more clearly: take a finite, well-curated slice of context and put an LLM like ChatGPT or Gemini to work on it. (There’s a small sketch of this after the list below.)

That often looks like:

  • Summarizing portfolio health across sites

  • Highlighting anomalies worth attention

  • Explaining issues in plain language

  • Surfacing patterns humans would miss

  • Drafting reports or recommendations for review
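As a rough sketch of that “finite context + LLM” pattern–assuming the openai Python SDK (>=1.0) with an API key in the environment, and a hypothetical portfolio snapshot–the first item on the list might start as little as this:

```python
# A minimal "finite context + LLM" sketch: hand the model a small, curated
# snapshot of portfolio data and ask for a plain-language summary for review.
# The snapshot fields are hypothetical; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

snapshot = """\
site, expected_kwh, actual_kwh, open_alerts
Maple Ridge, 1200, 700, 2
Cedar Flats, 900, 880, 0
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You summarize solar portfolio health for an operations lead. "
                       "Be concise; flag only what needs human attention.",
        },
        {
            "role": "user",
            "content": f"Yesterday's snapshot:\n{snapshot}\nDraft a 3-sentence morning summary.",
        },
    ],
)

# The output is a draft for a human to review -- not an autonomous action.
print(resp.choices[0].message.content)
```

Note the last line: the output is a draft for a person to review, which keeps this squarely in human-in-the-loop territory.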

These may sound modest, but they’re not. They create value for both the communicator and the receiver. And remember: messages rarely stop with the receiver–they get shared. Any simplification that distills the problem clearly compounds as it travels. (Think of the Telephone Game.)

They reduce decision load. They save time. They improve consistency. They upskill teams quietly, in the flow of work.

That’s how AI earns the right to do more–small wins earning trust.

Don’t forget the people side of the equation

One thing that still doesn’t get enough attention in AI discussions is the existing workforce.

Asset managers, O&M coordinators, analysts, and technicians are already acting as agents today–interpreting signals, making tradeoffs, and deciding what happens next.

AI initiatives risk stalling when they ignore the people side:

  • upskilling

  • change management

  • trust

  • …and how people actually work

A strong intelligence and context layer doesn’t just prepare data for AI. It prepares people for better decisions. It shifts time away from data admin and toward leveraging data for judgment, prioritization, and value-add work.

That’s not a side benefit. That’s arguably half the value right there.

What “good progress” looks like by the end of the year

By the end of 2026, success doesn’t need to look like a fully autonomous agent running operations.

Success looks like:

  • fewer alerts driving more action

  • faster understanding of what changed and why

  • clearer prioritization across sites

  • more confident conversations with leadership and customers

  • higher portfolio ROI, even if some individual projects see lower returns–thanks, prioritization!

  • teams that trust the system–and know when to challenge it

AI doesn’t eliminate complexity. It helps teams manage it better.

And when it’s grounded in real jobs-to-be-done, supported by context, and designed with people in mind, progress doesn’t feel like hype. It feels like relief.

Take that first step. We’re past the New Year’s resolutions stage.

Learn More Today

© LCOE.ai, Inc. 2025
