(Part 1) AI Is Not the Goal. Better Decisions Are.

Jan 7, 2026

As we head into 2026, it’s hard to avoid the pressure to “do something with AI.”

Last year, I heard non-stop permutations of the following:

  • “Our Board is telling us to use AI.”

  • Investors expect a point of view.

  • “We want an agent to do ‘it’.”

  • Teams are setting up AI task forces with relatively open mandates.

In many conversations I’m having, the starting point sounds similar: “We need AI,” or “We want an AI agent.” The challenge is that those statements describe a technology, not a problem. And without clarity on the problem, it’s difficult to know whether AI is actually helping or just adding another layer of complexity.

A more useful way to think about AI is not as a destination, but as a tool to accomplish specific jobs-to-be-done (JTBD; shout-out to the MBA professor who first introduced me to Clayton Christensen’s framework, a bedrock of product management).

What leaders are really trying to accomplish

When you strip away the buzzwords, most solar org leaders are not asking for AI for its own sake. They are trying to solve very practical problems:

  • Reduce the time teams spend reviewing dashboards and alerts

  • Understand which sites actually need attention ASAP because of liquidated damages risk or potential performance guarantee infractions

  • Explain underperformance clearly to customers, executives, or Boards

  • Scale portfolios without scaling headcount at the same rate, or even at all

  • Make decisions with more confidence and fewer surprises

These are not “AI goals.” They are operational jobs-to-be-done. AI may help accomplish them, but only if it is applied intentionally. AI is, after all, a technology. At my previous company, AI was about enabling speed, scale, and accuracy. It’s the same here and now.

The industry isn’t data-poor, it’s decision-overloaded

One of the misconceptions I often hear is that organizations need AI because they lack data. In reality, most asset management teams are surrounded by data, and by tools, too.

Monitoring systems/SCADA, weather feeds, financial models, contracts, and work order systems all generate more information than any one person can reasonably process, or at least process in a timely way and, quite frankly, without burning out after a few months.

The real challenge, then, is not access to data, but decision overload. It’s workflow inundation.

Teams spend hours each week interpreting alerts, reconciling conflicting reports, and debating what matters most. Multiply that across the number of sites, tools, and adjacent teams that asset managers have to work with.

As portfolios grow, that cognitive load increases. AI gets introduced as a potential solution, but without a clear understanding of which decisions it is meant to support.

Why “we want an AI agent” is rarely the right starting point

AI agents are compelling because they promise autonomy: systems that monitor, interpret, and act with minimal human involvement. But autonomy is not a job-to-be-done. It’s an outcome that only makes sense once the underlying work is well understood.

When teams jump straight to agents, a few things tend to happen:

  • Outputs feel generic or hard to trust

  • Recommendations require manual verification

  • Operators question whether the system understands real-world constraints

  • Leaders struggle to see clear ROI

This is not because AI is ineffective. It’s because the job has not been clearly defined.

Reframing AI around JTBD

A more productive approach is to start by asking: What work are people doing today that we want to improve?

For example:

  • “Help me prioritize which sites deserve attention this morning.”

  • “Explain why this site is underperforming in plain language.”

  • “Summarize portfolio health for leadership without manual reporting.”

  • “Reduce alert noise so the team can focus on what matters.”

Once the job is clear, AI becomes one possible tool, not the answer by default. In many cases, simply improving visibility, consistency, and context already creates meaningful value before automation is introduced.

Then you go truly next-level: you no longer ask the questions or make the requests at all. That happens only after an AI agent informs you that it created a work plan to address three sites this week because of material revenue and cost risks, and that it was able to do so because it also rescheduled an O&M team around inclement weather on Thursday. 🤯 Yeah, I know what you’re thinking… but let’s start grounded and get there.

What AI is actually good at today

AI excels at synthesis, pattern recognition, summarization, and explanation. It is far less effective when context is missing or when success criteria are ambiguous. This is why AI works best when it supports humans first, rather than attempting to replace them outright.

In practice, the most successful applications of AI in solar today are those that:

  • Reduce time spent interpreting information

  • Summarize what is going on with the portfolio over the last quarter

  • Provide clearer explanations, not just more data

  • Help teams focus attention, not automate decisions prematurely

This is progress, even if it doesn’t look like a fully autonomous agent.

Defining success differently

If AI is framed as the goal, it’s easy to declare success too early, or to become frustrated when expectations aren’t met. If AI is framed as a tool to accomplish specific jobs-to-be-done, success becomes easier to define and easier to measure. It also becomes easier to count the small wins that underpin successful transformations.

Success looks like:

  • Fewer alerts driving more action

  • Faster understanding of what changed and why

  • More confident decisions across teams

  • Less time spent stitching together information

In Part 2, I’ll explore what it actually takes to enable this. Specifically, the foundational work that needs to happen before AI can reliably support these jobs-to-be-done.

Learn More Today

© LCOE.ai, Inc. 2025