We build business applications. A big part of that is automating processes that people shouldn't be doing by hand. ERP workflows, integrations, bots. These tools do what they were built to do: execute predefined rules, move structured data from A to B, run the same process the same way every time. They're good at it.
What's changing is the kind of work that can now be automated. AI agents are software systems that can reason, adapt, and act with minimal human direction. They're starting to handle tasks that used to require judgment or the ability to deal with exceptions. That matters. But it doesn't mean you should automate everything, and it doesn't replace the need to understand your processes first.
The Old Model: Automate the Steps
Traditional automation follows a simple logic: define the steps, the triggers, the rules. The system executes them. Invoice matches the purchase order? Approve it. Support ticket says “refund”? Route it to billing. Whether it's an ERP workflow, a Zapier integration, or an RPA bot, the pattern is the same.
This works for high-volume, repetitive, structured work. But it breaks the moment something unexpected happens. A new invoice format, an ambiguous customer request, a supplier email that doesn't match the template. Traditional bots don't handle ambiguity. They follow instructions. When the instructions don't cover the situation, the process stalls. Someone has to step in.
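To make the brittleness concrete, here is a minimal sketch of the pattern in Python. The rules and ticket fields are illustrative, not from any real system: the point is that anything the rules don't anticipate falls straight through to a person.

```python
# Minimal sketch of rule-based routing: explicit rules, nothing else.
# Rule conditions and ticket fields are hypothetical examples.

def route_ticket(ticket: dict) -> str:
    """Route a support ticket by keyword matching."""
    subject = ticket.get("subject", "").lower()
    if "refund" in subject:
        return "billing"
    if "password" in subject:
        return "it-support"
    # No rule matched: the process stalls and someone steps in.
    return "manual-review"

route_ticket({"subject": "Refund for order 4411"})     # a rule covers this
route_ticket({"subject": "My parcel arrived damaged"})  # nothing covers this
```

The second ticket is a perfectly ordinary request, but because no rule anticipated it, the bot can only hand it back to a human.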
We see this a lot. The rule-based work gets automated. The messy, judgment-heavy work stays manual. And the gap between the two grows wider as the business gets more complex.
The New Model: Define the Outcome
AI agents work differently. Instead of scripting every step, you describe what you want done. The agent figures out how to get there: reading unstructured data, making decisions based on context, pulling in other tools as needed.
Take invoice processing. A traditional bot reads a structured PDF, matches fields to a template, routes the result. An AI agent can read an invoice in any format, even a photo of a handwritten one, pull out the relevant information, cross-reference it against your purchase orders, flag discrepancies, and draft a follow-up to the supplier. Format changes next month? The agent adjusts. No one has to reprogram anything.
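One way to picture the division of labour: the model handles the messy part (reading any format and producing structured fields), and everything downstream is ordinary code. A hedged sketch of the cross-referencing step, assuming extraction has already happened; the PO data, field names, and tolerance are invented for illustration.

```python
# Sketch of the agent's cross-referencing step. The hard part -- turning a
# photo or odd PDF into this dict -- is the model's job and is assumed done.
# Purchase-order data, field names, and the tolerance are illustrative.

PURCHASE_ORDERS = {"PO-1042": {"supplier": "Acme GmbH", "total": 1250.00}}

def check_invoice(extracted: dict, tolerance: float = 0.01) -> list:
    """Compare extracted invoice fields against the matching PO; list discrepancies."""
    issues = []
    po = PURCHASE_ORDERS.get(extracted.get("po_number"))
    if po is None:
        issues.append("no matching purchase order")
        return issues
    if extracted["supplier"] != po["supplier"]:
        issues.append("supplier mismatch: " + extracted["supplier"])
    if abs(extracted["total"] - po["total"]) > tolerance:
        issues.append(f"total differs from PO by {extracted['total'] - po['total']:+.2f}")
    return issues
```

A clean invoice comes back with an empty list; anything else becomes the content of the follow-up the agent drafts to the supplier.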
Microsoft frames this as a move from “systems of record” to “systems of action”, where agents don't just store data but interpret signals and initiate actions. That opens up a lot of business processes that were previously too messy or too reliant on human judgment to be worth scripting.
What's Working Right Now
Some of the numbers are already hard to ignore. Klarna's AI customer service agent handles the equivalent workload of 853 full-time agents, saving them an estimated $60 million a year. One financial institution cut report generation from 15 days to 35 minutes using a multi-agent system.
In procurement, Siemens deployed AI agents that reduced cycle times by 60% and generated 11% cost savings across 15,000+ suppliers. JPMorgan reports saving 360,000 legal hours annually through automated document review.
What ties these together: they all involve reading unstructured information, applying judgment, and acting on it. Work that people used to say you couldn't automate.
When It's Worth It, and When It Isn't
Not every process needs an AI agent. Put one in the wrong place and you'll create more problems than you solve. Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027, mostly because of runaway costs, unclear ROI, or governance that wasn't thought through. So where do they actually make sense?
AI agents make sense when:
- The work involves unstructured or variable inputs (emails, documents in different formats, natural language requests) that traditional automation can't handle reliably.
- Decisions require context, not just rule matching. “Is this expense reasonable?” is a different question to “Is this field populated?”
- Exceptions are the norm. If your team spends most of its time on cases that fall outside the standard workflow, an agent can absorb that work.
- The process is high-value but labour-intensive. Contract review, financial reconciliation, supplier evaluation. The cost of doing it manually is real.
Traditional automation is still better when:
- The process is well-defined, rule-based, and rarely changes. A bot or workflow engine is faster to deploy and easier to maintain.
- You need deterministic, auditable outcomes. If every decision has to trace back to a specific rule, scripted automation is more transparent.
- The data is already structured and clean. If inputs and outputs are standardised, adding AI reasoning is overhead you don't need.
In practice, you usually end up combining both. Traditional automation handles the predictable core. AI agents pick up the exceptions and judgment calls around the edges. As SS&C Blue Prism puts it: traditional automation “is more valuable than ever” as the foundation on which AI agents stand. The bot does the 70% that's straightforward. The agent picks up the 30% that used to land on someone's desk. The person who used to do that work can focus on the problems that actually need a human.
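The hybrid split above can be sketched in a few lines: rules run first and raise on anything they can't classify, and only those exceptions reach the agent. `run_agent` is a stand-in for a real agent call, and the invoice fields are assumptions for the example.

```python
# Sketch of the hybrid pattern: deterministic rules handle the standard
# cases, an agent absorbs the exceptions. `run_agent` is a placeholder
# for an actual AI agent call; invoice fields are illustrative.

def handle_with_rules(invoice: dict) -> str:
    if invoice.get("format") == "structured" and invoice.get("po_match"):
        return "auto-approved"
    raise ValueError("outside the standard workflow")

def run_agent(invoice: dict) -> str:
    return "agent-reviewed"  # stand-in for the agent's judgment call

def process(invoice: dict) -> str:
    try:
        return handle_with_rules(invoice)   # the straightforward majority
    except ValueError:
        return run_agent(invoice)           # what used to land on a desk
```

The design point is that the cheap, auditable path stays the default; the agent only pays its way on the cases the rules genuinely can't cover.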
The Part Most People Skip
AI agents are flexible enough to work around messy workflows, unclear responsibilities, inconsistent data. That sounds like a feature until you realise they're just papering over problems. The process looks automated. The underlying mess is still there.
Deloitte's research backs this up: nearly half of organisations cite data searchability (48%) and data reusability (47%) as primary challenges to their AI automation strategy. The problem is almost never the model. It's the data and processes underneath it.
Before you deploy an agent, map the process. Who does what, where the handoffs are, what decisions get made and why, where things get stuck. It's not exciting work. But it's the difference between automation that holds up and automation that just adds complexity on top of something that was already broken.
Getting Started
If you're thinking about trying this, don't start with your most critical process. Pick one where your team spends a lot of time on exceptions or unstructured information. Something where you can learn quickly and the stakes aren't too high if it goes wrong.
Map the process before you build anything. The steps, the decision points, the data sources, the places where things break. Keep a human in the loop, at least at first: let the agent handle preparation and routine actions, but keep people on the hook for high-stakes decisions, and expand the agent's autonomy as you build confidence. Measure from the start too. Time saved, error rates, throughput, whatever matters for that process. Without clear metrics you won't know whether it's working or whether you're just spending money on something that feels modern.
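Human-in-the-loop gating can be as simple as a threshold check before any action executes. A sketch under stated assumptions: the threshold, the confidence score, and the action fields are all invented for illustration, not from any particular framework.

```python
# Sketch of human-in-the-loop gating: the agent acts on its own only for
# low-stakes, high-confidence actions; everything else queues for review.
# The threshold, confidence field, and amounts are assumptions.

REVIEW_THRESHOLD = 500.00  # e.g. payments above this always get a human

def decide(action: dict) -> str:
    """Return 'execute' or 'human-review' for a proposed agent action."""
    if action["amount"] <= REVIEW_THRESHOLD and action["confidence"] >= 0.9:
        return "execute"
    return "human-review"

decide({"amount": 120.00, "confidence": 0.95})   # routine, agent proceeds
decide({"amount": 4200.00, "confidence": 0.97})  # high stakes, human decides
```

Loosening `REVIEW_THRESHOLD` over time is the "expand autonomy as you build confidence" step in code: one number to change, with the review queue itself giving you the metrics to justify changing it.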
The organisations getting the most out of AI agents right now aren't the ones with the fanciest technology. They're the ones that understand their processes well enough to know where an agent will actually help.