AI agents vs RPA vs traditional automation

Three different tools, three different jobs, three different cost curves. Picking the wrong one is how teams end up with brittle scripts that break every month and AI demos that never reach production. Here's the operator framework for choosing the right approach per workflow.

The three lanes

For any workflow you're considering automating, there are three categories of solution, and they don't overlap as much as the marketing makes you think.

Traditional deterministic automation

Code that runs against APIs, databases, queues, or event streams. Inputs map to outputs through explicit logic that you wrote. Cron jobs, ETL pipelines, webhook handlers, data validation rules, scheduled reports. Reliable, auditable, cheap to run, expensive only when the underlying system changes. This is software engineering's bread and butter.
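A minimal sketch of what this lane looks like in practice, using a hypothetical payment-event validator (the event shape, currency set, and field names are all made up for illustration):

```python
# The deterministic lane: explicit rules you wrote, no inference anywhere.
# Every input maps to an output you can predict and audit.

def validate_payment_event(event: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the event is clean."""
    errors = []
    if event.get("amount_cents", 0) <= 0:
        errors.append("amount must be positive")
    if event.get("currency") not in {"USD", "EUR", "GBP"}:
        errors.append("unsupported currency")
    if not event.get("account_id"):
        errors.append("missing account_id")
    return errors

# Usage: a clean event produces no errors; a malformed one names its faults.
clean = {"amount_cents": 4200, "currency": "USD", "account_id": "acct_1"}
print(validate_payment_event(clean))
```

Nothing here can surprise you, which is exactly the point: the failure modes are enumerable, and a wrong answer is a bug you can reproduce.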

RPA (robotic process automation)

Software that drives existing user interfaces (clicking buttons, typing into fields, reading screens) to automate processes that happen across systems without proper APIs. UiPath, Automation Anywhere, Blue Prism. Useful when the underlying systems can't be integrated any other way (legacy ERP, third-party portals, no API access). Brittle when those UIs change, which they do constantly.

AI agents

Software that uses a language model as a reasoning layer to decide what to do next within a defined toolset. Reads inputs, plans steps, calls tools, evaluates results, retries or escalates. Capable of handling unstructured data, ambiguous inputs, and decisions that traditional rules can't enumerate. Non-deterministic by nature; requires evaluation, monitoring, and human-in-the-loop on consequential actions.
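The read-plan-act-evaluate loop above can be sketched in a few lines. This is a toy illustration, not any vendor's API: `call_model` stands in for whatever LLM call you use, and the tool and its return value are invented for the example.

```python
# Sketch of an agent loop: a model picks the next action from a defined
# toolset, tool results feed back into the history, and a step budget
# forces escalation instead of looping forever.

def lookup_customer(email):
    return {"email": email, "tier": "pro"}   # stand-in for a CRM lookup

TOOLS = {"lookup_customer": lookup_customer}

def run_agent(task, call_model, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(history)        # {"action": ..., "args" or "result": ...}
        if decision["action"] == "finish":
            return {"status": "done", "result": decision["result"]}
        result = TOOLS[decision["action"]](**decision.get("args", {}))
        history.append({"role": "tool", "content": str(result)})
    return {"status": "escalated", "reason": "step budget exhausted"}

# A scripted stand-in for the model, so the loop is runnable here:
# first look the customer up, then finish.
def scripted_model(history):
    if len(history) == 1:
        return {"action": "lookup_customer", "args": {"email": "a@example.com"}}
    return {"action": "finish", "result": "customer is on the pro tier"}
```

Note what the non-determinism buys and costs: the model chooses the path, so you get flexibility on messy inputs, but you also need the step budget, the escalation branch, and logging of `history` for audit.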

The decision framework

Three questions, in this order, decide which lane fits.

1. Is the input structured or unstructured?

If your input is well-formed data (a JSON payload, a database row, a structured CSV) and the rules to handle it can be written down in advance, deterministic automation is the right tool. It's faster, cheaper, more reliable, and easier to debug.

If your input is unstructured (free-text emails, scanned invoices, customer chat, varied document formats), traditional rules will not cover the long tail of variation. This is where you need either an LLM-driven step (often called "intelligent document processing" or "intelligent triage") or a full agent.

2. Are the steps fixed or variable?

If the steps are always the same regardless of input, write deterministic code. A daily report that always pulls the same metrics from the same systems doesn't need an agent.

If the steps vary based on what the input contains (sometimes you look up a customer, sometimes you escalate, sometimes you fetch supporting data from a different system), that's where agents earn their keep. The reasoning step decides which tool to use given the situation.

3. Do you have API access to the systems involved?

If yes, write code (or build agents that call those APIs). If no, RPA may be the only option until the systems get APIs or get replaced.

RPA is rarely the long-term answer. It's a bridge while you negotiate API access, replace a legacy system, or wait for a vendor to ship integrations. Treat it accordingly.
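The three questions collapse into a literal decision function. This flattens some nuance (a workflow with unstructured input and no API access may need an agent plus an RPA bridge), but it makes the ordering concrete:

```python
# The decision framework as code: a sketch, not a product.
# API access is checked first because it constrains everything else.

def pick_lane(structured_input: bool, fixed_steps: bool, has_api: bool) -> str:
    if not has_api:
        return "rpa"              # a bridge until APIs exist, not a destination
    if structured_input and fixed_steps:
        return "deterministic"    # rules you can write down in advance
    return "agent"                # unstructured input or variable steps

# Usage: a daily metrics report vs. inbound email triage.
print(pick_lane(structured_input=True, fixed_steps=True, has_api=True))
print(pick_lane(structured_input=False, fixed_steps=False, has_api=True))
```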

Cost shape

The cost curves are different and this matters for ROI.

  • Deterministic automation: high upfront engineering cost (someone has to write the code), near-zero marginal cost per execution. Scales beautifully. Breaks when an upstream system changes shape; the maintenance burden is in keeping integrations current.
  • RPA: moderate upfront cost (often configured rather than coded), brittle middle phase (UI changes break automations regularly), high cumulative maintenance cost. Vendor licensing adds an ongoing fee. The "low-code" promise is real until the systems beneath the RPA layer change.
  • AI agents: moderate upfront cost (model selection, prompt design, tool integration, evaluation harness) and a real per-execution cost (inference plus tool calls) that is higher than deterministic code on a per-run basis but vastly lower than human labour. Cost scales with usage rather than complexity, which is unusual for automation.
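The "cost scales with usage" point is easiest to see as arithmetic. All the numbers below are invented for illustration; the shape of the comparison is what matters, not the figures:

```python
# Upfront-plus-per-run cost curves with made-up numbers.
# The crossover is the point: which lane wins depends on volume.

def total_cost(upfront: float, per_run: float, runs: int) -> float:
    return upfront + per_run * runs

# Hypothetical figures: deterministic has high upfront engineering cost
# and near-zero marginal cost; an agent is cheaper to stand up but pays
# inference on every execution.
DET_UPFRONT, DET_PER_RUN = 50_000, 0.0001
AGT_UPFRONT, AGT_PER_RUN = 20_000, 0.02

for runs in (100_000, 10_000_000):
    det = total_cost(DET_UPFRONT, DET_PER_RUN, runs)
    agt = total_cost(AGT_UPFRONT, AGT_PER_RUN, runs)
    print(f"{runs:>10,} runs: deterministic {det:>10,.0f}  agent {agt:>10,.0f}")
```

At low volume the agent's lower upfront cost wins; at high volume the per-run inference cost dominates and deterministic code wins. That crossover is why "process a million identical events a day" belongs in the deterministic lane.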

What agents are genuinely better at

The interesting case for AI agents isn't "take everything you used to RPA and do it with an agent". It's the workflows that previously required a person because the rules wouldn't compress into code:

  • Inbound triage and classification across messy channels (email, support tickets, contract documents, product enquiries). The variability of natural language is exactly what traditional rules struggle with.
  • Cross-system reasoning where a person currently has to look across CRM, finance, and ops systems to compose a response or decision. An agent with read access across the stack does this in one step.
  • Long-tail handling on tasks that are mostly automatable but have a 10% edge-case tail that previously kept a human in the loop. An agent can handle the predictable 90% deterministically and route the rest to a person, with a full audit trail.
  • Document understanding at the layer where structure, context, and semantics interact: contract review, invoice reconciliation, compliance evidence assembly, technical document QA.
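The long-tail item above is worth sketching, because it's the pattern teams most often get wrong by sending everything to the model. Here the ticket shape, the confidence threshold, and the classifier are all hypothetical:

```python
# The 90/10 split: explicit rules handle the predictable cases with no
# model call at all; a model-backed classifier handles the rest; low
# confidence routes to a person, with the label attached as context.

def route(ticket: dict, classify, threshold: float = 0.9):
    # Predictable case: pure rules, zero inference cost.
    if ticket.get("type") == "password_reset":
        return ("auto", "reset_flow")
    # Everything else: classify, then gate on confidence.
    label, confidence = classify(ticket["text"])
    if confidence >= threshold:
        return ("auto", label)
    return ("human", label)   # the long tail keeps a person in the loop

# Usage with a stand-in classifier:
print(route({"type": "password_reset"}, classify=None))
print(route({"text": "please refund order 42"}, classify=lambda t: ("billing", 0.95)))
```

The design choice to notice: the deterministic branch runs first, so the model only ever sees the traffic that rules couldn't handle.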

What agents are not the right tool for

Equally important: where a deterministic approach is unambiguously better, use it.

  • High-frequency, low-variance tasks. If you process a million identical events a day, write code. The per-execution cost of an LLM, even at 2026 prices, is wasted on a deterministic transformation.
  • Financial calculations with audit requirements. Anything that touches the general ledger should be deterministic. Auditors do not enjoy probabilistic reasoning.
  • Real-time control loops. Latency matters and LLM inference is slow relative to a switch statement. A trading system or a manufacturing control loop should not have a model in its hot path.
  • Anything where a wrong answer is unacceptable and the inputs are well-defined. Validate, write code, ship.

The hybrid pattern that wins

In practice, most production systems we ship combine all three:

  • An agent handles the unstructured front door: reading inputs, classifying them, routing them.
  • Deterministic code handles the structured operations that follow: database writes, API calls, calculations, notifications.
  • An RPA layer wraps any legacy system that doesn't have an API, with a deliberate plan to retire that bridge as APIs become available.
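Wired together, the three lanes form a pipeline along these lines. This is a sketch of the shape, not a real implementation: the classifier, handlers, and legacy bridge are all hypothetical stand-ins.

```python
# The hybrid pattern: an agent-style classifier at the front door,
# deterministic handlers for the structured work, and an RPA-style
# bridge as the fallback for the one legacy system without an API.

def handle(message: str, classify, handlers: dict, legacy_bridge):
    intent = classify(message)                 # reasoning layer (agent / LLM)
    handler = handlers.get(intent)
    if handler is not None:
        return handler(message)                # execution layer (plain code, APIs)
    return legacy_bridge(intent, message)      # bridge layer (RPA, to be retired)

# Usage with stand-ins for each layer:
handlers = {"invoice": lambda m: "posted to ledger"}
bridge = lambda intent, m: f"typed into legacy UI: {intent}"
print(handle("INV-42 attached, please process", lambda m: "invoice", handlers, bridge))
```

Retiring the bridge is then a one-line change: add a real handler for that intent and the RPA path stops receiving traffic.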

The agent is the reasoning layer, the deterministic code is the execution layer, and RPA is the patch where infrastructure hasn't caught up yet. This is what "AI-powered automation" actually looks like when it works in production.

The takeaway

Don't pick a category and force every workflow through it. Map each candidate workflow against the three questions: structured or unstructured input, fixed or variable steps, API access or not. The right answer falls out, and most of your real systems will use multiple lanes by the end. The mistake is treating "AI agents" as a marketing label for everything we used to call automation. They're a specific tool for a specific job. Used in their lane, they're transformative; used in the wrong lane, they're slower and more expensive than the thing they replaced.
