Why Most Systems Are Already Agentic

  • Published:
  • Author: Dr. Andreas Kyek
  • Category: Deep Dive
    [Hero image: Why most systems are already agentic, Alexander Thamm [at], 2026]

    Recently, I had a discussion with an AI coding assistant about an application that was clearly agentic. The assistant had built a small workflow involving an LLM call, and I questioned the setup: why was this not built as an Agent?

    The response I got was: “This is not an Agent. It’s just a pipeline. There’s no decision-making.” 

    At first glance, this sounds reasonable. A pipeline executes a predefined sequence of steps. An Agent, on the other hand, should decide what to do. Case closed. Or so I thought. But this reasoning quickly breaks down in practice. If you look at systems like Microsoft 365 Copilot and its so-called Agents, many of them don’t have explicit branching logic at all. They are not embedded in complex workflows, and they often perform fairly repetitive tasks. And yet, everyone, including Microsoft, calls them Agents. 

    So, what’s going on? In this article, I explore why many of the systems we build and use today are, in fact, agentic. In doing so, I argue that our prevailing definition of an agentic system rests on a fundamental mismatch. 

    Engineering vs Product Perspective 

    The root of the confusion lies in the fact that we are mixing two fundamentally different perspectives. 

    From an engineering point of view, an Agent is something that perceives state, makes decisions, and acts, often in a loop. This definition emphasizes explicit control logic: branching, planning, iteration. 

    From a product perspective, however, an Agent is simply a reusable AI-powered capability that acts on behalf of a user. It doesn’t matter whether the system contains explicit decision trees or not. What matters is that it performs a task autonomously. 

    Both perspectives are valid, but they are not the same. And this mismatch is exactly where the confusion starts. A more helpful way to think about this is not in binary terms (“Agent” vs. “not an Agent”), but in terms of levels of agency.

    Defining Agents by their Level of Agency

    Level 0 — Pipeline 

    • Fully deterministic
    • Fixed sequence of steps
    • No model-based reasoning

    Level 1 — LLM-powered tool 

    • Single LLM call or fixed chain
    • Reasoning happens inside the model
    • No explicit control logic in code

    Level 2 — Reactive agent 

    • Chooses next steps dynamically
    • Selects tools or branches

    Level 3 — Autonomous agent 

    • Plans, iterates, maintains state
    • Multi-step reasoning and execution 

    This spectrum hints at something important: agency is not a switch. It’s a gradient. 
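    The four levels above can be sketched as a simple classification. The enum and the three yes/no questions below are illustrative, not an established rubric — a minimal way to make the gradient concrete:

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """The four levels from the list above, ordered by increasing agency."""
    PIPELINE = 0          # fully deterministic, fixed sequence of steps
    LLM_TOOL = 1          # single LLM call or fixed chain
    REACTIVE_AGENT = 2    # chooses next steps or tools dynamically
    AUTONOMOUS_AGENT = 3  # plans, iterates, maintains state

def classify(uses_llm: bool, dynamic_control: bool, plans_and_iterates: bool) -> AgencyLevel:
    """Rough classification by three questions; the highest matching level wins."""
    if plans_and_iterates:
        return AgencyLevel.AUTONOMOUS_AGENT
    if dynamic_control:
        return AgencyLevel.REACTIVE_AGENT
    if uses_llm:
        return AgencyLevel.LLM_TOOL
    return AgencyLevel.PIPELINE
```

    Because the levels form an ordered scale rather than disjoint categories, an `IntEnum` fits: you can compare levels directly (`AgencyLevel.LLM_TOOL > AgencyLevel.PIPELINE`).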

    Now comes the uncomfortable truth. The moment you introduce an LLM, you introduce implicit decision-making. Even a simple extraction prompt is not just “execution.” The model decides what is relevant, how to interpret ambiguous input, and how to structure the output. That is judgment.
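    To make this concrete, here is a minimal sketch of such a “simple extraction step.” The function, prompt, and schema are hypothetical (`llm_call` stands in for any LLM client); the point is that two plausible models can answer the same ambiguous document differently, which is exactly the implicit judgment described above:

```python
import json

def extract_invoice_total(document: str, llm_call) -> dict:
    """A 'simple extraction step': one prompt in, structured JSON out.
    llm_call is a placeholder for any LLM client; prompt and schema are illustrative."""
    prompt = (
        "Extract the invoice total as JSON with keys 'amount' and 'currency'.\n"
        f"Document:\n{document}"
    )
    return json.loads(llm_call(prompt))

# Ambiguous input: two totals appear. Which one is "the" total is a judgment call
# the model makes implicitly -- nothing in the code decides it.
doc = "Subtotal: 90.00 EUR\nTotal incl. VAT: 107.10 EUR"

# Two stub "models" that both answer plausibly, but differently:
model_a = lambda p: '{"amount": 107.10, "currency": "EUR"}'  # picks the gross total
model_b = lambda p: '{"amount": 90.00, "currency": "EUR"}'   # picks the net subtotal
```

    The pipeline code is identical in both runs; only the model's interpretation differs. That interpretive gap is what the strict "pipeline" label hides.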

    So, if we stick to the strict engineering definition, we end up in a strange situation. Either we ignore this implicit decision-making, or we admit that many systems we casually call “pipelines” are already Agentic. This is exactly the dilemma most teams run into. 

    Why the Presence of an LLM makes a System Agentic 

    At this point, it is worth taking a clear and consistent stance: As soon as an LLM is involved, the system is Agentic.

    This may sound radical at first, but it is the most coherent position. The presence of an LLM means that the system is no longer fully deterministic. It interprets context, makes implicit decisions, and shapes outcomes in ways that are not explicitly encoded in code.

    This is enough to call it Agentic. Interestingly, this aligns very well with how modern products behave, whether they formalize it or not.

    Once you accept this, the real question shifts. It is no longer about labelling systems as Agents or not. Rather, the real question becomes: How do we design systems cleanly, once we accept that they are Agentic?

    A simple and effective answer is to separate control from execution. At the system level, you acknowledge that the application is Agentic. This means you design for it properly: with logging, guardrails, evaluation, and all the necessary operational considerations.

    At the same time, you do not force every part of the system into an Agentic mold. Instead, you distinguish between two types of building blocks:

    • Agent nodes, which contain LLM calls and perform interpretation, judgment, or reasoning. These are responsible for understanding the task, evaluating information, and shaping the outcome.
    • Tool nodes, which are purely deterministic. They execute well-defined operations such as database queries, API calls, file transformations, or rule-based logic — without any interpretive responsibility.
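    The two building blocks can be sketched in plain Python. This is a minimal illustration, not a framework API: `llm_call` is a placeholder for whatever client you use, and the per-call log on the agent node stands in for the logging and evaluation hooks mentioned above. Tool nodes stay purely deterministic; only agent nodes touch the model:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolNode:
    """Purely deterministic step: same input state, same output state."""
    name: str
    fn: Callable[[dict], dict]

    def run(self, state: dict) -> dict:
        return self.fn(state)

@dataclass
class AgentNode:
    """Step that delegates interpretation to an LLM. Every call is logged,
    because the system as a whole must be treated as agentic."""
    name: str
    prompt: str
    llm_call: Callable[[str], str]
    log: list = field(default_factory=list)

    def run(self, state: dict) -> dict:
        answer = self.llm_call(self.prompt.format(**state))
        self.log.append((self.prompt, answer))  # hook for guardrails/evaluation
        return {**state, self.name: answer}
```

    A workflow then composes both kinds of node; the deterministic parts stay cheap and testable, while the interpretive parts are isolated and observable.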

    This leads to a very clean principle: Not every node must be an Agent, but once an LLM is involved, the system is Agentic.

    This distinction gives you the best of both worlds. You get architectural consistency at the top level, while still preserving efficiency, determinism, and clarity where it matters.

    More importantly, it helps avoid two common failure modes. The first is the “it’s just a pipeline” illusion. By pretending that LLM-based systems are still deterministic, teams underestimate complexity and skip essential guardrails. The second is the opposite extreme: turning everything into an Agent. That leads to unnecessary LLM usage, higher costs, increased latency, and reduced transparency without adding real value. 

    Conclusion 

    The definition debate seems trivial at first, but it helps to understand where control resides. A pipeline executes predefined steps. An Agentic system contains interpretation. And in modern AI systems, interpretation begins the moment an LLM is involved. So, indeed, even the seemingly “simple extraction step” is already part of an Agentic system. The real design question is no longer: “Is this an Agent?” Rather, the question becomes “Where do we allow judgment and where do we enforce determinism?” If you answer this question well, the respective architecture will follow.

    If you want to dive deeper into AI Agents and see what Agentic AI applications look like in practice, meet us at NAICE2 on April 15th, 2026. The event is themed “The real world of Agentic Business” and brings together experts across industries. There is still time to register: get your ticket on the official NAICE website.

    Author

    Dr. Andreas Kyek

    Andreas is a Senior Principal Data Scientist and has been with [at] since April 2022. He brings over 20 years of experience in semiconductor manufacturing and is an expert in anomaly detection and predictive maintenance. Since the emergence of large language models, he has increasingly focused on agents, data processing through agents, and especially the design of multi-agent systems – both using established libraries and building them from scratch in plain Python.
