No APIs, No AI: Why API Access Is Critical to Agentic Systems

The latest large language models (LLMs) still get most of the buzz, but for those focused on the real-world potential of AI, the excitement over the past year has shifted toward the promise of intelligent apps powered by LLMs, otherwise known as AI agents.
Once people grasp the concept of agentic AI, the notion of apps that can independently learn, make decisions, and take action captures the imagination. The possibilities for AI-powered personal assistants, coding assistants and vertical apps, ranging from healthcare screeners to hospitality agents, are endless.
AI agents will soon automate many aspects of our lives. The automation they can deliver falls into three categories:
Cognitive automation emulates human thinking, such as the generative modeling behind AI chatbots that synthesize original text and images.
System automation not only replaces human system management but also eliminates static, predefined responses, enabling systems to self-diagnose and self-heal when they fail.
Physical automation will enable robotic systems to navigate the real world, but the development of foundational models for that purpose is still in progress.
For AI agents of any type to be useful, they need to interact with other apps, whether to augment the underlying LLM by fetching additional data or to instruct another app to perform some action. That interaction may be with another AI agent or with a system that has no embedded intelligence at all.
In other words, AI agents can be thought of as intelligent services within distributed systems. And as with nearly all distributed systems, APIs enable interaction among services.
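To make this concrete, here is a minimal sketch of an agent enriching a static model with live API data. The names (`fetch_weather`, `call_llm`) are hypothetical stand-ins, not real provider APIs:

```python
# Minimal sketch: an agent augments a pre-trained LLM with fresh data
# fetched over an API. Both functions below are illustrative stubs.

def fetch_weather(city: str) -> dict:
    """Stand-in for a real HTTP GET to a weather service's API."""
    return {"city": city, "temp_c": 21, "conditions": "clear"}

def call_llm(prompt: str) -> str:
    """Stand-in for a call to an LLM provider's completion API."""
    return f"Answer based on: {prompt}"

def answer(question: str, city: str) -> str:
    # The agent enriches the prompt with live data the static,
    # pre-trained model could not know on its own.
    data = fetch_weather(city)
    prompt = f"{question}\nLive data: {data}"
    return call_llm(prompt)
```

The pattern, not the stubs, is the point: the LLM stays frozen, while the agent's API calls supply whatever is current.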
The Vital Role of APIs
The interplay between AI and APIs is powerful. LLMs are static: Their training requires enormous compute resources and, once trained, they remain unchanged until the LLM provider releases a new version. By contrast, although AI agents run on top of LLMs, they are dynamic, augmenting their pre-trained models with new data via APIs as they complete their tasks.
One way of appreciating the dynamic nature of agentic AI systems is to contrast them with conventional microservices-based applications. With the latter, although each microservice has an API, interactions among services are predetermined by the developer or application architect. With agentic AI, the AI agent itself decides which services or agents to connect with depending on the task at hand.
The AI agent evaluates various factors, identifies patterns based on its training data and determines the most reasonable next step to achieve the best possible outcomes. It then reaches out to API-accessible services to enrich responses or perform various actions to fulfill the user’s request. Already, some service providers — such as Stripe and Amazon — have released APIs tailored for agent-based workflows, while others are leveraging the Model Context Protocol (MCP) to make their APIs accessible to LLMs.
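The contrast with hardwired microservice routing can be sketched in a few lines. Here the tool registry, the tool names and `choose_tool` (which stands in for the model's function-calling decision) are all hypothetical:

```python
# Sketch of dynamic tool selection: the agent, not the developer,
# decides at runtime which API-backed tool to invoke. The registry
# and the decision logic are illustrative stubs.

TOOLS = {
    "get_invoice": lambda args: {"invoice_id": args["id"], "total": 42.0},
    "refund": lambda args: {"refunded": args["id"]},
}

def choose_tool(task: str) -> tuple:
    """Stand-in for the LLM choosing the next step from the task text."""
    if "refund" in task:
        return "refund", {"id": "inv_123"}
    return "get_invoice", {"id": "inv_123"}

def run_step(task: str) -> dict:
    name, args = choose_tool(task)
    return TOOLS[name](args)  # route to the chosen API at runtime
```

In a conventional microservices app, the equivalent of `choose_tool` is fixed in code by the architect; in an agentic system, it is a model decision made per request.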
Naturally, an AI agent should only tap trusted APIs. One place to find them is the Postman API Network, which offers the world’s largest catalog of verified public APIs from trusted vendors, available to LLMs through facilitated code generation or hosted as MCP servers. Within an organization, multiple engineering and product teams also collaborate via private APIs. Depending on the nature of the agent and the user prompt, AI agents may tap combinations of both public and private APIs to initiate actions or generate responses.
Managing Agentic AI Risks
Agentic AI raises the stakes for AI outcomes. Today, we often tolerate an occasional hallucination or piece of false information from AI chatbots, given the limits of their training data. An error has radically different implications when, say, AI agents are responsible for managing cloud infrastructure or interacting with customers in a manner that sounds human.
That’s why there will be a gradient of agents. Some of them will be trusted to be completely autonomous, while others will require a human verifier in the loop, with an approval step and safeguards in place around the approval process. Developing guardrails to determine what agents can and can’t do should be considered a minimum; however, such guardrails require research and development, and we are still in the experimentation phase.
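One simple form such a guardrail can take is an approval gate: low-risk actions execute autonomously, while high-risk ones wait for a human verifier. The risk policy and action names below are purely illustrative:

```python
# Sketch of a human-in-the-loop guardrail. Actions on the high-risk
# list are blocked unless an approver callback signs off; everything
# else runs autonomously. The policy here is an illustrative example.

HIGH_RISK = {"delete_cluster", "issue_refund"}

def execute(action: str, approver=None) -> str:
    if action in HIGH_RISK:
        # Require an explicit human approval step for risky actions.
        if approver is None or not approver(action):
            return f"blocked: {action} awaits human approval"
    return f"executed: {action}"
```

Real systems would add audit logging and safeguards around the approval step itself, but the gradient of autonomy starts with a check like this.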
Rigorous testing of AI agent systems is essential. AI agents can take autonomous actions, so if those actions go awry, they can cause severe damage, particularly if they are compromised or poorly designed. Developers need to test both system and user prompts, benchmark response times, and compare LLMs to choose the best one for a given agentic system. Once selected, the LLMs underlying AI agents must be fine-tuned to deliver the best, most predictable results.
The APIs within an agentic system must be fully protected. To prevent API misuse, robust authentication, authorization and rate limiting must be in place for both AI agent APIs and the APIs they access when calling data or functions offered by conventional systems.
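Rate limiting, one of those protections, is commonly implemented as a token bucket in front of the API. This is a minimal sketch; the capacity and refill rate are illustrative, and production systems would enforce limits per caller and combine them with authentication and authorization:

```python
# Sketch of a token-bucket rate limiter guarding an agent-facing API.
# Each request spends one token; tokens refill at a steady rate, so
# bursts are absorbed but sustained overuse is rejected.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An API gateway would call `allow()` per request and return an HTTP 429 when it fails, keeping a misbehaving or compromised agent from hammering downstream services.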
Our Agentic AI Future
NVIDIA CEO Jensen Huang has said that “the IT department of every company is going to be the HR department of AI agents.” The analogy is not as implausible as it seems: Like humans, intelligent agents must observe rules, learn continuously and be provided with resources that enable them to thrive.
Organizations will develop and deploy dozens of AI agents over time. Depending on their remit, those agents may communicate with each other intensively on complex collaborative tasks, opening up a whole new world of intelligent systems. Multiple agents can run on top of the same LLM, while others may be built on different language models, including smaller models for specialized tasks.
We’re shifting from a paradigm where everything is prescribed and static to one where AI agents are essentially thinking, with a world of API-accessible data and functionality for them to augment their capabilities. Developers will no longer need to write code for every step of the process. And in those shiny new agentic systems developers create, without APIs, there’s no AI.