Beyond Glue Code: Why 'Agentic Primitives' Are the New Commodity in AI Development

The AI development landscape is shifting beneath our feet. Just a few months ago, building an AI agent meant spending weeks stitching together custom code: connectors for every API, hand-rolled logic for conversation state, bespoke tool-calling implementations. Developers called it "glue code," and it was the hidden tax on every AI project.
That era is ending.
The industry is finally recognizing that the real value in AI development isn't in the glue. It's in what you build with it. This realization is driving a fundamental transformation in how we think about AI agents, with profound implications for businesses and developers alike.
What's Actually Under the Hood of an AI Agent
When you strip away the marketing hype and look at actual agent implementations, a surprising truth emerges: AI agents are remarkably simple at their core.
An agent is essentially a loop:
- Receive user input and conversation history
- Send that context to an LLM
- Check if the response includes a request to use a tool
- If yes, execute that tool and return to step 2
- If no, deliver the final answer to the user
That's it. Every sophisticated AI agent you've encountered, from customer service chatbots to coding assistants to autonomous research tools, is built on this fundamental pattern.
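The five steps above can be sketched in a few lines. This is an illustrative skeleton, not any provider's SDK: `call_llm` is a scripted stand-in for a real model call, and the tool registry holds one made-up tool.

```python
# Scripted stand-in for an LLM: first it asks for a tool, then it answers.
SCRIPT = [
    {"tool": "get_time", "args": {}},
    {"tool": None, "content": "It is 12:00."},
]

def call_llm(messages):
    # Pick the next scripted response based on how many tool results
    # are already in the conversation. A real agent would call a model API.
    n_tool = sum(1 for m in messages if m["role"] == "tool")
    return SCRIPT[n_tool]

TOOLS = {"get_time": lambda args: "12:00"}  # hypothetical tool

def run_agent(user_input, history=None):
    history = (history or []) + [{"role": "user", "content": user_input}]
    while True:
        response = call_llm(history)            # step 2: send context to the LLM
        if response["tool"]:                    # step 3: tool request?
            result = TOOLS[response["tool"]](response["args"])
            history.append({"role": "tool", "content": result})
            continue                            # step 4: loop back with the output
        return response["content"]              # step 5: final answer
```

Swap `call_llm` for a real API call and `TOOLS` for real functions, and this loop is structurally the whole agent.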
At the DataEngBytes 2025 conference, speakers described these systems as "digital squirrels": biased toward action and tool-calling, focused on incrementally achieving goals rather than spending extended time on any single thought. Squirrels don't philosophize about nuts. They find them, grab them, and move on. The same principle applies to well-designed agents.
The Core Primitives That Power Every Agent
Every agentic system, regardless of complexity, is constructed from a handful of fundamental building blocks. Understanding these primitives is essential for anyone building or commissioning AI solutions.
The LLM Call
This is the foundation. A plain text-in, text-out call to a language model. No memory, no tools, no special handling. Just "here's a prompt, give me a response."
The OpenAI Responses API and its equivalents represent the minimum viable agent brain. No matter how sophisticated an agent becomes, every interaction ultimately reduces to this basic pattern.
The Conversation Loop
Chat functionality isn't magical. It's a for-loop that manages message history.
LLM APIs are stateless by default. They don't remember previous turns unless you explicitly pass that context. Every "intelligent conversation" is really just code managing arrays of messages, appending new turns, retrieving responses, and pushing results back into history.
Many developers experience a moment of mild disappointment when they first implement this loop. The technology that promises to transform everything turns out to be a while loop with some array operations.
But that simplicity is actually the point. The power isn't in the loop itself, it's in what the loop enables.
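Here is the conversation loop in miniature. `fake_llm` is a stand-in for a real API call; it echoes how many user turns it has seen, which makes the statelessness visible: the model only "remembers" because the code resends the whole history each turn.

```python
def fake_llm(messages):
    # Stand-in for a model call: counts the user turns it was handed,
    # proving that context arrives only because the caller passed it in.
    turns = sum(1 for m in messages if m["role"] == "user")
    return f"reply #{turns}"

history = []
for user_text in ["hello", "tell me more"]:
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)  # the full history goes out on every turn
    history.append({"role": "assistant", "content": reply})

# history now holds 4 messages; the "memory" lives entirely in this list
```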
Tool Calling
This is what transforms a language model into an actual agent. Without tools, you have an expensive text generator. With tools, you have something that can act in the world.
Tool calling follows a straightforward pattern: the model requests a function; your application executes it and returns the output. The model says "I need current stock prices," your code calls a financial API, and the results feed back into the conversation.
Modern APIs like the Responses API are "agentic by default" because they support multiple tool calls within a single request. This enables agents to chain actions (check inventory, place an order, send a confirmation) without multiple round trips.
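The dispatch side of that round trip can be sketched like this. The tool names and the shape of `tool_calls` are illustrative, not any particular provider's schema; the two fake tools mirror the inventory-then-order chain mentioned above.

```python
# Hypothetical tools the model is allowed to call.
def check_inventory(sku):
    return {"sku": sku, "in_stock": True}

def place_order(sku, qty):
    return {"order_id": "A1", "sku": sku, "qty": qty}

TOOLS = {"check_inventory": check_inventory, "place_order": place_order}

# Pretend the model returned two tool calls in a single response.
tool_calls = [
    {"name": "check_inventory", "args": {"sku": "X42"}},
    {"name": "place_order", "args": {"sku": "X42", "qty": 1}},
]

# Execute each requested function and collect the outputs; in a real agent
# these results would be appended to the conversation and sent back.
results = [TOOLS[call["name"]](**call["args"]) for call in tool_calls]
```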
Memory and State
Every agent eventually hits the context window limit. Conversations grow long. Costs increase. Models start forgetting information from earlier turns.
The solutions are standardized but critical:
- Compaction: Condensing conversations to essential information
- Long-term memory: Storing context in vector databases for retrieval when needed
- Session boundaries: Starting fresh sessions while preserving critical context
Memory management is the least glamorous aspect of agent development, and it's where many projects stumble. Brilliant agent designs often fail because they can't remember what happened five minutes ago.
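Compaction, the first strategy above, can be as simple as the sketch below: keep the system prompt and the most recent turns, and replace everything older with a summary. Production systems typically use an LLM to write that summary; the placeholder here just counts what it dropped.

```python
def compact(history, keep_last=4):
    """Naive compaction: keep the system prompt and the last few turns,
    collapsing older turns into a single summary message."""
    if len(history) <= keep_last + 1:
        return history  # nothing worth compacting yet
    system, rest = history[0], history[1:]
    dropped, kept = rest[:-keep_last], rest[-keep_last:]
    # Placeholder summary; a real system would ask a model to summarize.
    summary = {"role": "system",
               "content": f"[summary of {len(dropped)} earlier messages]"}
    return [system, summary] + kept
```

The same shape generalizes: swap the placeholder for an LLM-written summary, or push the dropped turns into a vector store for the long-term-memory strategy.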
Why Glue Code Has Been the Industry's Hidden Tax
The uncomfortable truth about current AI development is that most of it is glue code. And glue code is problematic for several reasons.
The team at Hugging Face recognized this challenge when they built smolagents. Rather than fighting the problem, they embraced an honest reality: glue code is acceptable as long as it's temporary. Their framework takes an unconventional approach, bypassing complex tool-calling infrastructure entirely by giving agents a single tool: writing Python code.
Consider the implications. Instead of building custom connectors for dozens of APIs, agents can simply generate Python code to call those APIs. Instead of hand-crafting tool definitions for every function, they write functions dynamically.
This approach succeeds because it recognizes a fundamental truth: the interface to tools is becoming a commodity. What matters is what the tools do, not how you call them.
The Standardization That Changes Everything
In 2025, something significant happened that most developers missed. The Model Context Protocol, MCP, emerged as a serious contender for what some are calling the "HTTP of AI."
MCP aims to do for agents what HTTP did for the web: provide a shared contract for discovering tools, fetching context, and coordinating multi-step workflows. Rather than every agent requiring custom connectors for GitHub, Slack, Salesforce, or internal APIs, they can all speak MCP.
The primitives are elegantly simple:
- Tools: Typed functions any client can discover and call
- Resources: Addressable context items, files, tables, documents
- Prompts: Named templates for common workflows
- Sampling: Delegating model calls when needed
The parallels to web architecture are intentional. Resources function like URLs. Tools correspond to HTTP methods. Negotiation mirrors headers and content types.
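Concretely, an MCP tool is declared with a name, a description, and a JSON Schema for its inputs, which is what lets any client discover and call it. The field names below follow the published MCP specification; the weather tool itself is made up for illustration.

```python
# Conceptual shape of an MCP tool declaration (as returned by a server's
# tools/list). The schema is plain JSON Schema, so any client can
# validate arguments before calling the tool.
weather_tool = {
    "name": "get_weather",                      # hypothetical tool
    "description": "Return current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Because the contract is data, not code, the same declaration serves Claude Desktop, an IDE, or an internal orchestrator without a bespoke connector for each.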
Adoption is accelerating rapidly. Major platforms including Claude Desktop, VS Code and Copilot, Cursor, and JetBrains are integrating MCP. A single connector can now serve many clients, dramatically reducing development overhead.
What This Means for How Organizations Build
When every AI project required reinventing basic infrastructure, development was slow, expensive, and inconsistent. Conversation loops written from scratch. Tool handling implemented custom for each use case. Memory management reinvented with every new agent.
That's changing.
The primitives are becoming commodities. Teams no longer need to build their own tool-calling infrastructure. Conversation state management is becoming a library, not a project. Memory systems are evolving into services.
What does this leave for organizations building with AI?
Value Shifts Up the Stack
When infrastructure becomes standardized, value migrates to what's built on top.
The evolution of web development offers a useful parallel. In the 1990s, every website needed custom code for sessions, database connections, and template rendering. Today, those are frameworks. The value is in application logic, not infrastructure.
The same transformation is happening with AI agents. The loop, the tools, the memory: these are becoming solved problems. The strategic work now focuses on:
- Specialized agents tuned for specific domains and industries
- Multi-agent orchestrators coordinating complex workflows
- Industry-specific tools that deliver actual business value
- User experience that makes agentic systems feel natural and trustworthy
The Emerging Governance Challenge
As agents become ubiquitous, organizations face a scale problem most aren't prepared for.
Imagine thousands of agentic processes running across your environment. Each generating utility functions dynamically. Each calling internal and external APIs. Each requiring credentials, rate limits, monitoring, and audit trails.
Who reviews those chat logs? Who audits what agents actually did? Who notices when an agent starts behaving unexpectedly?
The observability requirements here are unprecedented. They exceed anything current API management strategies can handle.
This is why standardization isn't merely about developer convenience. It's fundamentally about governance. When every agent speaks the same protocols, you can monitor them consistently. When every tool uses the same interface, you can secure them uniformly.
Building Without Frameworks (And Why Understanding Primitives Matters)
Production systems benefit from well-designed frameworks. But understanding the primitives at a deep level makes for better builders.
The exercise of constructing agents without frameworks reveals something important. When you strip away abstractions, you see the simple core:
- A prompt chaining architecture where each step passes output to the next
- Quality gates that validate outputs before proceeding
- Sequential workflows that build complexity from simple components
Consider a practical example: a product marketing pipeline. A summary agent processes source material. A features agent extracts key capabilities. A marketing copy agent generates compelling descriptions. A quality check runs after the summary to ensure sufficient detail before proceeding.
This is simply a for-loop with conditional logic. But it's also a complete agentic system. No frameworks required. No orchestration engines. Just primitives, wired together with discipline.
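That pipeline fits in a screenful of code. Each "agent" below is a stubbed function standing in for an LLM call, and the quality gate is ordinary conditional logic; the point is the shape, not the stubs.

```python
# Stubbed agents; in practice each would be a prompted LLM call.
def summary_agent(source):
    return f"summary of: {source}"

def features_agent(summary):
    return ["feature A", "feature B"]

def copy_agent(features):
    return "Buy it! It has " + " and ".join(features) + "."

def quality_gate(summary, min_len=10):
    # Gate: is there enough detail to justify the downstream calls?
    return len(summary) >= min_len

def pipeline(source):
    summary = summary_agent(source)            # step 1: summarize
    if not quality_gate(summary):               # gate before spending more
        raise ValueError("summary too thin; stopping early")
    features = features_agent(summary)          # step 2: extract features
    return copy_agent(features)                 # step 3: write the copy
```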
Organizations sometimes assume they need multi-million dollar agent platforms. Often, a few hundred lines of well-architected code handling the core 80% of use cases, with clear visibility into where additional sophistication would add value, proves more effective.
The Agentic Stack: A Mental Model for Architects
Understanding agent architecture requires thinking in layers. Each layer builds on those below, and crucially, each is becoming more standardized.
Layer 1: The Model
The raw reasoning engine. Text in, text out. It generates tokens but doesn't act independently.
Layer 2: The Loop
The while loop transforming single turns into conversations. This layer manages history, passes context, and creates continuity.
Layer 3: Tools
The functions models can call. This is where agents escape the text box and act in the world.
Layer 4: Memory
The persistence layer enabling cross-session recall. Vector stores, key-value databases, compacted histories, whatever maintains context.
Layer 5: Orchestration
The coordination layer enabling multi-agent workflows. Managers delegating to specialists. Hierarchies of responsibility. Workflows spanning multiple agents.
Each layer builds on the foundation below. And each is commoditizing. Organizations no longer need to build custom memory systems or invent proprietary tool-calling protocols.
Evaluating Agent Infrastructure
When assessing tools and platforms for production use, the focus should be on primitives, not features.
Key questions include:
- Does this platform provide clean access to core building blocks, or does it lock users into proprietary abstractions?
- Can models be swapped without rewriting everything?
- Does the tool-calling surface follow emerging standards?
- How does memory work, and can organizations bring their own storage?
The platforms gaining traction are those that embrace commoditization. They deliver excellent implementations of the primitives while avoiding vendor lock-in. Agents built on them can migrate elsewhere if requirements change.
The platforms struggling are those attempting to own the entire stack, forcing users into specific models of how agents should work. These are the walled gardens of the agentic era.
The Career Implications for Development Teams
Many developers express anxiety about AI's impact on their roles. Will it replace their jobs? Make their skills obsolete?
The most constructive response is to become AI producers, not just consumers. Learning to build with these primitives, understanding how agents work under the hood, represents valuable professional development.
The skills are transferable across domains. The same loop that powers a code-editing agent can automate data pipelines, CI/CD workflows, or database management. Once the pattern is understood, applications multiply.
Understanding the primitives increases value rather than diminishing it. Anyone can prompt ChatGPT. Few can architect reliable, production-grade agentic systems. The gap between these capabilities is where meaningful engineering careers are built.
The Future: From Glue to Substance
The trajectory is clear. Glue code is disappearing. Agents will speak standardized protocols. Tools will be discoverable through common interfaces. Memory will be a pluggable service rather than custom infrastructure.
What remains when the glue vanishes?
The substance.
Unique workflows. Industry expertise. Understanding of which problems actually need solving. Models and protocols are becoming commodities. The value lies in their application.
As Google noted in their Gemini 3 announcement: "Developers are moving beyond simple notebooks to build complex, production-ready agentic workflows". The future isn't about whether an organization can build an agent. It's about whether they can build one that reliably delivers value in production.
That's a substantially harder problem. It's also where the real opportunity lies.
Where Organizations Should Focus
For teams beginning this journey, a structured approach yields the best results.
Build the loop first. Implement conversation state management from scratch. Experience the pain of context windows and memory limits. Understanding what abstractions are abstracting makes their value clear.
Then leverage the abstractions. After building agents manually, adopt the tools. Modern primitives solve problems teams have already struggled with. Use them.
Learn the protocols. MCP is becoming standard. Understanding how it works, what it enables, and its limitations will pay dividends for years.
Focus on the hard part. The easy components (the loop, the tools, the memory) are commoditizing. The challenging work (determining what agents should actually do, and ensuring they do it reliably) is where differentiation happens.
Conclusion
AI agents aren't magic. They're a simple loop with a handful of primitives wired together.
The glue code that has consumed development effort is becoming a commodity. Standards are emerging. Protocols are solidifying. Infrastructure is maturing.
What matters now isn't whether an organization can build an agent. It's whether they can build one that actually solves problems, reliably, scalably, and with sufficient intelligence to deliver real value.
The vending machine for intelligence is here. The question isn't whether to use it. It's what to build with it.
Organizations that understand the primitives, embrace the standards, and focus on substance over glue will define the next era of AI development. Those that remain mired in custom infrastructure will find themselves competing on increasingly irrelevant dimensions.
The future belongs to builders who see beyond the glue.
