Inside Anjin #28: Chaining AI Agents — From Task to Flow in the Real World

A single agent solves a task. A chain starts to solve a process. That's the one-line version of what we've been shipping at Anjin for the last year, and it's the shift that quietly turned 2026 into the year multi-agent systems stopped being a demo and started being infrastructure. This post is an update from inside the build: what chaining actually looks like when real users wire it up, what we've learned about where it breaks, and how the protocol and runtime stack that emerged around it (A2A, MCP, AgentCore) is changing what a 'workflow' even means.

What we mean by “chaining”

Chaining is the point at which you stop using AI agents as one-shot tools and start using them as steps in a pipeline. The output of one agent becomes the input of the next. No copy-paste. No re-prompting. No human ferrying context between tabs.

In Anjin that looks like this: you give the system a single input — a LinkedIn URL, a webinar transcript, a product pitch — and a chain of agents hands work to each other until you get a final artefact: a cold-open email, a blog series outline, a launch tweet ready to schedule. The user writes one prompt; the system produces the output of what used to be four roles.
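
Stripped of any particular framework, a chain is just composition: each step is a prompt plus a model call, and the previous step's output becomes the next step's input. Here is a minimal sketch of that idea; the names are hypothetical rather than Anjin's actual API, and `complete` stands in for whichever LLM client you use.

```python
from dataclasses import dataclass
from typing import Callable

# One step = a named prompt template. `complete` stands in for whichever
# LLM client you actually call; it is an assumption, not a specific SDK.
@dataclass
class Step:
    name: str
    prompt: str  # expects an {input} placeholder

def run_chain(steps: list[Step], first_input: str,
              complete: Callable[[str], str]) -> dict[str, str]:
    """Feed each step's output into the next, keeping every mid-step output."""
    outputs: dict[str, str] = {}
    current = first_input
    for step in steps:
        current = complete(step.prompt.format(input=current))
        outputs[step.name] = current
    return outputs

# A two-step flow: product pitch in, investor email and launch tweet out.
launch_flow = [
    Step("investor_email", "Turn this product pitch into a cold investor email:\n{input}"),
    Step("launch_tweet", "Compress this email into a launch tweet:\n{input}"),
]
```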

The industry has converged on the same idea under different names. LangGraph calls them graphs. CrewAI calls them crews. AWS calls them agent runtimes. Strands Agents and the OpenAI Agents SDK describe them as orchestrated tool calls. The shared assumption is that the useful unit of AI is no longer the model call — it's the composition.

How 2026 changed the chaining stack

When we first wrote about chaining in September 2025, stitching two agents together still meant custom glue code. Eight months later, the ground has moved.

A2A (Agent-to-Agent) hit v1.0 and went open. The protocol — now hosted by the Linux Foundation — passed 150 supporting organisations in its first year, with production deployments across Google Cloud, Microsoft and AWS. A2A v1.0 shipped multi-protocol support, enterprise multi-tenancy, and modern security flows. It is fast becoming the HTTP of agent-to-agent communication: a shared way for agents built in different frameworks to share context and reasoning without bespoke adapters.

MCP settled into the role of “the tools layer.” The Model Context Protocol is now the accepted way for a single agent to talk to its tools and data. The split that emerged is elegant: MCP defines how one agent connects to its environment; A2A defines how agents connect to each other.

AWS Bedrock AgentCore went GA with A2A baked in. Amazon's November 2025 announcement added A2A support to AgentCore Runtime, meaning agents built on LangGraph, Strands, the OpenAI Agents SDK, Google's ADK or Claude Agents SDK can now coordinate inside the same serverless environment. AgentCore is, effectively, a managed chaining runtime with session isolation and authentication handled for you.

Adoption caught up fast. By Q1 2026, 57% of organisations surveyed were running multi-step agent workflows in production — up from a rounding error at the start of last year. LangGraph alone now clocks ~27,000 monthly developer searches, well ahead of the next most-adopted framework.

The takeaway for a builder: chaining in 2026 is no longer a differentiator. It's a starting point. What matters now is what you chain, and how cleanly the user can see and edit the chain as it runs.

What users are doing with it

The most used chains inside Anjin right now aren't the most sophisticated. They're the ones that compress three people's work into one prompt.

  • Startup launch flow: product pitch → investor email → launch tweet. One founder, one prompt, three outputs.
  • Content repurpose flow: webinar transcript → blog ideas → LinkedIn post summary → newsletter draft.
  • Lead research flow: LinkedIn URL → company overview → cold-open email with a specific hook.
  • Competitor monitoring flow: competitor domain → SERP diff → GTM brief highlighting the gap.
  • Inbound triage flow: raw form submission → enriched lead record → first-touch reply draft.

The pattern is consistent. The value isn't “AI did the work” — people figured that out in 2023. The value is “AI did the handoff.” The manual step users keep telling us they're glad to be rid of is the cognitive load of re-contextualising between tools.

Why chaining works

Three things make a chain feel different to a single agent.

  1. It creates a system, not an answer. A chain is a repeatable process. You give it a new input, you get a new output — shaped the same way every time. That's the difference between a prompt and a product.
  2. It collapses coordination. In a traditional marketing team, the cost isn't the writing or the research; it's the handoffs. Chains eliminate the handoffs by making them explicit steps in code rather than implicit steps in Slack.
  3. It makes the work inspectable. Every mid-step output is visible. When something goes wrong — a bad research step, a tone drift — you can see exactly which agent produced it. That's something a single long prompt can't give you.
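
Point 3 is the practical payoff of keeping every per-step output around, as in the earlier sketch. Reusing `run_chain` and `launch_flow` from above, with a stand-in completer so the snippet runs without an API key, inspecting a chain is just walking its outputs:

```python
# A stand-in completer so the sketch runs without an API key.
def complete(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"

results = run_chain(launch_flow, "We sell solar-powered bird feeders.", complete)
for step_name, output in results.items():
    print(f"--- {step_name} ---")  # skim each hand-off; tone drift shows up here
    print(output)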

What we're learning

Running chains at scale has taught us a few unglamorous truths.

Chains fail at the joins, not the nodes. The individual agents are usually fine. Breakages happen at the interface — a field formatted slightly differently, a list returned when an object was expected. A2A's typed message contracts are solving a lot of this at the protocol layer; our own fix, before we had A2A to lean on, was to add schema validation between every step.
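
To make "fail at the joins" concrete, here is the shape of the between-step check we mean, using Pydantic as one option. The `CompanyOverview` contract and its field names are illustrative, not Anjin's actual schema.

```python
from pydantic import BaseModel, ValidationError

# The contract the research agent must satisfy before the email agent runs.
class CompanyOverview(BaseModel):
    company_name: str
    summary: str
    hooks: list[str]   # a list, not a single string: the classic join failure

def validate_handoff(raw_json: str) -> CompanyOverview:
    """Reject a malformed payload at the join instead of letting it
    drift into the next agent's prompt."""
    try:
        return CompanyOverview.model_validate_json(raw_json)
    except ValidationError as err:
        # In production this is where we retry the producing agent,
        # with the validation error attached to the retry prompt.
        raise RuntimeError(f"Handoff failed schema check: {err}") from err
```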

Longer isn't better. Chains of 8+ steps are almost always worse than chains of 3–4 steps with better prompts at each stage. Every additional agent is another place the intent can drift.

Human checkpoints matter. The chains users trust most are the ones with a single visible “approve” moment before the final step ships. That's not a limitation of AI — it's a feature of accountability. The teams that adopted chaining fastest are the ones that got comfortable letting the agent run five steps in a row while still reviewing the final step before anything left the building.
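
The shape of that checkpoint in code is small. This sketch assumes a synchronous approval callback, which is a simplification of a real review step, but it shows where the human sits in the chain:

```python
from typing import Callable

def ship_with_checkpoint(final_draft: str,
                         approve: Callable[[str], bool],
                         publish: Callable[[str], None]) -> bool:
    # Agents produced everything up to final_draft; a human owns the last click.
    if approve(final_draft):
        publish(final_draft)
        return True
    return False  # held for edits; nothing leaves the building unapproved
```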

Model choice per node is underrated. A production pattern we've converged on (and LangChain's own 2026 guidance agrees) is model tiering: cheap, fast models for routing, classification and triage; capable models for the reasoning and the final write. A chain of five Sonnet-tier calls costs more, runs slower and usually produces a worse result than a chain of four Haiku-tier calls plus one Opus-tier call.
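
In code, tiering is nothing more exotic than a per-step model lookup. The step names and tier labels below are placeholders, not real model IDs or a recommendation of specific vendors:

```python
# Map each step role to a model tier; anything unlisted falls back to the cheap tier.
# The tier labels are placeholders, not real model IDs.
MODEL_BY_STEP = {
    "triage": "haiku-class",       # routing / classification: cheap and fast
    "routing": "haiku-class",
    "research": "haiku-class",
    "final_write": "opus-class",   # the one step that earns the expensive call
}

def model_for(step_name: str) -> str:
    return MODEL_BY_STEP.get(step_name, "haiku-class")
```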

What's coming next

Three things we're shipping now that A2A, MCP and AgentCore have made basic chaining table stakes:

  • Editable mid-step previews. See what agent #2 produced before agent #3 uses it, and edit it in place if you don't like it.
  • Optional logic branches. Conditional steps — “if the lead is enterprise, run the enrichment chain; if SMB, skip it.” (Sketched in code after this list.)
  • Creator-built chains. Users publishing their own chains as templates others can clone. The community effect once this lands is, we think, the most underrated dynamic of 2026 AI.
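
As a sketch of the branching item above (function names are hypothetical, not Anjin's API), a conditional step is just a predicate deciding which sub-chain runs:

```python
from typing import Callable

Chain = Callable[[dict], dict]  # a sub-chain: lead record in, lead record out

def triage_lead(lead: dict, enrich: Chain, first_touch: Chain) -> dict:
    # Enterprise leads get the full enrichment chain; SMB leads skip straight
    # to the first-touch draft.
    if lead.get("segment") == "enterprise":
        lead = enrich(lead)
    return first_touch(lead)
```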

What this means for marketers

Here is the honest framing. The debate about whether AI will replace marketing work is over. It didn't replace it. It replaced how the work is coordinated.

A marketing team in 2023 needed a writer, an editor, a strategist, an SEO lead, a designer, a paid media buyer and someone to stitch them together in Asana. A marketing team in 2026 needs one operator and a chain of agents that know the brand well enough to hand work between themselves.

The companies winning right now aren't the ones with the biggest AI budgets. They're the ones whose marketing operations are composable — where a new campaign is a new chain, not a new hire.

That's the category we built Anjin to own.

Anjin: the Marketing Operating System behind every chained flow

Most AI tools give you a better writer, a better researcher, a better image generator. Anjin gives you the operating system that runs them together.

Anjin is a Marketing Operating System — a single platform where content generation, campaign planning, channel distribution, SEO, performance tracking and brand governance all run as chained agents inside one environment. You don't glue tools together. You pick the flow, hit go, and review the output.

What Anjin replaces:

  • The content agency (drafts, revises, publishes across channels)
  • The SEO consultant (optimises and tracks continuously)
  • The paid media planner (briefs, tests, reports)
  • The coordination layer (the Notion pages, Slack threads and spreadsheets holding everything together)
  • The £8–15k/month you're paying to make all of that move

What chaining gives us that generic tools can't: when a news moment breaks or a product launches, the chain runs end-to-end the same afternoon — not the same quarter. That is the speed brands now have to match.

The £888 Lifetime License — Offer Closing Soon

Lifetime access to Anjin for a one-time payment of £888. Not a subscription. Not a seat. Not a trial. One payment, unlimited use, for as long as Anjin exists.

The average marketing team spends £888 in about three working days on tooling, freelancers and coordination software. You're buying the platform that replaces most of it — once.

This price will not be offered again once we close our early-access cohort.

Claim your £888 Anjin lifetime license →

Founders, agency owners and in-house marketers — this is how you run marketing at AI speed without the team, the burn, or another year of waiting.

Sources: AWS Bedrock AgentCore announcement, Linux Foundation A2A one-year milestone, InfoQ Bedrock AgentCore + A2A, LangChain / LangGraph workflows and agents, Fungies.io 2026 AI Agent Orchestration guide.
