Inside Anjin #07: What Happens When Agents Talk to Each Other?

What happens when one agent’s work becomes another’s input? In this post, we explore the challenges, opportunities, and design considerations behind agent chaining - how we’re beginning to connect modular agents to do more complex, context-aware work.
Smart agents are useful. Smart agents that collaborate? That’s where it gets interesting.

As we continue to develop Anjin, one of the most exciting challenges is figuring out how to move from isolated, specialist agents to cooperative workflows - where one agent’s output can seamlessly inform the next.

Agent chaining isn't just a technical question.
It’s a design philosophy. It asks:

  • How do we maintain clarity as context moves across agents?
  • How do we preserve security and reliability at every step?
  • How do we avoid creating a “black box” where users lose understanding?

We’ve started working through those questions.

From Modularity to Orchestration

In Inside Anjin #06, we shared why we treat every agent as a discrete, independently defined unit.

That mindset hasn’t changed.

But now we’re exploring how to chain these agents together - safely, intentionally, and in a way that preserves the benefits of modular design.

We're starting to see real-world examples where this matters:

  • One agent pulls structured data from an external tool
  • Another analyses that data for patterns or gaps
  • A third reframes the findings into user-friendly outputs

Each of these steps is handled by a separate agent. Each has a specific role. And now, they’re beginning to talk to each other.
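
To make that concrete, here’s a minimal sketch of what such a chain could look like in TypeScript. To be clear, this isn’t Anjin’s actual code - the agent names, types, and stub bodies are illustrative assumptions - but it shows the core idea: each agent is a small typed unit, and the chain is just one agent’s output becoming the next one’s input.

```ts
// A minimal sketch, not Anjin's actual code. Agent names, types,
// and stub bodies are illustrative assumptions.
type Agent<In, Out> = (input: In) => Promise<Out>;

// Step 1: pull structured data from an external tool
const fetchData: Agent<{ source: string }, Record<string, unknown>[]> =
  async ({ source }) => {
    // ... call the external tool for `source` ...
    return [];
  };

// Step 2: analyse that data for patterns or gaps
const analyse: Agent<Record<string, unknown>[], { gaps: string[] }> =
  async (rows) => {
    // ... inspect rows, collect gaps ...
    return { gaps: [] };
  };

// Step 3: reframe the findings into user-friendly output
const reframe: Agent<{ gaps: string[] }, string> = async ({ gaps }) =>
  `Found ${gaps.length} gaps worth reviewing.`;

// The chain is just each agent's output becoming the next one's input.
async function runChain(source: string): Promise<string> {
  const rows = await fetchData({ source });
  const findings = await analyse(rows);
  return reframe(findings);
}
```

Because each step is an independent, typed unit, any agent can be swapped or tested in isolation - the same modularity argument we made in #06.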

What’s Technically Involved

Behind the scenes, this means:

  • Sequenced agent execution with prompt formatters and conditional logic
  • Shared scoped memory between agents (but never persistent without approval)
  • System prompt versioning to track behaviour over time
  • Custom tools integrated into agents at the task level, not globally
  • Approval checkpoints in multi-agent chains (especially when outputs might trigger publishing or user actions)

All of this happens server-side - built on our existing Supabase + Edge Functions infrastructure, with clear boundaries for each execution node.
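
To illustrate one of those pieces, here’s a rough sketch of how an approval checkpoint could sit inside a sequenced chain. The step and approval shapes below are assumptions made for the example, not our real schema:

```ts
// Illustrative only - simplified orchestration with an approval checkpoint.
interface StepResult {
  agentId: string;
  output: unknown;
  requiresApproval: boolean; // e.g. the output would trigger publishing
}

// Each execution node runs server-side within its own boundary.
async function runStep(agentId: string, input: unknown): Promise<StepResult> {
  // ... execute the agent, apply its prompt formatter, record its version ...
  return { agentId, output: input, requiresApproval: agentId === "publisher" };
}

async function runChainWithCheckpoints(
  steps: string[],
  initialInput: unknown,
  askUser: (result: StepResult) => Promise<boolean>,
): Promise<{ halted: boolean; output?: unknown }> {
  let input = initialInput;
  for (const agentId of steps) {
    const result = await runStep(agentId, input);
    // Pause the chain whenever a step could trigger a user-visible action.
    if (result.requiresApproval && !(await askUser(result))) {
      return { halted: true };
    }
    input = result.output; // scoped hand-off; nothing persisted by default
  }
  return { halted: false, output: input };
}
```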

We’re not just firing off prompts.
We’re designing mini-systems, each with its own logic, state, and responsibility.

Why This Is Hard (and Why It’s Worth It)

Agent-to-agent communication isn’t just about passing text. It’s about preserving intent.

Some of the biggest challenges include:

  • Context degradation: How much information should be passed forward? How do you keep it relevant but not noisy?
  • Output format reliability: One agent’s structured JSON might be another’s mess (one mitigation is sketched after this list)
  • Debugging: Tracing a multi-agent flow for an unexpected result requires step-level visibility.
  • Responsibility: If an outcome goes wrong, which agent failed - and how should it have handled it?
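
Here’s the promised sketch for the output format problem. One mitigation is to validate every hand-off against an explicit schema, so a malformed payload fails loudly at the boundary instead of quietly degrading every downstream agent. We use zod below purely for illustration - the schema itself is a made-up example:

```ts
import { z } from "zod";

// The shape one agent promises to hand to the next (hypothetical example).
const KeywordFindings = z.object({
  keywords: z.array(z.string()).min(1),
  confidence: z.number().min(0).max(1),
});

function parseHandoff(raw: string) {
  const result = KeywordFindings.safeParse(JSON.parse(raw));
  if (!result.success) {
    // Fail at the boundary, with a traceable error, rather than letting
    // "one agent's mess" flow silently into the next agent's prompt.
    throw new Error(`Hand-off failed validation: ${result.error.message}`);
  }
  return result.data; // fully typed input for the next agent in the chain
}
```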

We’re working through each of these - carefully.
This isn’t about launching a flashy “multi-agent” feature. It’s about building a system that holds up when users rely on it.

What Users Will Actually See

For most users, the experience is simple:
You start a flow, and it just works.

But under the surface, what’s happening is powerful:

  • A research agent pulls competitor data
  • A keyword agent extracts high-opportunity terms
  • A third agent analyses SERP fit
  • The result is an insight you can actually use - not just a wall of text

That’s the value of chaining: domain-specific logic, broken into trusted parts, working together to deliver a result.
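
From the caller’s side, that whole chain could collapse into a single invocation. Purely as a sketch - the flow name and response shape here are hypothetical, not a real endpoint:

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://your-project.supabase.co", // placeholder project URL
  "public-anon-key",                  // placeholder anon key
);

// "seo-insight-flow" is a made-up flow name for illustration.
const { data, error } = await supabase.functions.invoke("seo-insight-flow", {
  body: { competitorDomain: "example.com" },
});

if (error) throw error;
console.log(data.insight); // the chain's final, user-friendly output
```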

Looking Ahead

We’re still early in this space. But here’s what we’re thinking about next:

  • User-defined agent chains: Custom workflows where users assemble the logic themselves
  • Visual debugging tools: So you can inspect what happened at every step
  • Dynamic conditional branches: “If agent B returns X, route to agent C; otherwise, agent D” (sketched below)
  • Scoped persistent memory: Long-term chaining without sacrificing modular clarity
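
The routing logic for that conditional-branching idea could be as small as the sketch below - speculative, since none of it exists yet:

```ts
// Speculative sketch: route to agent C or D based on agent B's output.
type Route = { next: string; input: unknown };

function routeAfterB(output: { issueFound: boolean; details: unknown }): Route {
  return output.issueFound
    ? { next: "agent-c", input: output.details } // hypothetical agent ids
    : { next: "agent-d", input: output.details };
}
```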

And of course: observability. We want users (and admins) to understand what each agent did, how long it took, and what it produced - without needing to inspect raw logs.
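
As a thought experiment, one shape such a per-step trace could take (the fields are our assumptions, not a committed schema):

```ts
// Illustrative trace record - what each agent did, how long it took,
// and what it produced, without digging through raw logs.
interface StepTrace {
  agentId: string;
  startedAt: string;     // ISO timestamp
  durationMs: number;    // how long the step took
  inputSummary: string;  // what the agent received, summarised
  outputSummary: string; // what it produced, summarised
  approved?: boolean;    // set when a checkpoint was involved
}

type ChainTrace = StepTrace[]; // a run is an ordered list of step traces
```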

Final Thought: Coordination Without Confusion

The future of agents isn’t just smarter prompts.
It’s smarter systems - ones that let focused agents work together, without becoming indistinguishable or opaque.

At Anjin, we’re not trying to build one “super-agent”.
We’re building a modular ecosystem where agents can collaborate with clarity, precision, and purpose.

We’re early - but we’re on the right track.

Curious how this could apply to your use case?
Join the community and let us know what kind of agent chains you’d want to see. Or catch up on the rest of the series.
