Inside Anjin #19: Our Agents Run Our Marketing

We didn’t just build agents for other people. We’ve been running our own business with them from the start. This post shares what it looks like to actually use Anjin’s agents to drive marketing — the wins, the flaws, and the things we learned by relying on the product before anyone else.
Before we asked anyone else to trust this platform, we made sure we could.

We built Anjin because we were frustrated with how much time was lost to fragmented, manual, or overly abstracted marketing tools. So once we had agents running, the first thing we did was test them — on ourselves.

No smoke and mirrors. Just everyday use cases that needed solving.

The Real Test Was Us

When you're building the system and also using it to run your own outreach, campaigns, and content, you find the edge cases fast.

Here’s what we’ve used agents for in the last few months:

  • Scheduling and preparing AI dinner discussions
  • Refining SEO metadata for article series like this one
  • Rewriting CTAs across different landing pages
  • Structuring interview summaries and lead research
  • Creating community engagement threads and headlines

All of it was run through Anjin. And most of it ran repeatedly as we iterated.

What Went Wrong (and Made Us Better)

Using your own product reveals things you don’t catch in tests.

We ran into:

  • Role bugs when permissions clashed across agents
  • Token handling inconsistencies that broke preview links
  • Output drift when agents weren’t pinned to recent logic updates
  • Confusing UI feedback when workflows stalled but didn’t break visibly

Every time we found one of these issues, we shipped a fix and improved the system.

That feedback loop — from frustration to fix — became one of the strongest assets in our product development.

Agents That Evolve With You

The best agents weren’t the ones we nailed on the first try. They were the ones we used over and over again.

Some improved through:

  • Refining memory based on prompt failures
  • Structuring output for humans, not just machines
  • Adding safe defaults to protect against weak inputs
  • Creating optional chaining for complex campaigns

The biggest takeaway? Agents don’t need to be “perfect” — they need to be useful, adjustable, and easy to test. And ideally, the platform should make that process feel seamless.

Why This Matters for You

If you're a future user of Anjin — whether subscribing to built-in agents or creating your own — this work affects your experience directly.

  • The agents you’ll use have already run real workflows
  • The edge cases have already surfaced and been improved
  • The infrastructure has been shaped by actual day-to-day use
  • The result is something more than a prototype

It’s a product that works, because we couldn’t afford for it not to.

Final Thought: Built For Use, Not Demo

The agents powering our own growth weren’t built for a pitch deck.
They were built to do real work — under pressure, with deadlines, and with real users on the other side.

We still haven’t outsourced our marketing. And we don’t plan to.
We’re running Anjin on Anjin.

That’s how much we believe in it.

Want to see what the platform can do for your own workflow?
Join the community to be first in line when Anjin opens in September.

We’ll be sharing more about the agents we built — and how you can create or customise your own — in the run-up to launch.
