We’ve all seen the demos. The tweets. The decks. But building AI products people actually return to? That’s a different game. In this piece, we explore why most AI products never move past the hype - and how thoughtful design, problem clarity, and real user context turn novelty into habit.
Not everything we’re working on makes it into production. Some agents are too ambitious, some are half-formed, and some are just waiting for the right use case. This post is a peek into the ideas that are still sitting on the whiteboard - and an open invitation to shape what comes next.
Giving someone access to an AI agent isn’t just about turning something “on.” It’s about deciding who should be able to do what - and making sure they don’t accidentally (or intentionally) do the wrong thing. In this post, we’re unpacking how we think about roles, permissions, and platform governance at Anjin.
Measuring success in an AI product is tricky - especially when your agents return something every time, even if that something is confidently wrong, weirdly formatted, or “technically correct but completely useless.” In this post, we’re unpacking how we think about observability at Anjin - and why “Did it work?” isn’t a helpful question.
What happens when one agent’s output becomes another’s input? In this post, we explore the challenges, opportunities, and design considerations behind agent chaining - how we’re beginning to connect modular agents to do more complex, context-aware work.
Anjin is built on modular AI agents - but that decision came with trade-offs. In this post, we’re exploring how we approach agent modularity, what it unlocks for users, and why treating every agent as unique is a deliberate response to the limitations we see across the current AI agent landscape.
Anjin started with a fast, lovable prototype. But the moment we tried to scale it, the cracks showed. This article breaks down the lessons we learned from moving beyond the "Lovable Edition" of our stack and how we rebuilt Anjin for real users, real security, and real deployment.
We’re learning a lot by listening to our agents. This post is about how we’re making sense of those signals - why we brought BigQuery into the loop, and how observability is shaping the way we scale Anjin.
We’ve been spending evenings at the table with AI. Not just metaphorically - but literally. This article explores what happens when you treat AI agents like guests at a dinner party, and what that tells us about creativity, constraint, and how we build.
In this post, we take you behind the scenes of how we’re designing Anjin’s admin dashboard. It’s not just about buttons and panels - it’s about building trust, enforcing rules, and preparing the platform to scale without compromise. One small note: it’s Sam’s birthday today, so we’re launching this one with cake.
We’re still figuring this part out. In this post, we’re opening up about the questions we’re asking around limits at Anjin - whether it’s credits, domain restrictions, seat limits, or something else entirely - and why we believe designing smart boundaries can actually create more value for users, not less.