We think conversational AI agents will deliver all professional services in India. We started with astrology. We're a small group of engineers, designers, and product folks building at the intersection of conversational AI and domain expertise. Making an AI agent sound human-like is hard. Making an AI an expert in a domain is also hard. We're doing both together.
We're backed by Accel, Arkam Ventures, and Weekend Fund.
Most AI agents treat conversations as stateless request-response cycles. Ours doesn't. We've built a stateful agent that streams responses with natural pacing, handles mid-response interruptions without corrupting state, decides autonomously whether to keep talking or wait, and maintains a living context that evolves across turns.
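To make that concrete, here's the rough shape of a per-conversation process in Elixir/OTP. This is an illustrative sketch with invented module, message, and field names, not our actual runtime: the point is that each conversation is a long-lived, stateful process that tracks both the evolving context and exactly what has been delivered, so an interruption can be folded back in without corrupting either.

```elixir
# Illustrative only: names here are invented for this posting, not taken from our codebase.
defmodule Sketch.ConversationAgent do
  use GenServer

  def start_link(conversation_id),
    do: GenServer.start_link(__MODULE__, conversation_id)

  def interrupt(pid, user_message),
    do: GenServer.cast(pid, {:interrupt, user_message})

  @impl true
  def init(conversation_id) do
    # Living context plus a record of exactly what has reached the user so far.
    {:ok, %{id: conversation_id, context: [], delivered: "", status: :idle}}
  end

  # Chunks arrive from the LLM stream. In the real runtime this is where a
  # chunk would be paced and pushed to the client; here we just track it.
  @impl true
  def handle_info({:chunk, text}, state) do
    {:noreply, %{state | delivered: state.delivered <> text, status: :streaming}}
  end

  # The user interrupts mid-response: fold the partial reply and the new user
  # message into context, drop the undelivered remainder, and go idle so the
  # next turn starts from a consistent state.
  @impl true
  def handle_cast({:interrupt, user_message}, state) do
    context =
      state.context ++ [{:assistant_partial, state.delivered}, {:user, user_message}]

    {:noreply, %{state | context: context, delivered: "", status: :idle}}
  end
end
```

The production runtime layers supervision, persistence, tool calls, and pacing on top of this, but the core idea is the same: one process per conversation, one source of truth for what the user has actually seen.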
The Agent Orchestration team owns the core runtime - the system that coordinates LLM calls, tool execution, context management, delivery pacing, and conversation persistence. When you break something here, users notice in real time. When you get it right, the agent feels like talking to a person who genuinely knows what they're doing.
Own and evolve the stateful agent runtime - the streaming pipeline, state management, and tool execution layer
Ensure what the user sees, what the model remembers, and what we persist stay perfectly in sync - even when users interrupt mid-response
Build and refine the streaming architecture - how responses are buffered, delivered, and recovered cleanly on interruption
Implement and tune delivery pacing - making the agent's output feel natural, not robotic (see the sketch after this list)
Design the tool execution layer - the agent calls domain-specific tools mid-conversation and reasons over results in real time
Work on autonomous continuation - the agent deciding when to keep going and when to stop, based on conversational context
Optimize for latency and reliability - first-token time, interruption recovery, context window management
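To give a flavour of the delivery pacing problem flagged above: below is a deliberately toy sketch, with made-up names and a made-up delay model, of pacing chunk delivery while staying responsive to an interrupt, so that what was actually sent is always known.

```elixir
# Toy pacing loop, not production code: the delay model and names are invented.
defmodule Sketch.Pacer do
  # Pretend pacing model: a fixed delay per character of the chunk just sent.
  @delay_per_char_ms 15

  def deliver(chunks, send_fun, sent \\ [])

  def deliver([], _send_fun, sent), do: {:done, Enum.reverse(sent)}

  def deliver([chunk | rest], send_fun, sent) do
    send_fun.(chunk)

    # Wait out the pacing delay, but stay responsive: an :interrupt message
    # stops delivery immediately and reports exactly what was sent, so the
    # transcript we persist matches what the user actually saw.
    receive do
      :interrupt -> {:interrupted, Enum.reverse([chunk | sent])}
    after
      String.length(chunk) * @delay_per_char_ms ->
        deliver(rest, send_fun, [chunk | sent])
    end
  end
end

# Sketch.Pacer.deliver(["Saturn is transiting your tenth house. ", "Here is what that means."], &IO.write/1)
```

The real version is messier (backpressure, sentence boundaries, recovery after a crash mid-delivery), which is exactly the kind of thing you'd own in this role.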
Deep systems programming instincts - you think about state machines, concurrency, race conditions, and failure modes naturally
Experience building real-time or streaming systems - WebSockets, SSE, or similar
You've built and maintained stateful, long-running processes in production
Comfort with distributed systems - message queues, event-driven architectures, service coordination
Strong debugging skills for systems where "it works on my machine" is never the problem
You can read and reason about concurrent code - whether it's Elixir processes, Go goroutines, or async runtimes
Experience with Elixir/OTP - GenServers, supervisors, the actor model
Familiarity with LLM APIs and streaming protocols (SSE, chunked responses)
You've built or worked on conversational AI systems (not just chatbot wrappers)
Experience with NATS, Kafka, or other messaging systems
You've read our whitepaper on Stateful Agent Orchestration and can poke holes in it
We care about craft obsessively. Your work gets questioned, pulled apart, and rebuilt - not because we're harsh, but because we hold each other to a standard most places don't bother with. We work out of a hacker house in Vasant Kunj. We strongly encourage everyone to be in the office.
If that sounds like the only way you'd want to work - let's talk.