We think conversational AI agents will deliver all professional services in India. We started with astrology. We're a small group of engineers, designers, and product folks building at the intersection of conversational AI and domain expertise. Making an AI agent sound human is hard. Making it a real expert in a domain is also hard. We're doing both at once.
We're backed by Accel, Arkam Ventures, and Weekend Fund.
Vaya is our consumer astrology product - an AI agent that conducts real astrology consultations. Not horoscope generators. Not "what's your zodiac sign" apps. Deep, personalized Vedic astrology - birth charts, dasha analysis, transit readings, compatibility, muhurat timing - delivered through conversation.
The hard question isn't "can an LLM talk about astrology?" It can. The hard question is: is it actually good? Good by the standards of someone who's consulted pandits their entire life. Good enough that a first-time user trusts it with a real question about their career or marriage.
That's your job. You'll work with engineering to define what quality means for our agent - building evaluation rubrics, finding the right metrics, talking to users constantly, and closing the gap between what the agent delivers and what users actually need. You're the person who understands both the domain and the users deeply enough to tell us where we're falling short.
Talk to users relentlessly - understand their motivations, frustrations, and the gap between what they expect from astrology and what our agent delivers
Build evaluation rubrics for the agent's responses - what makes a good birth chart reading vs. a shallow one, what makes compatibility analysis trustworthy vs. generic
Work with engineering to define and track the metrics that matter - not vanity metrics, the ones that tell you if the agent is actually getting better
Develop a deep understanding of how LLMs work and where they fail - so you can identify whether a bad response is a prompt problem, a context problem, or a model limitation
Shape how the agent converses - what it says, how it says it, when it pushes deeper vs. wraps up, and how it handles topics where astrology gets sensitive
Synthesize user feedback into actionable improvements - turning "this reading felt off" into specific, testable changes
Work across engineering, AI research, and design - you don't own the roadmap alone, but your understanding of users and domain quality shapes what we prioritize
You're deeply curious about domains - willing to go deep into Vedic astrology and develop real understanding, not just surface-level familiarity
You understand LLMs - not necessarily building them, but how they work, what they're good at, where they break, and how prompt and context changes affect output
You're quantitative - you make decisions with data, not opinions. You can design an evaluation framework, not just use one.
You talk to users like it's part of your daily routine, not a quarterly exercise
You can write - rubrics, evaluation criteria, user research synthesis, experiment briefs. Clear thinking shows up in clear writing.
You're comfortable with ambiguity - we're building a new category. There's no playbook.
First-principles thinking - you challenge assumptions, including your own
Experience with AI products - you've worked on products where LLM output quality was the core problem
Background in QA, evaluation, or user research for subjective/open-ended systems
You've worked at an early-stage startup where your role was defined by what the product needed, not a job title
Understanding of the Indian consumer market - cultural context, regional differences, language preferences
We care about craft obsessively. Your work gets questioned, pulled apart, and rebuilt - not because we're harsh, but because everyone here holds each other to a standard most places don't bother with. We work out of a hacker house in Vasant Kunj, and we strongly encourage everyone to be in the office.
If that sounds like the only way you'd want to work - let's talk.