Take2 builds AI Interviewers that automate the entire screening process — reviewing resumes, conducting structured phone screens, and scheduling next steps.
Today, our customers are leading healthcare organizations. Every month, we help hospitals and health systems reduce recruiting overhead and fill critical clinical roles faster.
When healthcare organizations hire faster, patient care improves. Staffing gaps shrink. Burnout decreases. The ripple effects are real.
We already power thousands of candidate conversations each month. Now we’re scaling to millions — at a time when healthcare workforce infrastructure needs transformation.
Take2 AI is hiring a Forward Deployed Engineer to design, launch, and continuously improve our AI Interviewers for customers.
This role sits at the intersection of voice/conversational agents, prompt + flow design, evaluation/scoring rubrics, and production iteration. You’ll work directly with customers to understand screening requirements, translate them into structured interviewer behavior, deploy agents into production, and improve performance based on real-world feedback and metrics.
This is a hands-on, highly analytical role for someone who enjoys turning ambiguous requirements into precise agent behavior, building rigorous evaluation approaches, and shipping improvements quickly in a startup environment.
Customer Onboarding & Requirements (Customer-Facing)
Lead technical onboarding with customers to understand roles, hiring goals, must-have signals, and constraints.
Translate customer needs into structured interview flows, role-specific question banks, and scoring rubrics.
Set clear expectations on what “good” looks like (pass/fail thresholds, evaluation rationale, interviewer tone and style).
Voice Agent Conversation Design (Prompts + Flows)
Design, build, and refine prompts and agent logic that drive interviewer behavior, question sequencing, probing, and candidate experience.
Ensure interviewer conversations are consistent, role-relevant, and robust to edge cases (evasive candidates, unclear answers, noisy audio, interruptions).
Implement multi-step structured interview flows with state management and guardrails.
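To make the last bullet concrete, here is a minimal, hypothetical sketch of what a multi-step interview flow with state management and guardrails can look like. This is an illustration only, not Take2's implementation; the stage definitions, guardrail heuristics, and re-prompt limit are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewState:
    """Tracks where the candidate is in the structured flow."""
    stage: int = 0
    answers: list = field(default_factory=list)
    reprompts: int = 0

# Each stage pairs a question with a guardrail that decides whether the
# answer is usable or the agent should probe again. (Illustrative only.)
STAGES = [
    {"question": "Can you confirm you hold an active RN license?",
     "guardrail": lambda a: "yes" in a.lower() or "license" in a.lower()},
    {"question": "Describe your most recent clinical role.",
     "guardrail": lambda a: len(a.split()) >= 5},
]

MAX_REPROMPTS = 2  # guardrail against looping forever on evasive answers

def step(state: InterviewState, answer: str) -> str:
    """Record an answer and return the agent's next utterance,
    re-probing when the guardrail rejects the answer."""
    stage = STAGES[state.stage]
    if stage["guardrail"](answer) or state.reprompts >= MAX_REPROMPTS:
        state.answers.append(answer)
        state.stage += 1
        state.reprompts = 0
        if state.stage >= len(STAGES):
            return "Thanks, that completes the screen."
        return STAGES[state.stage]["question"]
    state.reprompts += 1
    return "Sorry, I didn't quite catch that. " + stage["question"]
```

In practice the guardrails would be model-driven rather than keyword heuristics, but the shape — explicit state, bounded re-probing, deterministic sequencing — is what keeps high-volume conversations consistent.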
Evaluation & Scoring Systems
Design and maintain AI-based evaluation and scoring aligned to customer rubrics and hiring criteria.
Improve accuracy, consistency, and explainability of scoring at scale (including calibration across roles/customers).
Identify bias/fairness risks and contribute to mitigation strategies and compliant evaluation practices.
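As a simple illustration of the calibration work above, the sketch below flags customers whose average rubric scores drift from the global mean — one lightweight signal that a rubric or prompt needs re-calibration. The data shape, threshold, and customer names are hypothetical assumptions, not Take2's actual method.

```python
from statistics import mean

def calibration_report(scores_by_customer: dict[str, list[float]],
                       tolerance: float = 0.5) -> dict[str, bool]:
    """Return {customer: flagged}, where flagged means the customer's
    mean score drifts more than `tolerance` from the global mean.
    (Illustrative threshold; real calibration would also control for
    role mix and candidate pool differences.)"""
    all_scores = [s for scores in scores_by_customer.values() for s in scores]
    global_mean = mean(all_scores)
    return {
        customer: abs(mean(scores) - global_mean) > tolerance
        for customer, scores in scores_by_customer.items()
    }
```

A drift flag here would prompt a human review of transcripts before any rubric change ships.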
Deployment, Iteration, and Customer Feedback Loops
Launch new customer interviewers into production and own iteration cycles from early rollout through steady-state performance.
Use customer feedback + production metrics to prioritize improvements and deliver measurable outcomes.
Communicate changes clearly to customers and internal stakeholders.
Quality, Reliability, and Scale
Build and own lightweight QA/evaluation pipelines to measure conversation quality, scoring accuracy, and reliability before/after changes.
Monitor production performance and partner with engineering to balance quality, latency, and cost tradeoffs.
Contribute to standards and best practices for prompt quality, eval quality, and voice-agent reliability.
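The "before/after changes" bullet above can be sketched as a tiny regression harness: score the same labeled transcripts with two scorer versions and compare accuracy before promoting a change. Everything here — the labeled examples, the keyword heuristics — is invented for illustration; a real harness would call the model-based scorer.

```python
# Hypothetical labeled set: transcripts with ground-truth pass/fail labels.
LABELED = [
    {"transcript": "strong clinical answer with concrete specifics", "label": "pass"},
    {"transcript": "vague answer", "label": "fail"},
    {"transcript": "detailed answer citing current certifications", "label": "pass"},
]

def score_v1(transcript: str) -> str:
    """Baseline scorer: passes everything (deliberately weak)."""
    return "pass"

def score_v2(transcript: str) -> str:
    """Candidate scorer: requires some substance (illustrative heuristic)."""
    return "pass" if len(transcript.split()) > 3 else "fail"

def accuracy(scorer) -> float:
    """Fraction of labeled transcripts the scorer gets right."""
    hits = sum(scorer(ex["transcript"]) == ex["label"] for ex in LABELED)
    return hits / len(LABELED)
```

Gating a rollout on `accuracy(score_v2) >= accuracy(score_v1)` over a held-out set is the kind of lightweight check the role would own.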
Qualifications
2+ years working with LLMs, NLP systems, or AI agents in production.
Demonstrated experience designing and deploying agent workflows (prompts + structured flows) that operate at scale.
Strong understanding of prompt engineering, agent control, failure modes, and conversational edge cases.
Experience building or contributing to evaluation/testing/QA frameworks for AI systems.
Comfort being customer-facing: running technical discovery, translating requirements, and driving onboarding to production.
Strong analytical mindset (accuracy, consistency, bias, calibration, and edge cases).
Familiarity with voice/conversational AI systems, especially real-time or high-volume environments.
Strong Python skills (APIs, data pipelines, eval harnesses, testing frameworks).
Hands-on experience with multiple LLMs (GPT, Claude, Gemini, LLaMA/Mistral, fine-tuned models).
Experience designing multi-step agents with state management and structured outputs.
Experience operating AI systems in production and iterating based on real-world performance metrics.
Prior startup experience (high ownership, fast iteration, ambiguity).
Bachelor’s degree in CS/Engineering/Math or related technical field — or equivalent practical experience.
We’re NYC-based and work hybrid (in-office Mon-Thu). We value in-person collaboration but also trust people to manage their time responsibly.
Competitive salary + meaningful equity. This is a chance to join at a stage where your work meaningfully shapes the product and your career trajectory.