OutcomesAI is a healthcare technology company building an AI-enabled nursing platform designed to augment clinical teams, automate routine workflows, and safely scale nursing capacity.
Our solution combines AI voice agents and licensed nurses to handle patient communication, symptom triage, remote monitoring, and post-acute care — reducing administrative burden and enabling clinicians to focus on direct patient care.
Our core product suite includes:
● Glia Voice Agents – multimodal conversational agents capable of answering patient calls, triaging symptoms using evidence-based protocols (e.g., Schmitt-Thompson), scheduling visits, and delivering education and follow-ups.
● Glia Productivity Agents – AI copilots for nurses that automate charting, scribing, and clinical decision support by integrating directly into EHR systems such as Epic and Athena.
● AI-Enabled Nursing Services – a hybrid care delivery model where AI and licensed nurses work together to deliver virtual triage, remote patient monitoring, and specialty patient support programs (e.g., oncology, dementia, dialysis).
Our AI infrastructure leverages multimodal foundation models — incorporating speech recognition (ASR), natural language understanding, and text-to-speech (TTS) — fine-tuned for healthcare environments to ensure safety, empathy, and clinical accuracy. All models operate within a HIPAA-compliant and SOC 2–certified framework. OutcomesAI partners with leading health systems and virtual care organizations to deploy and validate these capabilities at scale. Our goal is to create the world’s first AI + nurse hybrid workforce, improving access, safety, and efficiency across the continuum of care.
Lead the end-to-end technical development of speech models (ASR, TTS, Speech-LLM), from architecture and training strategy through evaluation and production deployment.
This role blends individual contribution with mentorship: as a Tech Lead specializing in ASR, TTS, and Speech-LLM, you will guide a small team working on model training, synthetic data generation, active learning, and inference optimization, all within the context of healthcare applications.
What You’ll Do
* Prepare and maintain synthetic and real training datasets.
* Own STT/TTS/Speech-LLM and LLM training end to end: model selection → fine-tuning → evaluation → deployment.
* Build evaluation suites for clinical applications (remote patient monitoring, triage, inbound/outbound calls).
* Build scripts for data selection, augmentation (noise, codec, jitter), and corpus curation.
* Fine-tune speech models using CTC/RNN-T or adapter-based recipes on multi-GPU systems.
* Fine-tune LLMs using PEFT (LoRA/QLoRA/adapters) and preference methods such as DPO/RLHF where needed (see the LoRA sketch after this list).
* Implement evaluation pipelines to measure WER/sWER, entity F1, safety/quality metrics, and latency, and automate MLflow logging (see the evaluation sketch after this list).
* Experiment with bias-aware training and context list conditioning.
* Collaborate with backend and DevOps teams to integrate trained models into inference stacks.
* Support creation of context biasing APIs and LM rescoring paths.
* Assist in maintaining benchmarks versus commercial baselines (Deepgram, ElevenLabs, Cartesia, Whisper, etc.).
* Optimize inference latency/cost for speech and LLM serving (batching, kv-cache, quantization, caching, autoscaling).
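To make the adapter-based fine-tuning bullets concrete, here is a minimal sketch of the general PEFT/LoRA pattern, using a public Whisper checkpoint for illustration; the model name, rank, and learning rate are assumptions for the example, not our production recipe, and the same `get_peft_model` call applies to text LLMs.

```python
import torch
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load a small public Whisper checkpoint as the base speech model.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Attach low-rank adapters to the attention projections; only these
# adapter weights are trained, which keeps memory needs modest.
lora_cfg = LoraConfig(
    r=8,                                  # adapter rank (illustrative)
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # Whisper attention projections
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on dummy inputs; a real recipe would
# stream batches of log-mel features and label token ids from a corpus.
feats = torch.randn(1, 80, 3000)          # (batch, mel bins, frames)
labels = torch.randint(0, 51865, (1, 8))  # dummy target token ids
loss = model(input_features=feats, labels=labels).loss
loss.backward()
optimizer.step()
```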
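And for the evaluation bullet, a minimal sketch of a WER-plus-latency loop with MLflow logging, assuming the jiwer and mlflow packages; `transcribe` is a hypothetical placeholder for whichever ASR system is under test, and entity F1 or safety metrics would slot in alongside WER the same way.

```python
import time
import jiwer
import mlflow

def transcribe(audio_path: str) -> str:
    """Hypothetical stand-in for the ASR system under evaluation."""
    raise NotImplementedError

def evaluate(test_set: list[tuple[str, str]]) -> None:
    """Each test_set item pairs an audio path with its reference transcript."""
    refs, hyps, latencies = [], [], []
    for audio_path, reference in test_set:
        start = time.perf_counter()
        hyps.append(transcribe(audio_path))
        latencies.append(time.perf_counter() - start)
        refs.append(reference)

    wer = jiwer.wer(refs, hyps)                            # corpus-level WER
    p95 = sorted(latencies)[int(0.95 * (len(latencies) - 1))]

    with mlflow.start_run(run_name="asr-eval"):            # automated logging
        mlflow.log_metric("wer", wer)
        mlflow.log_metric("latency_p95_s", p95)
```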
Desired Skills
* Strong Python programming skills (PyTorch, Hugging Face, NeMo, ESPnet).
* Practical experience in audio data processing, augmentation, and ASR fine-tuning.
* Training-time augmentation: SpecAugment, speed perturbation, noise/RIR mixing, and codec+PLC+jitter simulation for PSTN/WebRTC audio (see the augmentation sketch after this list).
* Streaming ASR: Transducer/Zipformer with chunked attention, frame-synchronous beam search, and endpointing (VAD-EOU) tuning.
* Context biasing: WFST boosts + neural re-scoring; patient/name dictionaries; session-aware bias refresh.
* Familiarity with LoRA/QLoRA/adapters, distributed training, mixed precision.
* Experience with LLM alignment and evaluation (SFT, DPO/RLHF, tool calling reliability, hallucination/safety checks).
* Proficiency with evaluation frameworks: WER/sWER, Entity-F1, DER/JER, MOSNet/BVCC (TTS), PESQ/STOI (telephony), RTF/latency at P95/P99, and MLflow logging.
* Inference/serving familiarity: vLLM/Triton, quantization, KV-cache, batching, and performance tuning (see the serving sketch after this list).
* Frameworks: ESPnet, SpeechBrain, NeMo, Kaldi/k2, LiveKit, Pipecat, Dify.
* Understanding of telephony speech characteristics, accents, and distortions.
* Collaborative mindset for cross-functional work with ML-ops and QA.
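As a flavor of the telephony augmentation named above, here is a minimal sketch of additive noise at a target SNR plus an 8 kHz narrowband round-trip to approximate PSTN bandwidth loss, assuming torchaudio; the SNR and sample rates are illustrative, and a real pipeline would add RIR convolution, PLC, and jitter simulation on top.

```python
import torch
import torchaudio.functional as AF

def add_noise_at_snr(speech: torch.Tensor, noise: torch.Tensor,
                     snr_db: float) -> torch.Tensor:
    """Mix noise into speech at a target signal-to-noise ratio (in dB)."""
    noise = noise[: speech.numel()].reshape_as(speech)
    speech_power = speech.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp(min=1e-10)
    scale = torch.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def narrowband(speech: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Round-trip through 8 kHz to mimic telephone bandwidth."""
    down = AF.resample(speech, sample_rate, 8000)
    return AF.resample(down, 8000, sample_rate)

wav = torch.randn(16000)     # 1 s of dummy 16 kHz audio
noise = torch.randn(16000)
augmented = narrowband(add_noise_at_snr(wav, noise, snr_db=10.0))
```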
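For the serving bullet, a minimal vLLM sketch; the model name and prompts are illustrative assumptions, and production deployments would layer quantized checkpoints, batch limits, and autoscaling on top.

```python
from vllm import LLM, SamplingParams

# vLLM provides continuous batching and a paged KV-cache out of the box;
# the checkpoint and sampling settings below are illustrative only.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.2, max_tokens=64)

prompts = [
    "Summarize the patient's reported symptoms:",
    "Draft a follow-up reminder for a scheduled visit:",
]
# Requests are batched automatically across the prompt list.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```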
Qualifications
* M.S./Ph.D. in Computer Science, Speech Processing, or a related field.
* 7–10 years of experience in applied ML, with at least 3 in speech or multimodal AI.
* Track record of shipping production ASR/TTS models or inference systems at scale.