OutcomesAI is a healthcare technology company building an AI-enabled nursing platform designed to augment clinical teams, automate routine workflows, and safely scale nursing capacity.
Our solution combines AI voice agents and licensed nurses to handle patient communication, symptom triage, remote monitoring, and post-acute care — reducing administrative burden and enabling clinicians to focus on direct patient care.
Our core product suite includes:
● Glia Voice Agents – multimodal conversational agents capable of answering patient calls, triaging symptoms using evidence-based protocols (e.g., Schmitt-Thompson), scheduling visits, and delivering education and follow-ups.
● Glia Productivity Agents – AI copilots for nurses that automate charting, scribing, and clinical decision support by integrating directly into EHR systems such as Epic and Athena.
● AI-Enabled Nursing Services – a hybrid care delivery model where AI and licensed nurses work together to deliver virtual triage, remote patient monitoring, and specialty patient support programs (e.g., oncology, dementia, dialysis).
Our AI infrastructure leverages multimodal foundation models — incorporating automatic speech recognition (ASR), natural language understanding, and text-to-speech (TTS) — fine-tuned for healthcare environments to ensure safety, empathy, and clinical accuracy. All models operate within a HIPAA-compliant and SOC 2–certified framework. OutcomesAI partners with leading health systems and virtual care organizations to deploy and validate these capabilities at scale. Our goal is to create the world’s first AI + nurse hybrid workforce, improving access, safety, and efficiency across the continuum of care.
You will contribute to the training, evaluation, and integration of speech models in OutcomesAI’s voice intelligence stack. You’ll work closely with the Tech Lead to develop datasets, fine-tune models, and benchmark performance across clinical domains such as remote patient monitoring (RPM) and triage.
What You’ll Do
● Prepare and maintain synthetic and real training datasets.
● Own STT/TTS/speech-LLM model training end to end, from model selection through fine-tuning to deployment.
● Build evaluation suites for clinical applications (RPM, triage, inbound/outbound calls).
● Build scripts for data selection, augmentation (noise, codec, jitter), and corpus curation.
● Fine-tune models using CTC/RNN-T or adapter-based recipes on multi-GPU systems.
● Implement evaluation pipelines to measure WER, entity F1, and latency, and automate MLflow logging (see the sketch after this list).
● Experiment with bias-aware training and context-list conditioning.
● Collaborate with backend and DevOps teams to integrate trained models into inference stacks.
● Support creation of context-biasing APIs and LM rescoring paths.
● Assist in maintaining benchmarks against commercial baselines (Deepgram, Whisper, etc.).
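As a concrete (and deliberately simplified) illustration of the evaluation work above, the Python sketch below computes WER with the open-source jiwer package, a micro-averaged entity F1, and P95/P99 latency, and logs everything to MLflow. The function names and data layout are illustrative assumptions, not OutcomesAI’s actual pipeline:

```python
# Minimal sketch of an offline ASR evaluation pass with MLflow logging.
# Assumes reference/hypothesis transcripts, per-utterance entity sets,
# and per-request latencies have already been collected.
import mlflow                    # pip install mlflow jiwer numpy
import numpy as np
from jiwer import wer


def entity_f1(ref_entities, hyp_entities):
    """Micro-averaged F1 over per-utterance entity sets (e.g. drug names, symptoms)."""
    tp = fp = fn = 0
    for ref, hyp in zip(ref_entities, hyp_entities):
        tp += len(ref & hyp)
        fp += len(hyp - ref)
        fn += len(ref - hyp)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0


def evaluate(run_name, refs, hyps, ref_ents, hyp_ents, latencies_ms):
    """Compute WER, entity F1, and tail latency, then log one MLflow run."""
    with mlflow.start_run(run_name=run_name):
        mlflow.log_metric("wer", wer(refs, hyps))
        mlflow.log_metric("entity_f1", entity_f1(ref_ents, hyp_ents))
        mlflow.log_metric("latency_p95_ms", float(np.percentile(latencies_ms, 95)))
        mlflow.log_metric("latency_p99_ms", float(np.percentile(latencies_ms, 99)))
```

In practice the same run would also record the model checkpoint, decoding config, and test-set version as MLflow artifacts, so that benchmark comparisons against commercial baselines stay reproducible.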
Desired Skills
● Strong programming in Python (PyTorch, Hugging Face, NeMo, ESPnet).
● Practical experience in audio data processing, augmentation, and ASR fine-tuning.
● Training: SpecAugment, speed perturbation, noise/RIRs, and codec+PLC+jitter simulations for PSTN/WebRTC (see the sketch after this list).
● Streaming ASR: transducer/Zipformer models with chunked attention, frame-synchronous beam search, and endpointing (VAD end-of-utterance) tuning.
● Context biasing: WFST boosts plus neural rescoring; patient/name dictionaries; session-aware bias refresh.
● Familiarity with LoRA/adapters, distributed training, and mixed precision.
● Proficiency with evaluation metrics and tooling: WER/sWER, entity F1, DER/JER, MOSNet/BVCC (TTS), PESQ/STOI (telephony), RTF and latency at P95/P99, and MLflow logging.
● Frameworks: ESPnet, SpeechBrain, NeMo, Kaldi/k2, LiveKit, Pipecat, Diffy.
● Understanding of telephony speech characteristics, accents, and distortions.
● Collaborative mindset for cross-functional work with MLOps and QA.
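For a flavor of the telephony augmentation skills listed above, here is a minimal torchaudio sketch of one such transform: narrowband resampling, additive noise at a random SNR, and a mu-law round trip as a crude codec stand-in. The function name and SNR range are illustrative assumptions, and RIR convolution, PLC, and jitter simulation are left out for brevity:

```python
# Hedged sketch of one telephony-style augmentation step with torchaudio.
# Assumes the noise clip is at least as long as the speech clip.
import torch
import torchaudio.functional as F


def telephony_augment(wav, sr, noise, snr_db_range=(5.0, 20.0)):
    """wav, noise: (channels, samples) float tensors at the same sample rate."""
    # 1. Band-limit to 8 kHz, the PSTN narrowband rate.
    wav = F.resample(wav, orig_freq=sr, new_freq=8000)
    noise = F.resample(noise, orig_freq=sr, new_freq=8000)

    # 2. Mix in noise at a randomly drawn SNR.
    snr_db = torch.empty(1).uniform_(*snr_db_range)
    noise = noise[..., : wav.shape[-1]]
    sig_pow = wav.pow(2).mean()
    noise_pow = noise.pow(2).mean().clamp_min(1e-10)
    scale = torch.sqrt(sig_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
    wav = wav + scale * noise

    # 3. Mu-law encode/decode round trip as a cheap codec distortion.
    wav = wav.clamp(-1.0, 1.0)
    wav = F.mu_law_decoding(F.mu_law_encoding(wav, 256), 256)
    return wav, 8000
```

A production pipeline would typically compose several such transforms (SpecAugment on features, RIRs, packet-loss and jitter simulation) and sample them per utterance during training.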
Qualifications
● B.Tech / M.Tech / M.S. in Computer Science, AI, or a related field.
● 4–7 years in applied ML, with ≥2 years focused on speech recognition or synthesis.
● Experience with model deployment workflows preferred.