OutcomesAI is a healthcare technology company building an AI-enabled nursing platform designed to augment clinical teams, automate routine workflows, and safely scale nursing capacity.
Our solution combines AI voice agents and licensed nurses to handle patient communication, symptom triage, remote monitoring, and post-acute care — reducing administrative burden and enabling clinicians to focus on direct patient care.
Our core product suite includes:
● Glia Voice Agents – multimodal conversational agents capable of answering patient calls, triaging symptoms using evidence-based protocols (e.g., Schmitt-Thompson), scheduling visits, and delivering education and follow-ups.
● Glia Productivity Agents – AI copilots for nurses that automate charting, scribing, and clinical decision support by integrating directly into EHR systems such as Epic and Athena.
● AI-Enabled Nursing Services – a hybrid care delivery model where AI and licensed nurses work together to deliver virtual triage, remote patient monitoring, and specialty patient support programs (e.g., oncology, dementia, dialysis).
Our AI infrastructure leverages multimodal foundation models — incorporating speech recognition (ASR), natural language understanding, and text-to-speech (TTS) — fine-tuned for healthcare environments to ensure safety, empathy, and clinical accuracy. All models operate within a HIPAA-compliant and SOC 2–certified framework. OutcomesAI partners with leading health systems and virtual care organizations to deploy and validate these capabilities at scale. Our goal is to create the world’s first AI + nurse hybrid workforce, improving access, safety, and efficiency across the continuum of care.
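To make the voice pipeline concrete, here is a minimal, hypothetical sketch of the ASR → understanding → TTS loop described above. All three stage functions are illustrative placeholders, not OutcomesAI production APIs.

```python
# A minimal, hypothetical sketch of the ASR -> understanding -> TTS loop.
# The three stage functions are stand-ins for real models, not
# OutcomesAI production APIs.

def transcribe(audio: bytes) -> str:
    # Placeholder ASR stage: a real system would stream audio to a
    # speech-recognition model and return the running transcript.
    return "i have had a fever since yesterday"

def generate_reply(transcript: str) -> str:
    # Placeholder understanding stage: a real system would apply triage
    # logic (e.g., Schmitt-Thompson protocols) to the transcript.
    return "I'm sorry to hear that. How high has your temperature been?"

def synthesize(reply: str) -> bytes:
    # Placeholder TTS stage: a real system would return synthesized audio.
    return reply.encode("utf-8")

def handle_patient_turn(audio: bytes) -> bytes:
    return synthesize(generate_reply(transcribe(audio)))
```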
About the Role
Own the infrastructure and pipelines for integrating trained ASR/TTS/Speech-LLM models into production. Focus on scalable serving, GPU optimization, monitoring, and continuous improvement of inference latency and reliability.
What You’ll Do
● Containerize and deploy speech models using Triton Inference Server with TensorRT/FP16 optimizations (a hedged client sketch follows this list).
● Develop and manage CI/CD pipelines for model promotion (staging → production).
● Configure autoscaling on Kubernetes (GPU pools) based on active calls or streaming sessions.
● Build health and observability dashboards: latency, token delay, WER drift, and SNR/packet-loss monitors (see the metrics sketch below).
● Integrate LM bias APIs, failover logic, and model switchers for fallback to larger or cloud-hosted models (see the fallback sketch below).
● Implement on-device or edge inference paths for low-latency scenarios.
● Collaborate with the AI team to expose APIs for context biasing, rescoring, and diagnostics.
● Optimize GPU/CPU utilization, cost, and memory footprint for concurrent ASR/TTS/Speech-LLM workloads.
● Maintain data and model versioning pipelines with MLflow, DVC, or internal registries (see the promotion sketch below).
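As one illustration of the serving work, here is a minimal sketch of calling a Triton-hosted speech model via its Python gRPC client. The model name (streaming_asr), tensor names (AUDIO, TRANSCRIPT), dtypes, and shapes are assumptions for illustration, not an actual deployment contract.

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Sketch of a single (non-streaming) inference call against a
# Triton-hosted ASR model. Model name, tensor names, dtypes, and shapes
# below are illustrative assumptions, not the real deployment contract.
client = grpcclient.InferenceServerClient(url="localhost:8001")

audio = np.random.rand(1, 16000).astype(np.float32)  # 1 s of 16 kHz audio
infer_input = grpcclient.InferInput("AUDIO", list(audio.shape), "FP32")
infer_input.set_data_from_numpy(audio)

result = client.infer(
    model_name="streaming_asr",
    inputs=[infer_input],
    outputs=[grpcclient.InferRequestedOutput("TRANSCRIPT")],
)
print(result.as_numpy("TRANSCRIPT"))
```

Production streaming ASR would use Triton's bidirectional streaming API rather than one-shot calls; this sketch only shows the request/response shape of the client library.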
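For the observability and autoscaling items, a sketch using the prometheus_client library: it exposes a latency histogram and an active-sessions gauge that Grafana dashboards, alert rules, or a custom-metrics Kubernetes autoscaler could consume. Metric names and bucket boundaries are illustrative assumptions.

```python
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server

# Illustrative metric names; real dashboards would also track WER drift,
# token delay, and SNR/packet loss alongside these.
INFER_LATENCY = Histogram(
    "speech_inference_latency_seconds",
    "End-to-end inference latency per request",
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.0),
)
ACTIVE_SESSIONS = Gauge(
    "active_streaming_sessions",
    "Streaming ASR/TTS sessions currently open",
)

def handle_request() -> None:
    ACTIVE_SESSIONS.inc()
    try:
        with INFER_LATENCY.time():  # records the elapsed time on exit
            time.sleep(random.uniform(0.05, 0.3))  # stand-in for inference
    finally:
        ACTIVE_SESSIONS.dec()

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        handle_request()
```

A Kubernetes HPA wired to a custom-metrics adapter could then scale the GPU pool on active_streaming_sessions, matching the autoscaling bullet above.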
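The failover and model-switcher work can be pictured as a simple cascade: try the low-latency local GPU model first, then degrade to a larger cloud-hosted model on error. Both transcriber callables here are hypothetical stand-ins.

```python
from typing import Callable

# Hypothetical fallback cascade for the model-switcher responsibility:
# prefer the low-latency local model, fall back to a larger cloud model.
def transcribe_with_fallback(
    audio: bytes,
    local_asr: Callable[[bytes], str],
    cloud_asr: Callable[[bytes], str],
) -> str:
    try:
        return local_asr(audio)
    except Exception:
        # On any local failure (OOM, timeout, server restart), degrade
        # gracefully to the slower but more capable cloud model.
        return cloud_asr(audio)
```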
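For the promotion pipeline (staging → production), a minimal sketch against the MLflow model registry; the registered-model name streaming-asr is an assumption, and newer MLflow releases favor model aliases over the stage API shown here. In CI/CD, this step would typically run only after validation gates pass.

```python
from mlflow.tracking import MlflowClient

# Sketch of promoting the newest Staging model to Production in the
# MLflow registry. The registered-model name is an illustrative
# assumption, and at least one Staging version is assumed to exist.
client = MlflowClient()
name = "streaming-asr"

latest_staging = client.get_latest_versions(name, stages=["Staging"])[0]
client.transition_model_version_stage(
    name=name,
    version=latest_staging.version,
    stage="Production",
    archive_existing_versions=True,  # retire the previous Production model
)
print(f"Promoted {name} v{latest_staging.version} to Production")
```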
Desired Skills
● Experience with Triton, TensorRT, Docker, Kubernetes, and GPU scheduling.
● Familiarity with speech inference (streaming ASR, TTS pipelines).
● Proficiency in Python, Bash, and cloud services (AWS/GCP/Azure).
● Understanding of observability stacks (Prometheus, Grafana, ELK).
● Knowledge of DevSecOps, access policies, and PHI-safe environments.
● Interest in inference optimization, mixed precision, and quantization.
Qualifications
● B.Tech / M.Tech in Computer Science or a related field.
● 4–6 years of experience in backend or MLOps engineering, including at least 1–2 years with GPU inference pipelines.
● Proven experience deploying models to production environments with measurable latency gains.