Ask any employee to explain their benefits, and you'll likely get a confused shrug. Ask any HR leader if their benefits investment is working, and you'll get an honest "I don't know." For employees, benefits are confusing and overwhelming. For HR and finance leaders, benefits are the second-largest expense, but they lack the visibility to know what's working and what's wasted.
At Avante, we're changing that. We're the first AI-native benefits platform built to turn benefits complexity into clarity. For employees, Avante acts like a personal guide, making benefits simple to understand and use. For leaders, Avante unifies fragmented data and delivers real-time insights so they can improve programs, control costs, and prove ROI.
We're based in Seattle and work four days a week in the office (one day remote). We're growing quickly and are looking for a data engineer who can build and maintain the pipelines and infrastructure that power our AI systems and internal analytics. You'll work across the full data lifecycle, from ingestion and transformation to model deployment and monitoring, while using modern AI-assisted development tools to accelerate your work. This is a hands-on role for someone who cares about building reliable, well-documented systems and wants to work at the intersection of data engineering, MLOps, and applied AI.
What You'll Do:
- Develop and maintain DBT models in Snowflake with proper testing and documentation
- Design and build Python data pipelines that are reliable, testable, and observable
- Orchestrate workflows using Argo Workflows running on Kubernetes
- Transform raw source data into clean, well-modeled datasets for analysts and ML systems
- Manage and improve Azure cloud infrastructure using Terraform
- Work on RAG pipelines and vector stores for AI agents
- Stay current with developments in applied AI and bring relevant innovations to the team
- Use tools like Claude Code and Cursor as part of your daily workflow
- Help establish team practices for effective human-AI collaboration in engineering
What We're Looking For:
- 3+ years of experience building production DBT or other SQL-based transforms with testing, documentation, and CI/CD
- Experience working with Python, with an emphasis on readable, maintainable code
- Familiarity with Kubernetes and container orchestration
- Ability to communicate technical concepts clearly to diverse audiences
- Flexibility and willingness to wear many hats and help out wherever needed
Nice to Have:
- Early-stage startup experience
- Analysis or analytics engineering experience
- Full-stack development experience
- Hands-on MLOps experience (model training pipelines, deployment, monitoring)
- Experience with AI prompt engineering and LLM application development
- Experience managing cloud infrastructure such as Kubernetes clusters
- Familiarity with clinical and medical insurance claims data
Our Tech Stack:
- DBT for transformation workflows and data modeling
- Snowflake for data warehousing
- Python for data loading, tokenization, and manipulation (Pandas, Polars) and integration with our AI platform
- Data orchestration using Argo Workflows DAGs
- Azure cloud infrastructure with Kubernetes
Our Company Values:
- Beat Yesterday – Continuous improvement, innovation, and growth
- Embrace Type 2 Fun – Resilience and positivity in the face of challenges
- Act Like an Owner – Initiative, accountability, and focus on outcomes
- Stay Hungry, Stay Curious – Humility, curiosity, and bold thinking
Interview Process:
1. Initial screen with our recruiters
2. Phone screen with our Head of Engineering
3. On-site interview loop:
   1. Technical interview focused on real-world problem solving (no whiteboard algorithms!)
   2. Behavioral interview to explore your approach to teamwork and challenges
   3. 1:1 with our PM
   4. Final loop with our Head of Engineering
4. Final conversation with our CEO
If you're excited about solving complex data challenges in the benefits space while making a meaningful impact on businesses and their employees, we'd love to hear from you!