Our mission at Oura is to empower every person to own their inner potential. Our award-winning products help our global community gain a deeper knowledge of their readiness, activity, and sleep quality by using their Oura Ring and its connected app. We've helped millions of people understand and improve their health by providing daily insights and practical steps to inspire healthy lifestyles.
Empowering the world starts with living our values and empowering our team. As a quickly growing company focused on helping people live healthier and happier lives, we ensure that our team members have what they need to do their best work — both in and out of the office.
Oura’s engineering organization consists of talented developers distributed across the EU and US. For day-to-day feature work, our engineers are organized into smaller cross-functional teams. Our teams have a great deal of autonomy and are responsible for the design, development, and architecture of their features. Teams take full ownership of their code and handle everything from concepting, design, and implementation to release, maintenance, and bug fixes.
About the role
The Health Intelligence team is at the forefront of integrating modern AI and LLMs into the Oura experience, transforming how members interact with and learn from their data. We are building the next generation of AI-powered health guidance at Oura, blending traditional ML with modern LLMs, reasoning systems, and robust evaluation: not as “chatbots with vibes,” but as rigorously evaluated components that explain decisions, surface trade-offs, and adapt member journeys over months and years.
As a Senior AI Scientist, you will:
- Design and validate the scientific backbone of Health Intelligence: models and frameworks that detect user state, forecast trajectories, and select interventions that are both effective and safe.
- Treat behavior change and health impact, not clicks or short-term engagement, as first-class success metrics.
- Work end-to-end: from framing hypotheses and building models to partnering with engineering on productionization and establishing robust evaluation strategies.
This role is ideal for someone who wants to combine data science, causal reasoning, and product impact, owning the science behind real user-facing features in a domain where uncertainty, safety, and long-term outcomes matter.
What you will do
You don’t need to do all of these on day one, but these are the kinds of problems you’ll own:
- Model user state, readiness, and constraints: Define representations for where a user is today across physiology, behavior, context, and constraints. Build models that estimate readiness for change, likely trajectories, and when interventions are feasible vs. when to back off.
- Design and evaluate intervention strategies: Formalize strategies for when to nudge, what to recommend, and when to stay quiet across AI-powered guidance surfaces and programs. Explore personalization across goals, past behavior, and health context while explicitly modeling safety, uncertainty, and acceptable risk.
- Build measurement frameworks for behavior and outcomes: Define proximal metrics and longer-term outcome proxies that reflect real behavior change and health impact, not just short-term engagement. Design experimentation and evaluation strategies to understand what actually works in practice.
- Own causal and longitudinal reasoning in the system: Apply causal inference, uplift modeling, or related approaches to separate correlation from causation in product and policy decisions. Work with longitudinal time-series and user behavior data to understand trajectories, relapse, and sustained change over weeks, months, and years.
- Partner with platform and personalization owners: Collaborate with owners of knowledge, ranking/prioritization, and journey/orchestration systems to embed scientific abstractions directly into system design. Contribute to the interpretation layer by ensuring explanations for users are faithful to the underlying evidence and uncertainty, including where LLMs or other models are used to surface content.
- Establish robust evaluation and guardrails for AI workflows: Help define evaluation pipelines and rubrics for AI- and LLM-powered guidance, including safety, quality, and fairness checks. Work with engineering to integrate these into production workflows so new models and policies can be tested, monitored, and iterated safely.
- Collaborate cross-functionally and communicate clearly: Partner with product, engineering, data, content, and research to shape problem definitions, constraints, and success metrics. Communicate assumptions, trade-offs, and uncertainty clearly to both technical and non-technical stakeholders, influencing decisions in a fast-moving, ambiguous domain.
Requirements
We’d love to hear from you if you have:
- Several years of experience in data science or a similar role working on ML-powered products, with a track record of shipping your work into real, operational systems rather than keeping it in offline analyses or research prototypes.
- Hands-on experience with causal inference, uplift modeling, treatment effect estimation, or counterfactual evaluation, and the ability to clearly distinguish correlation from causation and choose appropriate methods under real-world constraints.
- Experience working with time-series or longitudinal user behavior data, including framing problems over weeks and months rather than single sessions, and familiarity with modeling approaches for sequences, trajectories, or stateful processes.
- Practical experience designing and analyzing experiments, defining hypotheses and metrics, slicing results, and interpreting findings with appropriate caution while thinking in terms of proximal metrics, leading indicators, and longer-term outcomes.
- Strong proficiency in a scientific programming language such as Python, including data analysis and modeling libraries, as well as experience with modern data tooling in collaboration with data and engineering partners.
- Experience working in product-facing teams with engineers, PMs, and designers; excellent communication skills for explaining complex methods, uncertainty, and trade-offs to diverse audiences; and comfort operating in a fast-changing AI/LLM domain, balancing rigor with pragmatism while keeping member safety and value at the center.
Would be a benefit
You don’t need all of these, but they’re strong signals of great fit:
- Experience building or analyzing intervention or recommendation systems where the primary goal is to change behavior or outcomes, not just to predict or rank.
- Exposure to digital health, wearables, behavior change, or related domains, and genuine interest in long-term engagement and outcomes.
- Familiarity with personalization, recommender systems, or decision systems, including concepts like multi-objective optimization, constraints, and guardrails for safety and fairness.
- Experience with AI / LLM-backed products and evaluation workflows, such as LLM-as-judge, rubric-based evaluation, safety/red-teaming, and offline vs. online assessment of model quality, latency, and cost.
- Prior experience mentoring other scientists or data practitioners, or shaping best practices around experimentation, causal analysis, and evaluation within a team.
- Experience working asynchronously across countries and time zones in cross-functional product teams.
What we offer
- Competitive salary
- Lunch benefit
- Wellness benefit
- Flexible working hours
- Collaborative, smart teammates
- An Oura ring of your own
- Wellness Time Off
If this sounds like the next step for you, please send us your application and CV as soon as possible. We review applications on a rolling basis and aim to begin interviews as soon as suitable candidates are found.
Oura is proud to be an equal opportunity workplace. We celebrate diversity and are committed to creating an inclusive environment for all employees. Individuals seeking employment at Oura are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws. We will not tolerate discrimination or harassment based on any of these characteristics.
We will work to ensure individuals with disabilities are provided reasonable accommodation to participate in the interview process, to perform essential job functions, and to receive other benefits and privileges of employment.
Disclaimer: Beware of fake job offers!
We’ve been alerted to scammers posing as ŌURA recruiters, especially for remote roles. Please note:
- Our jobs are listed only on the ŌURA Careers page and trusted job boards.
- We will never ask for personal information like ID or payment for equipment upfront.
- Official offers are sent through Docusign after a verbal offer, not via text or email.
Stay cautious and protect your personal details.
To all recruitment agencies: Oura does not accept agency resumes. Please do not forward resumes to our jobs alias, Oura employees, or any other organization's location. Oura is not responsible for any fees related to unsolicited resumes.