Odyssey is an AI lab pioneering general-purpose world models—a new form of multimodal intelligence unlocking entirely new consumer, enterprise, and intelligence applications. World models are the next major frontier in AI, and Odyssey is leading the way with breakthrough models like Odyssey-2 Pro.
We are looking for an engineer who thrives on building the engines that make groundbreaking research and products possible. You think in systems, love performance, and get energy from turning theoretical bottlenecks into efficient, working systems. You’re excited to design and support infrastructure not just for scale, but for speed, creativity, and discovery. You want to build the compute substrate that lets Odyssey’s world models imagine, act, and interact in real time.
Develop and operate our low-latency model inference platform, ensuring high availability, scalability, and efficient resource utilization for Odyssey’s world models.
Engineer and scale our core data processing infrastructure (e.g., Flyte, Ray on Kubernetes) to handle petabyte-scale datasets.
Design, build, and maintain our large-scale GPU training clusters for deep learning, focusing on usability, high throughput, and reliability.
Automate infrastructure provisioning, configuration, monitoring, and alerting using Infrastructure as Code (IaC) principles.
Drive performance tuning, cost optimization, and reliability improvements across the entire stack.
Collaborate closely with researchers and product developers to understand their requirements, optimize their workflows, and improve platform usability.
Motivated by building for the frontier: you want to shape the compute and infrastructure foundation of a lab redefining how people create and interact with media.
Strong programming skills (e.g., Python, Go, or similar) and a solid understanding of software engineering best practices.
Deep, hands-on experience with containerization (e.g., Docker), container orchestration (e.g., Kubernetes), and Infrastructure as Code (e.g., Terraform).
Proven experience building and managing large-scale distributed systems with GPU compute workloads (e.g., compute platforms, data pipelines, or high-availability services).
Experience designing infrastructure for ML workloads where performance, parallelism, and data movement are critical.
A collaborative mindset and excellent communication skills, with a passion for building developer-friendly platforms.