Odyssey is an AI lab pioneering general-purpose world models—a new form of multimodal intelligence unlocking entirely new consumer, enterprise, and intelligence applications. World models are the next major frontier in AI, and Odyssey is leading the way with breakthrough models like Odyssey-2 Pro.
We're seeking people obsessed with extracting every last drop of performance from complex systems. We're building inference infrastructure to scale to hundreds of thousands of users within a year, while also working with massive, ever-growing datasets and models in training. Your focus will be ensuring our models deliver exceptional speed, reliability, and scalability in both training and inference, minimizing per-user TFLOPS and training compute cost.
Optimize models that will be used in real time by hundreds of thousands of users.
Design and implement distributed training strategies to reduce training time and resource consumption on large GPU clusters.
Partner with our elite team of ML researchers and engineers to ensure model architectures are highly performant from conception.
Develop sophisticated tools to identify performance bottlenecks and stability issues in both training and serving environments.
Pioneer innovative approaches, frameworks, and system designs that enhance performance metrics across our model development and inference infrastructure.
Have significant autonomy in technical decisions.
Use the latest-generation GPUs.
8+ years of software engineering experience, with significant work in ML performance.
Deep insight into modern machine learning architectures and a natural instinct for performance optimization, particularly in distributed training and inference.
Track record of owning projects end to end.
Problem-solving mindset with the ability to acquire new skills as needed.
Proficiency with PyTorch (or TF/JAX) and Triton, as well as NVIDIA GPU ecosystems and optimization stacks.
Highly metrics-driven.