Institute of Foundation Models

Foundation Model DevOps Engineer

Sunnyvale, CA
About the Institute of Foundation Models 
We are a dedicated research lab for building, understanding, using, and risk-managing foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy. 
As part of our team, you’ll work on the core of cutting-edge foundation model training alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development. You will help develop groundbreaking AI solutions with the potential to reshape entire industries. Your strategic, innovative problem-solving will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers. 

The Role 
We are seeking a Foundation Model DevOps Engineer focused on Operational Stability to serve as the backbone of our AI research infrastructure. 
You will be designing the friction-free environment that allows our models to be built. Your mandate is to build the tooling, release pipelines, and storage policies that remove drag on our research team. You will own the "foundational layer", ensuring that our researchers have immediate, secure, and reliable access to the tools, data, and compute they need. 

Key Responsibilities 

Model Release Engineering 
High-Fidelity Release Management: You own the standard of our public presence. You ensure that every release (weights, code, training logs, data) is reproducible, meticulously documented, and packaged with the polish of a top-tier open-source product. 
CI/CD for Research: Design and implement pipelines that automate the testing and packaging of complex model releases, moving us away from manual handovers to automated verification. 
Repo Administration: Administer the organization’s GitHub Enterprise account, ensuring branch protection and clean versioning practices are enforced across the lab. 
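One small piece of "automated verification" might look like the sketch below: checking released artifacts against a checksum manifest before anything ships. The manifest format and file names here are illustrative assumptions, not the lab's actual release pipeline.

```python
import hashlib
import os

def verify_release(manifest, root):
    """Check each artifact listed in `manifest` ({relpath: sha256 hex})
    against the files under `root`; return a list of failures.

    An empty list means every artifact is present and matches.
    """
    failures = []
    for relpath, expected in manifest.items():
        path = os.path.join(root, relpath)
        if not os.path.exists(path):
            failures.append((relpath, "missing"))
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected:
            failures.append((relpath, "checksum mismatch"))
    return failures
```

In a real pipeline a check like this would run in CI on the packaged release, gating the publish step on an empty failure list.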

Resource Management & Infrastructure Efficiency 
Compute Governance: Manage the efficiency of our large-scale GPU resources. You track utilization to identify idle nodes, "zombie jobs," or inefficient scheduling, ensuring we extract maximum value from our compute clusters. 
Storage Strategy & Hygiene: Manage the lifecycle of petabyte-scale datasets and checkpoint storage. You implement intelligent aging policies to solve the "disk full" bottleneck without risking critical data loss. 
Quota & Access Logic: Proactively manage storage and compute quotas across research teams to prevent resource contention before it blocks a training run. 
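As an illustration of the kind of aging policy this role owns, here is a minimal sketch: expire checkpoints past a retention window while always protecting the most recent few. The retention window and the `keep_latest` rule are assumptions for the example, not lab policy.

```python
def select_expired(checkpoints, now, max_age_days=30, keep_latest=3):
    """Return checkpoint paths that are safe to delete: older than
    `max_age_days`, excluding the `keep_latest` most recent ones.

    `checkpoints` is a list of (path, mtime_epoch_seconds) tuples.
    """
    # Sort newest first so the protected set is a simple slice.
    ordered = sorted(checkpoints, key=lambda c: c[1], reverse=True)
    protected = {path for path, _ in ordered[:keep_latest]}
    cutoff = now - max_age_days * 86400
    return [path for path, mtime in ordered
            if path not in protected and mtime < cutoff]
```

Keeping the selection logic pure like this makes it trivial to dry-run and unit-test before any deletion actually touches petabyte-scale storage.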

Research Tooling & Orchestration 
Experiment Management Systems: Build and maintain the internal CLI tools and dashboards that allow researchers to launch, track, and organize jobs across thousands of GPUs. 
Resource Telemetry: Set up real-time monitoring for interconnect throughput, GPU memory, and file system latency to catch performance degradation instantly. 
Job Orchestration: Work closely with infrastructure teams to optimize how we run synthetic data pipelines and large-scale evaluations, ensuring our tooling scales with our compute. 

Research Environment Provisioning 
Automated Workspace Setup: Build the scripts and tooling that instantly provision compute environments, permissions, and storage namespaces for researchers (automating away the manual work). 
Cluster Access Architecture: Streamline SSH and node access protocols to ensure friction-free entry to our massive-scale compute clusters while maintaining security boundaries. 
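A toy sketch of what automated workspace provisioning might look like: derive a plan of directories and permission modes for a new researcher, then apply it idempotently. The directory layout and modes are illustrative assumptions, not the lab's actual namespace design (a real system would also set quotas and ACLs).

```python
import os

# Illustrative namespace layout (an assumption for this sketch).
WORKSPACE_DIRS = {
    "code": 0o750,         # group-readable source checkouts
    "scratch": 0o770,      # shared fast scratch space
    "checkpoints": 0o750,  # model checkpoints
    "private": 0o700,      # per-user only
}

def provision_plan(root, username):
    """Return (path, mode) pairs to create for a new researcher workspace."""
    base = os.path.join(root, username)
    plan = [(base, 0o750)]
    plan.extend((os.path.join(base, name), mode)
                for name, mode in WORKSPACE_DIRS.items())
    return plan

def apply_plan(plan):
    """Create each directory with its mode; safe to re-run."""
    for path, mode in plan:
        os.makedirs(path, exist_ok=True)
        os.chmod(path, mode)
```

Separating the plan from its application makes the tooling auditable: the plan can be printed for review or diffed against the existing filesystem before anything is created.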

Academic Qualifications 
A bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent practical experience. 

Professional Experience - Minimum (The Bar) 
3+ years of experience in DevOps, Release Engineering, or ML Engineering, specifically within AI/ML or HPC environments. 
Foundation Model Fluency: You understand the lifecycle of training large models (LLMs or Diffusion). You know what a checkpoint is, you understand the difference between pre-training and inference, and you are familiar with the artifacts required for a model release. 
Linux/Unix Fluency: You live in the command line. You have deep expertise in bash scripting, file system permissions, and SSH configuration. 
Version Control Admin: Expert-level administration of GitHub Enterprise (managing teams, API limits, and repository security). 
Scripting & Automation: Proficiency in Python or Bash to automate repetitive administrative tasks. 

Professional Experience - Preferred (The Fit) 
"Gold Standard" Open Source: Experience contributing to or managing high-profile open-source releases (Hugging Face libraries, model families, datasets). 
HPC Schedulers: Deep understanding of Slurm job scheduling and troubleshooting. 
Cloud Storage: Familiarity with cloud storage buckets (S3/GCP) and efficient data transfer tools.
