Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
Our Data team builds the pipelines that power Liquid Foundation Models across pre-training, vision, audio, and emerging modalities. As public data sources plateau and model performance increasingly depends on purpose-built datasets, synthetic data generation has become central to our strategy. We need engineers who can collect, filter, and synthesize high-quality datasets at scale for pretraining, midtraining, SFT, and preference optimization.
Depending on your background, you'll work on foundation model pre-training data, post-training RL, vision-language datasets, audio pipelines, RL environments, or multimodal data challenges. Across all of these, you'll design and operate synthetic data pipelines that close gaps where organic data falls short. We'll match you to the team where you'll have the most impact.
While San Francisco and Boston are preferred, we are open to other locations.
We need someone who:
Builds production data pipelines: Our team processes web-scale datasets with trillions of tokens. We need engineers who build the systems that sustain our models' scaling laws.
Understands data quality: Filtering, deduplication, augmentation, bias detection, and synthetic generation. You know that data quality is non-negotiable (a toy filter-and-dedup sketch follows this list).
Stays current: Public datasets drop constantly. You watch HuggingFace and arXiv, and you know when something matters.
Solves problems independently: Our data teams are embedded in their respective training groups, such as pre-training or multimodal understanding. You will work closely with your team, but you are expected to solve problems on your own.
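To give a flavor of the data-quality work: below is a minimal, illustrative Python sketch of a quality-filter and exact-deduplication pass. Every name and threshold here is hypothetical and simplified for illustration; our production pipelines are distributed and use far richer filters and fuzzy dedup.

```python
# Toy quality-filter + exact-dedup pass. Function names and thresholds are
# hypothetical examples, not Liquid production code.
import hashlib
from typing import Iterable, Iterator

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial variants hash identically."""
    return " ".join(text.lower().split())

def passes_quality_filter(text: str, min_words: int = 50,
                          max_symbol_ratio: float = 0.1) -> bool:
    """Cheap heuristics: drop very short documents and symbol-heavy noise."""
    if len(text.split()) < min_words:
        return False
    symbols = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    return symbols / max(len(text), 1) <= max_symbol_ratio

def dedup_and_filter(docs: Iterable[str]) -> Iterator[str]:
    """Yield documents that pass the filters and have not been seen before."""
    seen: set[bytes] = set()
    for doc in docs:
        if not passes_quality_filter(doc):
            continue
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).digest()
        if digest in seen:
            continue
        seen.add(digest)
        yield doc
```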
What you'll do:
Build and maintain data processing, cleaning, filtering, and selection pipelines at scale
Create and maintain pipelines for pretraining, midtraining, SFT, and preference optimization datasets
Monitor and evaluate public datasets across text, vision, and audio domains
Create synthetic data generation and augmentation pipelines (a minimal generation-loop sketch appears after this list)
Build crawlers to gather datasets from the web where public data is lacking
Run ablations to assess dataset quality and inform training decisions
Collaborate with pre-training, vision, and audio teams on modality-specific data needs
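As a concrete, hedged example of the synthetic-data work: here is a sketch of a simple SFT-data generation loop. `call_teacher` and `score_response` are hypothetical stand-ins for a teacher-model endpoint and a quality scorer; they are not real Liquid APIs, and real pipelines use much richer prompt templating and filtering.

```python
# Hypothetical synthetic SFT-data generation loop; all names are illustrative.
import json
import random
from typing import Callable

def synthesize_sft_examples(
    seed_prompts: list[str],
    call_teacher: Callable[[str], str],       # hypothetical teacher-model call
    score_response: Callable[[str, str], float],  # hypothetical quality scorer
    n_variants: int = 3,
    min_score: float = 0.7,
) -> list[dict]:
    """Expand seed prompts into (prompt, response) pairs, keeping only
    responses that clear a quality threshold."""
    examples = []
    for seed in seed_prompts:
        for _ in range(n_variants):
            # Light prompt perturbation; real pipelines use richer templating.
            prompt = f"{seed} (variant {random.randint(0, 9999)})"
            response = call_teacher(prompt)
            if score_response(prompt, response) >= min_score:
                examples.append({"prompt": prompt, "response": response})
    return examples

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    demo = synthesize_sft_examples(
        ["Explain gradient checkpointing."],
        call_teacher=lambda p: f"Answer to: {p}",
        score_response=lambda p, r: 0.9,
    )
    print(json.dumps(demo, indent=2))
```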
Must-have:
5+ years of relevant work experience with a B.S., 3+ years with an M.S., or 1+ year with a Ph.D.
Expertise in data curation, cleaning, augmentation, and synthetic data generation
Experience with LLMs and ML frameworks (PyTorch)
Strong Python skills with emphasis on clean, scalable code
Nice-to-have:
Experience with VLMs, computer vision, or audio data pipelines
Distributed training familiarity (DeepSpeed, FSDP, Megatron-LM)
First-author publications in top ML conferences (NeurIPS, ICML, ICLR, CVPR)
Contributions to open-source projects
What success looks like:
You own a critical data pipeline end-to-end for one of our modalities
The data quality improvements you shipped have measurably improved model performance
You've identified and integrated at least one external dataset that moved the needle
What we offer:
Impact at scale: Your pipelines directly determine model quality across all of Liquid's foundation models.
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year