Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, delivering low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and are looking for exceptional people to help us do it.
Our Data team powers Liquid Foundation Models across pre-training, vision, audio, and emerging modalities. Public data sources are plateauing. Model performance increasingly depends on purpose-built datasets. We need ML-minded engineers who can collect, filter, and synthesize high-quality data at scale.
We treat data as a research problem, not an infrastructure problem. Our engineers run experiments, design ablations, and measure how data decisions move model quality. We will match you to the team where you can grow the fastest and have the most impact: pre-training, post-training RL, vision-language, audio, or multimodal.
While San Francisco and Boston are preferred, we are open to other locations.
We need someone who:
Thinks like a researcher, ships like an engineer: We need people who form hypotheses, run experiments, and measure results. Our engineers understand the underlying research, and our researchers ship production systems.
Learns fast and adapts: We work across modalities that evolve weekly. We need people who pick up new domains quickly and thrive with ambiguity.
Obsesses over data quality: We believe data quality is non-negotiable. Filtering, deduplication, augmentation, and evaluation are first-class concerns for our team, not afterthoughts.
Solves problems independently: Our data engineers sit within training groups (pre-training and multimodal). We collaborate closely, but we expect ownership and self-direction.
What you'll do:
Build and maintain data processing, filtering, and selection pipelines at scale
Create pipelines for pre-training, mid-training, SFT, and preference optimization datasets
Design synthetic data generation systems using LLMs, structured prompting, and domain-specific generators
Design and run evaluations and ablations to measure the impact of datasets on model performance
Monitor public datasets across text, vision, and audio domains
Collaborate with pre-training, vision, and audio teams on modality-specific data needs
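To give a flavor of the pipeline work above, here is a minimal sketch of one common filtering step: exact deduplication of text records by content hash. This is a simplified illustration, not our production stack, and the `dedup_exact` helper is hypothetical:

```python
import hashlib

def dedup_exact(records):
    """Drop records whose normalized text has been seen before.

    Normalizes each record (strip whitespace, lowercase), hashes it with
    SHA-256, and keeps only the first record for each distinct hash.
    """
    seen = set()
    kept = []
    for text in records:
        key = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept
```

Production pipelines layer many such passes (near-duplicate detection, quality classifiers, language ID) on top of this kind of primitive, and each pass is measured for its effect on downstream model quality.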
Must-have:
Strong Python skills with the ability to quickly comprehend problems and translate them into clean, working code
Solid ML fundamentals: experience training, evaluating, and iterating on models (PyTorch preferred)
Track record of learning new technical domains quickly
3+ years of relevant experience with an M.S., 1+ year with a Ph.D., or 5+ years with a B.S.
Nice-to-have:
Experience with synthetic data generation, data curation, or ML evaluation (designing evals, benchmarking, measuring data and model quality)
Experience with LLMs, VLMs, computer vision, or audio data pipelines
Open-source contributions or publications at NeurIPS, ICML, ICLR, or CVPR
Within your first year:
You own a critical data pipeline end-to-end for one of our modalities
You have built or improved data systems that measurably moved model performance
You have identified and integrated at least one external dataset that moved the needle
What we offer:
Impact at scale: Your pipelines directly determine model quality across all of Liquid's foundation models.
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year