Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
Our Edge Inference team compiles Liquid Foundation Models into optimized machine code that runs on resource-constrained devices: phones, laptops, Raspberry Pis, and watches. We are core contributors to llama.cpp and build the infrastructure that makes efficient on-device AI possible. You will work directly with the technical lead on problems that require deep understanding of both ML architectures and hardware constraints. This is high-ownership work where your code ships to production and directly impacts model performance on real devices.
While San Francisco and Boston are preferred, we are open to other locations.
We need someone who:
Works autonomously: Given a target device and performance goal, you figure out how to get there without hand-holding. You diagnose bottlenecks, prototype solutions, and iterate until you hit the target.
Thinks at the hardware level: You understand cache hierarchies, memory access patterns, and instruction-level optimization. You can reason about why code is slow before reaching for a profiler.
Bridges ML and systems: You understand how neural networks work mathematically (matrix operations, attention mechanisms, quantization effects) and can translate that understanding into optimized implementations.
Ships production code: Our work goes upstream to open-source projects and deploys to customer devices. You write code that others can maintain and extend.
What you will do:
Implement and optimize inference kernels for CPU, NPU, and GPU architectures across diverse edge hardware
Develop quantization strategies (INT4, INT8, FP8) that maximize compression while preserving model quality under strict memory budgets
Contribute to llama.cpp and other open-source inference frameworks, including new model architectures (audio, vision)
Profile and optimize end-to-end inference pipelines to achieve sub-100ms time-to-first-token on target devices
Collaborate with ML researchers to understand model architectures and identify optimization opportunities specific to Liquid Foundation Models
Must-have:
5+ years of experience in systems programming with strong C++ proficiency
Embedded software engineering experience or work on resource-constrained systems
Understanding of ML fundamentals at the linear algebra level (how matrix operations, attention, and quantization work)
Experience with hardware architecture concepts: cache hierarchies, memory bandwidth, SIMD/vectorization
Nice-to-have:
Contributions to llama.cpp, ExecuTorch, or similar inference frameworks
Experience with Rust for systems programming
Background in custom accelerator development (TPU, NPU) or experience on accelerator teams at companies such as SambaNova, Cerebras, Groq, Google, or Amazon
Quantitative degree (mathematics, physics, or similar) combined with engineering experience
What success in this role looks like:
Ship optimizations that achieve measurable latency or memory improvements on at least one target edge device class
Successfully upstream at least one significant contribution to llama.cpp (new architecture support, kernel optimization, or quantization improvement)
Own a major workstream end-to-end, such as support for a new model architecture, a quantization pipeline for a constrained device class, or enablement of a new target platform
What we offer:
Rare technical challenges: Work on novel model architectures that require custom optimization strategies. Your code ships to production and runs on real devices.
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year