Gimlet Labs is building the first heterogeneous neocloud for AI workloads. As AI systems scale, the industry is hitting fundamental limits in power, capacity, and cost with today’s homogeneous, vertically integrated infrastructure. Gimlet addresses this by decoupling AI workloads from the underlying hardware. Our platform intelligently partitions workloads into components and orchestrates each component to hardware that best fits its performance and efficiency needs. This approach enables heterogeneous systems across multi-vendor and multi-generation hardware, including the latest emerging accelerators. These systems unlock step-function improvements in performance and cost efficiency at scale.
On top of this foundation, Gimlet is building a production-grade neocloud for agentic workloads. Customers use Gimlet to deploy and manage their workloads through stable, production-ready APIs, without having to reason about hardware selection, placement, or low-level performance optimization.
Gimlet works with foundation labs, hyperscalers, and AI-native companies to power real production workloads built to scale to gigawatt-class AI datacenters.
Gimlet Labs is seeking a Member of Technical Staff (Intern) to help develop Gimlet’s platform for deploying and monitoring AI workloads. In this role, you will apply the latest AI techniques to build frameworks that generate and optimize AI workloads. You will contribute to Gimlet’s novel compilation framework for partitioning and orchestrating AI workloads across diverse hardware environments, and you will design and implement scalable systems that serve production workloads at millions of requests per second.
Responsibilities:
Building, deploying and scaling AI systems for production
Evaluating and implementing cutting-edge AI research
Researching ways to improve model accuracy, performance and efficiency
Qualifications:
Currently pursuing a degree in computer science, engineering, or a comparable area of study
Experience with AI/ML or distributed systems
Preferred Qualifications:
Experience with AI frameworks such as PyTorch, TensorFlow, or ONNX
Familiarity with distributed systems and orchestration frameworks (e.g., Kubernetes)
Software development experience with Python and C++
Understanding of the latest AI research and techniques