About Dexmate
Dexmate is building the platform for physical AI: general-purpose humanoid robots and the full-stack infrastructure powering them. Our mission is to make physical AI accessible to every developer, the way cloud computing made software infrastructure accessible to everyone a generation earlier. Our developer community is still early, which means the opportunity to shape it from the ground floor is real.
The role
Many developers have never deployed a model that moves something in the physical world. The gap between "it works in training" and "it works on a robot" is enormous.
That's your job.
You'll be the person who helps AI/ML developers understand what happens when their models leave the data center and run on a humanoid robot: latency constraints, sensor noise, sim-to-real transfer, on-device inference, closed-loop control, etc. You'll build the sample projects, write the tutorials, and create the content that makes Dexmate the platform serious AI engineers choose when they want to work on physical AI.
This is an engineering role first. You write code every week. The talks and tutorials come from building, not the other way around.
What you'll do
Build and publish sample projects that show AI/ML engineers how to train, fine-tune, and deploy models on the Dexmate platform.
Write and publish technical tutorials weekly — step-by-step guides, architecture explainers, and deployment walkthroughs written for engineers who know ML but are new to physical AI.
Own the SDK documentation for AI/ML workflows: quickstart guides, API reference, Python SDK samples, kept current within 48 hours of any platform change.
Answer developer questions daily in Discord and GitHub Discussions; no question goes more than 24 hours without a response.
Build reference integrations with foundation AI model providers and publish architecture guides for running their models on Dexmate robots.
Speak at AI/ML conferences 3–4 times per year — NeurIPS, ICLR, ICML, CoRL, and similar.
Run live demos for developers, partners, and enterprise prospects.
Surface model integration friction and missing platform capabilities to engineering weekly.
Who you are
You write Python fluently and have real ML engineering experience — model training, fine-tuning, inference optimization, or ML infrastructure. You've shipped models that ran in production.
You're curious about the physical world. You don't need a robotics background, but you find the question "what happens when this model controls a robot arm" genuinely interesting, not intimidating.
You've published technical content that got traction — a GitHub repo people starred, a tutorial people bookmarked, a blog post that circulated in ML communities. Show us.
You write code other engineers want to copy. Clean, documented, opinionated about the right way to do things.
You can write a clear getting-started guide for a developer who just signed up and give a credible technical talk to a room of ML researchers. Same depth, different registers.
You have 3+ years of ML engineering experience: model development, training infrastructure, inference, or MLOps.
You publish on a schedule. The failure mode for this role is planning great content and never shipping it.
Strong bonus:
Experience with vision-language-action models or embodied AI research
Hands-on sim-to-real transfer work
Isaac Sim, MuJoCo, Drake, or Genesis familiarity
Existing technical blog, GitHub, or YouTube with real traction
Open-source ML contributions
Experience deploying models on edge or embedded hardware
If you've ever wanted to be the person who explains physical AI to the world — this is that job.