The Bigger Picture
At Andromeda Robotics, we’re not just imagining the future of human-robot relationships; we’re building it. Abi is the first emotionally intelligent humanoid companion robot, designed to bring care, conversation, and joy to the people who need it most. Backed by tier-1 investors and with customers already deploying Abi across aged care and healthcare, we’re scaling fast, and we’re doing it with an engineering-first culture that’s obsessed with pushing the limits of what’s possible.
This is a rare moment to join: we’re post-technical validation, pre-ubiquity, and building out the team that will take Abi from early access to global scale.
Our Values
- Empathy – Kindness and compassion are at the heart of everything we do.
- Play – Play sharpens focus. It keeps us curious, fast and obsessed with the craft.
- Never Settle – A relentless ambition, bias toward action, and uncomfortable levels of curiosity.
- Tenacity – Tenacious under pressure, we assume chaos and stay in motion to adapt and progress.
- Unity – Different minds. Shared mission. No passengers.
The Role
We are looking for a creative and driven Simulation and Test Engineer to own our simulation and test infrastructure and to develop the acceptance criteria that underpin Abi’s autonomous navigation, conversational AI and embodied behaviours. Experience across both navigation and conversational/embodied AI is ideal, but a deep understanding of at least one of these areas is essential. Your work will set the foundations for us to extend those simulation environments to generate synthetic data for our Machine Learning (ML) models.
The Team
You’ll work at the intersection of:
- Conversational AI
- Robotics & controls
- Perception & audio engineering
- Autonomy
- Platform engineering
You’ll collaborate closely with product owners and technical leads to define requirements and their test cases, and you’ll be the custodian of quality across our autonomy and AI/ML stack. Your work will directly impact the speed and quality of our development, ensuring that every software build is robust, reliable, and safe before it ever touches physical hardware or real users.
We know this is quite a specialised skill set, so if you have experience across roughly 60% of what we’re looking for, we’d still love to hear from you! You might still be a great fit.
What You’ll Do
Architect & Build Simulation and Test Platforms
- Design, develop and maintain a scalable, high-fidelity simulation platform for Abi that supports both navigation and embodied interaction use cases.
- Own sim-to-real and test-to-deployment: Develop robust CI/CD pipelines for automated testing in simulation and synthetic test environments, enabling rapid iteration and guaranteeing software quality before deployment onto our physical robots (see the sketch after this list).
- Model with fidelity: Implement accurate models of Abi’s hardware, including sensors (cameras, microphones, LiDAR, etc.), actuators, kinematics and upper-body motion as needed for both navigation and interaction scenarios.
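To give a flavour of the kind of CI-gated regression testing we mean, here is a minimal sketch assuming a hypothetical `ScenarioRunner` wrapper around a headless simulator; the scenario names, metrics and thresholds are illustrative only, not our actual acceptance criteria.

```python
# Sketch of a pytest-based navigation regression suite run in CI.
# `ScenarioRunner`, the scenario names and the thresholds are placeholders.
from dataclasses import dataclass

import pytest


@dataclass
class NavResult:
    reached_goal: bool
    min_clearance_m: float  # closest approach to any person or obstacle
    duration_s: float


class ScenarioRunner:
    """Placeholder wrapper around a headless simulator (e.g. Isaac Sim or Gazebo)."""

    def run(self, scenario: str, timeout_s: float = 120.0) -> NavResult:
        # A real harness would launch the sim, spawn Abi, replay scripted
        # human traffic and collect metrics before returning a NavResult.
        raise NotImplementedError("wire this up to the simulation backend")


SCENARIOS = ["corridor_two_pedestrians", "dining_room_cluttered", "doorway_wheelchair"]


@pytest.mark.parametrize("scenario", SCENARIOS)
def test_navigation_acceptance(scenario: str) -> None:
    result = ScenarioRunner().run(scenario)
    assert result.reached_goal, f"{scenario}: goal not reached"
    assert result.min_clearance_m >= 0.5, f"{scenario}: clearance below safety margin"
    assert result.duration_s <= 120.0, f"{scenario}: exceeded time budget"
```

In CI (e.g. GitHub Actions or GitLab CI), a suite like this would run headless on every merge request so regressions are caught before a build ever reaches a physical robot.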
Develop Worlds, Scenarios and Test Suites
- Develop virtual worlds and test scenarios:
- Navigation-focused environments (indoor facilities, dynamic human traffic, obstacles, edge cases)
- Conversational & social interaction scenarios (multi-speaker audio scenes, social group configurations, gesture contexts)
- Conversational AI & memory testing: Build synthetic test environments for:
- Voice-to-voice conversational quality and response appropriateness
- Tool-calling / action selection behaviour
- Memory systems – context retention, recall accuracy, conversation coherence (a minimal recall-check sketch follows this list)
- Perception & audio testing: Create test suites and synthetic scenes for:
- Social awareness (face detection, gaze tracking, person tracking)
- Audio modelling (multi-speaker, room acoustics, noise conditions, VAD)
- Gesture / embodiment testing: Working with Controls/ML, create infrastructure to validate that Abi’s body gestures and animations are appropriate, synchronised and safe in real and simulated interactions.
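As an illustration of the memory checks above, here is a minimal sketch assuming a hypothetical `ConversationAgent` interface; the planted facts, distractor turns and keyword matching are placeholder examples rather than our real evaluation harness.

```python
# Sketch of a synthetic memory-recall metric: plant a fact, add distractor
# turns, then probe whether the expected detail is recalled.
from typing import Protocol


class ConversationAgent(Protocol):
    """Placeholder for whatever drives Abi's conversational stack in tests."""

    def say(self, utterance: str) -> str: ...


RECALL_CASES = [
    {
        "plant": "My granddaughter Lucy is visiting on Saturday.",
        "distractors": ["What's the weather like today?", "Tell me a joke."],
        "probe": "Who did I say was visiting me?",
        "expected_keywords": ["lucy"],
    },
]


def recall_accuracy(agent: ConversationAgent) -> float:
    """Fraction of planted facts the agent recalls after distractor turns."""
    hits = 0
    for case in RECALL_CASES:
        agent.say(case["plant"])
        for turn in case["distractors"]:
            agent.say(turn)
        reply = agent.say(case["probe"]).lower()
        hits += all(keyword in reply for keyword in case["expected_keywords"])
    return hits / len(RECALL_CASES)
```

Similar fixtures can cover tool-calling behaviour and conversation coherence by swapping in different case definitions and scoring functions.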
Own Quality, Metrics and Regression
- Custodian of quality metrics: Where metrics don’t yet exist, work with stakeholders to elicit use cases, derive requirements, and define measurable quality metrics for navigation, conversational AI, audio, perception and gesture.
- Formalise requirements and traceability: Capture requirements and trace them through to test cases and automated regression suites (see the traceability sketch below).
- Analyse and improve: Build dashboards, tools and analysis pipelines to mine test and simulation data, identify bugs, track performance over time, and feed actionable insights back to engineering teams.
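One lightweight way to keep requirements traceable to tests, shown here purely as a sketch, is to tag tests with requirement IDs via pytest markers and build a coverage report at collection time; the requirement IDs below are invented examples.

```python
# Sketch of requirement-to-test traceability with pytest markers.
# The marker would be registered in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       requirement(ids): requirement IDs covered by this test
import pytest


@pytest.mark.requirement("NAV-012", "SAF-003")
def test_abi_stops_for_sudden_obstacle():
    ...


@pytest.mark.requirement("CONV-021")
def test_abi_answers_within_latency_budget():
    ...


# In conftest.py, a hook like this can collect the mapping for a report:
def pytest_collection_modifyitems(config, items):
    coverage = {}
    for item in items:
        for marker in item.iter_markers(name="requirement"):
            for req_id in marker.args:
                coverage.setdefault(req_id, []).append(item.nodeid)
    config._requirement_coverage = coverage  # dumped to a dashboard elsewhere
```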
Scale to Synthetic Data & ML Training
- Extend test environments into training data generation pipelines, working closely with character and autonomy teams (a sketch follows this list).
- Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac) to generate high-fidelity synthetic interaction data at scale for:
- Character animation and gesture models
- Perception models (vision, audio, social awareness)
- Navigation & planning in human-centred environments
- Enable ML-generated gesture and navigation behaviours to augment hand-crafted workflows, and help validate them in rich, multi-actor simulated scenarios.
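As a sketch of what a domain-randomised synthetic data pipeline could look like, the snippet below samples scene parameters and writes them out as configs; the `SceneConfig` fields, parameter ranges and the eventual rendering backend (e.g. Isaac Sim or Unity via their Python APIs) are all placeholders.

```python
# Sketch of domain-randomised scene-config generation for synthetic data.
import json
import random
from dataclasses import asdict, dataclass
from pathlib import Path


@dataclass
class SceneConfig:
    num_people: int
    lighting_lux: float
    speech_snr_db: float  # for synthetic audio scenes
    room: str


def sample_scene(rng: random.Random) -> SceneConfig:
    """Draw one randomised scene configuration."""
    return SceneConfig(
        num_people=rng.randint(1, 5),
        lighting_lux=rng.uniform(100.0, 800.0),
        speech_snr_db=rng.uniform(0.0, 25.0),
        room=rng.choice(["lounge", "dining_hall", "corridor"]),
    )


def generate_dataset(n: int, out_dir: Path, seed: int = 0) -> None:
    """Write n scene configs; a real pipeline would also render frames,
    audio and ground-truth labels for each config via the simulator."""
    rng = random.Random(seed)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n):
        cfg = sample_scene(rng)
        (out_dir / f"scene_{i:05d}.json").write_text(json.dumps(asdict(cfg)))


if __name__ == "__main__":
    generate_dataset(100, Path("synthetic_scenes"))
```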
Requirements
Ideally You Have
Education
- Bachelor’s degree in Computer Science, Robotics, Engineering, or a related field (Master’s preferred but not required).
Experience
- ~5+ years of professional experience building simulation and/or test infrastructure for complex systems such as robots, autonomous vehicles, drones, conversational AI systems, perception systems or embodied AI.
- Strong programming proficiency in Python (essential) and C++ (valuable).
- Hands-on experience with one or more robotics simulation platforms, e.g.:
- NVIDIA Isaac Lab / Isaac Sim, Gazebo, CARLA, AirSim, or similar
- Physics engines such as PhysX, MuJoCo, etc.
- Solid understanding of core robotics principles: kinematics, dynamics, perception and control.
- Experience testing AI/ML systems, ideally in one or more of:
- LLM-based or voice-based conversational systems
- Audio/speech pipelines
- Computer vision or perception models
- Embodied / interactive AI behaviours e.g. autonomous systems
- Experience with testing frameworks and CI/CD tools (e.g. pytest, Jenkins, GitHub Actions, GitLab CI) for automated regression testing.
- Familiarity with ML evaluation metrics and basic experimental design.
- Demonstrated strength in requirements gathering, documentation and traceability from requirements → test cases → pass/fail criteria.
- A proactive, first-principles thinker who is excited by the prospect of owning a critical system at an early-stage startup.
Bonus Points
You don’t need all of these, but some of the following would be a strong plus:
- Experience using simulation for ML applications, such as reinforcement learning, imitation learning, or synthetic data generation.
- Experience with character animation systems, motion capture pipelines or gesture generation for embodied agents.
- Strong experience with 3D modelling, game engines and content generation (Unity, Unreal Engine).
- Knowledge of sensor modelling techniques for cameras, LiDAR and audio (microphone arrays, room acoustics).
- Experience building and managing large-scale, cloud-based simulation or test infrastructure.
- Experience with ROS/ROS2 integration in human-rated environments or regulated domains; exposure to standards such as ISO 13482 for personal care robots.
- Experience working with robots or autonomous systems in human-centric environments (healthcare, aged care, hospitality, etc.).
Benefits
The salary for this position may vary depending on factors such as job-related knowledge, skills, and experience. The total compensation package may also include additional benefits or components based on the specific role. Details will be provided if an employment offer is made.
If you’re excited about this role but don’t meet every requirement, that’s okay — we encourage you to apply. At Andromeda Robotics, we celebrate diversity and are committed to creating an inclusive environment for all employees. Let’s build the future together.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Please note: At this time, we are generally not offering visa sponsorship for this role.