About SPAICE
SPAICE is building the Autonomy Operating System that empowers satellites and drones to navigate, perceive, and interact with the world — regardless of the environment.
From GPS-denied zones on Earth to the unexplored frontiers of space, our Spatial AI delivers unmatched autonomy, resilience, and adaptability.
At SPAICE, you’ll work on real missions alongside leading aerospace and defense contractors, shaping the future of space and autonomous systems. If you want your work to have a tangible impact — this is where it happens.
About the Role
In this internship, you’ll develop software that flies, not code that sits on a shelf.
As a Computer Vision Engineer Intern, you’ll join the Perception Team to design and implement algorithms that fuse data from cameras, LiDAR, radar, and event sensors, turning research into flight-ready systems for space and defense missions.
Satellites that detect and avoid threats autonomously.
Drones that collaborate in GPS-denied environments.
Spacecraft that rendezvous with tumbling targets in orbit.
These are the challenges you’ll help solve.
What You’ll Work On
Develop perception pipelines for situational awareness, collision avoidance, formation flying, surveillance, and terrain mapping.
Build the Perception stack of SPAICE’s Spatial AI, fusing visual, inertial, and depth information for robust scene understanding.
Integrate sensor fusion and neural representations to create dense onboard world models running in real time on resource-constrained hardware.
Deploy semantic scene understanding, pose estimation, depth estimation, and place recognition on embedded or edge-AI processors.
Collaborate with SPAICE’s Computer Vision scientists and cross-disciplinary engineers, delivering well-tested, high-performance code into HIL/SIL setups and real missions.
What We’re Looking For
Bachelor’s or Master’s student in Computer Science, Robotics, or a related field.
Knowledge of at least two of the following:
Multimodal perception & sensor fusion
Neural representations
Semantic scene understanding
SLAM / camera-pose estimation
Monocular depth estimation
Visual place recognition
Strong skills in C++ and Python, with experience developing performance-critical CV/ML code on Linux or embedded platforms.
Passion for turning research into real-world autonomy systems.
Perks & Benefits
Competitive compensation
Work on flight-grade autonomy software with industry leaders
Fast-paced, high-impact environment with genuine ownership