Apollo Research

Full Stack Engineer (Monitoring)

London · Engineering
Application deadline: We accept submissions until 16 January 2026. We review applications on a rolling basis and encourage early submissions.

THE OPPORTUNITY

Join our new AGI safety monitoring team and help transform complex AI research into practical tools that reduce risks from AI. As a Full Stack Engineer, you'll work closely with our CEO, monitoring engineers, and Evals team software engineers to build tools that make AI agent safety accessible at scale. We are building tools that monitor AI coding agents for safety and security failures. You will join a small team, have significant influence over its direction and tech stack, and be able to earn responsibility quickly.

You will like this opportunity if you care about building tools that genuinely make AI agents safer, thrive in high-paced environments, and enjoy working closely with researchers.

KEY RESPONSIBILITIES

Tool Development
- Collaborate with the team lead to refine tool requirements, and translate them into technical specifications and architectures
- Design, develop, and maintain features across the full stack, from responsive front-end interfaces (e.g. using React) to scalable backend services in Python
- Write clean, well-tested, and maintainable code that meets our high standards for security and performance
- Talk to users to understand their use cases and needs

Back-End Development
- Design and implement scalable backend services that can process and analyze large volumes of AI agent logs
- Implement RESTful APIs and data processing pipelines for AI agent logs
- Implement secure API endpoints that allow users to integrate our tools into their workflows
- Build monitoring and logging systems to ensure reliability and performance

Front-End Development
- Build intuitive user interfaces that help users understand and act on AI agent safety & security evaluations
- Develop interactive data visualizations that effectively communicate complex AI behaviors, e.g. hour-long coding agent trajectories
- Ensure responsive design and cross-browser compatibility for users
- Implement authentication, authorization, and other security measures to protect sensitive user data

Collaboration & Communication
- Work side-by-side with our researchers and software engineers. We’re “our own customer” since our researchers use our tools
- Participate in code reviews to maintain high code quality
- Document technical decisions and implementations to facilitate knowledge sharing
- Contribute to technical discussions about architecture, technology choices, and implementation approaches
- Help the monitoring team grow by sharing your knowledge, offering thoughtful feedback, and participating in trade-off discussions

JOB REQUIREMENTS

  • 3+ years of experience as a Full Stack Engineer building production applications
  • Strong proficiency in front-end development, e.g. JavaScript/TypeScript or React
  • Solid Python experience for backend development with frameworks like FastAPI, Flask, or Django
  • Experience designing RESTful APIs and working with databases, e.g. using SQL
  • Familiarity with cloud services and containerization technologies, e.g. Docker, AWS, Google Cloud
  • Strong problem-solving skills and ability to learn new technologies quickly

  • Bonus:
  • Has taken a product from 0 to 1, from conception to production
  • Has been responsible for the development of a tool, e.g. leading engineer
  • Familiarity with ML/AI concepts or developer tools for AI applications
  • Experience with evaluation frameworks for AI systems
  • Previous work on developer tools or APIs, e.g. B2B SaaS tools
  • Experience building data streaming, data processing or observability tools
  • Experience building software for security use cases.

  • We want to emphasize that people who feel they don’t fulfill all of these characteristics but think they would be a good fit for the position nonetheless are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine. 

    WHAT YOU'LL ACCOMPLISH IN YOUR FIRST YEAR

  • Develop and scale at least one monitoring tool. We have an MVP of our AI agent safety & security monitoring tool and now want to scale it from 0.5 to 1 and beyond.
  • Build scalable, production-ready infrastructure for processing and analyzing AI agent trajectories, ensuring our systems can handle high throughput from our users, in a secure and stable manner.
  • Create intuitive visualizations and dashboards that translate complex AI safety metrics into actionable insights for technical (e.g. CISOs) and non-technical stakeholders alike.
  REPRESENTATIVE PROJECT

  • AI agent real-time monitoring system: AI agents are already deployed at scale, yet they are often unmonitored or only barely monitored, so critical failures go undetected. The natural response is to build monitors that constantly scan agent outputs and alert developers and/or security teams about potential risks. We will cover hundreds of failure modes in AI safety and security and build out many kinds of monitors (e.g. hierarchical, ensembles, agentic).
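To make the scan-and-alert loop concrete, here is a minimal, hypothetical sketch in Python. The two regex patterns and the `Alert` shape are illustrative assumptions only; the real system described above covers hundreds of failure modes and uses far more sophisticated (e.g. LLM-based, hierarchical, ensemble) monitors.

```python
# Illustrative sketch of a rule-based monitor over an agent trajectory.
# Patterns and alert schema are hypothetical examples, not Apollo's taxonomy.
import re
from dataclasses import dataclass

# Two example failure modes, matched with simple regexes.
RISK_PATTERNS = {
    "credential_exposure": re.compile(r"(api[_-]?key|secret|password)\s*=", re.I),
    "destructive_command": re.compile(r"\brm\s+-rf\b|\bDROP\s+TABLE\b", re.I),
}

@dataclass
class Alert:
    event_index: int   # position of the event in the trajectory
    failure_mode: str  # which pattern fired
    excerpt: str       # surrounding text for the reviewer

def scan_trajectory(events: list[str]) -> list[Alert]:
    """Scan each agent output event; emit one alert per matched failure mode."""
    alerts = []
    for i, text in enumerate(events):
        for mode, pattern in RISK_PATTERNS.items():
            match = pattern.search(text)
            if match:
                excerpt = text[max(0, match.start() - 20): match.end() + 20]
                alerts.append(Alert(i, mode, excerpt))
    return alerts
```

A production version would stream events rather than take a list, route alerts to developers or security teams, and replace the regexes with model-based classifiers; the sketch only shows the basic scan-and-alert structure.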
  BENEFITS

  • Salary: 100k - 180k GBP (~135k - 245k USD)
  • Flexible work hours and schedule
  • Unlimited vacation
  • Unlimited sick leave
  • Lunch, dinner, and snacks are provided for all employees on workdays
  • Paid work trips, including staff retreats, business trips, and relevant conferences
  • A yearly $1,000 (USD) professional development budget
  LOGISTICS

  • Start Date: Target of 2-3 months after the first interview
  • Time Allocation: Full-time
  • Location: The office is in London, and the building is next to the London Initiative for Safe AI (LISA) offices. This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
  • Work Visas: We can sponsor UK visas
  ABOUT THE TEAM

    The monitoring team is a new team. Especially early on, you will work closely with Marius Hobbhahn (CEO), Jeremy Neiman (engineer), and others on the monitoring team. You'll also sometimes work with our SWEs, Rusheb Shah, Andrei Matveiakin, Alex Kedrik, and Glen Rodgers, to translate our internal tools into externally usable tools. Furthermore, you will interact with our researchers, since we intend to be "our own customer" by using our tools internally for our research work. You can find our full team here.

    ABOUT APOLLO

    The rapid rise in AI capabilities offers tremendous opportunities, but also presents significant risks. At Apollo Research, we're primarily concerned with risks from Loss of Control, i.e. risks coming from the model itself rather than e.g. humans misusing the AI. We're particularly concerned with deceptive alignment / scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. We work on the detection of scheming (e.g., building evaluations), the science of scheming (e.g., model organisms), and scheming mitigations (e.g., anti-scheming and control). We work closely with multiple frontier AI companies, e.g. to test their models before deployment or to collaborate on scheming mitigations.

    We're now also developing tools that make it easier to prevent harms from widely deployed AI systems. We specifically target coding agent safety, since coding agents are the most advanced agents and are tasked with high-stakes decisions.

    At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you’re interested in more details about what it’s like working at Apollo, you can find more information here.

    Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.

    HOW TO APPLY

    Please complete the application form with your CV. A cover letter is neither required nor encouraged. Please also feel free to share links to relevant work samples.

    About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 3 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no leetcode-style general coding interviews. If you want to prepare for the interviews, we suggest getting familiar with the evaluations framework Inspect, or by building simple monitors for coding agents and running them on your own Claude Code / Cursor / Codex / etc. traffic.

    Your Privacy and Fairness in Our Recruitment Process: We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with tasks such as resume screening. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. Your personal data is handled securely and transparently. We adopt a human-centred approach: all resumes are screened by a human and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at info@apolloresearch.ai.