About Constellation
Constellation is an independent research center that brings together people throughout the AI safety ecosystem to accelerate insight, research, and talent through better cooperation. Unlike a conference or summit, Constellation operates continuously, in the form of a physical workspace as well as conference-style talks, workshops, and training bootcamps. Continuous operation allows for relationships and conversations to develop over time, meaningfully fostering trust, collaboration, and shared insight. It also makes Constellation a natural field-building hub by rapidly inspiring, orienting, and connecting people who are newer to the field.
Based in Berkeley, CA, our shared workspace hosts over 200 people per week across dozens of AI safety organizations in nonprofits, academia, industry, and government. Hundreds of other researchers spend time at Constellation for shorter visits each year. We believe this is the strongest and highest-output network of AI safety researchers in the world; dozens of participants in past Constellation programs have gone on to safety-focused roles at organizations such as METR, Redwood Research, Anthropic, Google DeepMind, OpenAI, and the US and UK AI Safety Institutes.
About this role
As Program Manager on the Mobilization team, you'll own scoped programs and workstreams that help get the right talent into AI safety roles. You'll report to the Talent Mobilization Lead and work closely with Research Program Managers, the Program Coordinator, fellows, and partners across the ecosystem.
This is a role for someone who can deliver independently on moderate-complexity work, take ownership of programs from start to finish, and build trusted relationships across a range of stakeholders. You'll notice things that aren't working and have a clear path to raising and testing improvements. Overall, you'll have a defined scope with growing autonomy as you demonstrate impact and you'll contribute to shaping how the team evolves over time.
Key responsibilities
These are the functions we've identified so far — but we expect the right person to come in, spot gaps we've missed, and make this role their own.
Fellowship Program Operations
Run the logistics and coordination of fellowship application cycles within frameworks established by the Talent Mobilization Lead. Improve repeatability and quality from cohort to cohort — identifying friction, proposing changes, and implementing solutions within your scope.
Talent Placement Support
Support placement of fellows and other talent into roles across the AI safety ecosystem — full-time, contract, advisory, and project-based. Help individuals understand their options, facilitate warm introductions to hiring organizations, and coordinate hand-offs.
Program Resources & User Research
Build and maintain guides, frameworks, and curated resources that help people navigate the AI safety ecosystem. Conduct structured user interviews and synthesize feedback to inform program improvements. Recommend changes to the Talent Mobilization Lead for review and prioritization.
Systems & Process Improvement
Manage operational systems (tracking, documentation, workflows) within established tooling. Identify recurring process gaps and propose practical solutions. Contribute to building repeatable processes across sourcing, assessment, and placement.
Partner Coordination
Coordinate with field-building organizations (e.g., 80,000 Hours, BlueDot) on shared logistics and initiatives. Build familiarity with hiring organizations' needs and surface relevant placement opportunities.
Roles will also flex based on individual strengths and experience. For example, one program manager might focus primarily on placement and matchmaking rather than owning the application cycle. What matters is that responsibilities are clearly scoped and owned — not that every person in a given role does exactly the same work.
Skills & experience
- Bring 2–5 years of experience in program management, consulting, talent, or a related field
- Own work from start to finish with some supervision; you seek input on high-stakes decisions but operate independently on day-to-day execution
- Have strong interpersonal judgment and can build trust quickly across a wide range of people and contexts
- Think in systems but stay grounded in execution; you don't lose the thread on details
- Communicate clearly and warmly with everyone from early-career fellows to senior researchers
- Think AI might have transformative effects in the coming years and want to help build the infrastructure to navigate that well
Bonus experience:
- Familiarity with the AI safety, EA, or research communities
- Experience with career advising, talent assessment, or placement
- Background in user research or program evaluation
- Familiarity with tools like Airtable, Asana, or similar
Additional Information
This is a full-time, on-site position at our Berkeley office, just steps from the nearest BART (metro) and bus stop. On-site parking is also available.
The ideal candidate for this role will have some combination of the skills and experiences described above. If you are not sure if you are qualified, we strongly encourage you to apply anyway. Beyond the qualifications outlined, our priority is building a team that will help humanity safely navigate the development of transformative AI. If you would be excited to do this work, we'd love to consider you.
We value diversity in all respects and base our hiring decisions on the needs of the organization and individual qualifications. We welcome applicants from all backgrounds, regardless of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age or disability.