Senior Data Engineer
About Checkr
Checkr is building the data platform to power safe and fair decisions. Established in 2014, Checkr’s innovative technology and robust data platform help customers assess risk and ensure safety and compliance to build trusted workplaces and communities. Checkr has over 100,000 customers including DoorDash, Coinbase, Lyft, Instacart, and Airtable.
We’re a team that thrives on solving complex problems with innovative solutions that advance our mission. Checkr is recognized on the Forbes Cloud 100 2025 list and is a Y Combinator 2024 Breakthrough Company.
About the team/role
We're seeking an experienced Senior Data Engineer to join our Data Platform team. This team plays a crucial role in our mission by developing and maintaining scalable data platforms for fair and safe hiring decisions.
As a Senior Data Engineer on the Data Platform team, you'll work on Checkr’s centralized data platform, which is critical to the company's vision. The centralized data platform is at the heart of all key customer-facing products. You will work on high-impact projects that directly contribute to the next generation of products.
What you’ll do:
- Be an independent individual contributor who solves problems and delivers high-quality solutions with minimal, high-level oversight and a strong sense of ownership
- Bring a customer-centric, product-oriented mindset to the table - collaborate with customers and internal stakeholders to resolve product ambiguities and ship impactful features
- Partner with engineering, product, design, and other stakeholders in designing and architecting new features
- Experimentation mindset - use your autonomy and empowerment to validate a customer need, get team buy-in, and ship a rapid MVP
- Quality mindset - you insist on quality as a critical pillar of your software deliverables
- Analytical mindset - instrument and deploy new product experiments with a data-driven approach
- Deliver performant, reliable, scalable, and secure code for a highly scalable data platform
- Monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
What you bring:
- Bachelor's degree in a computer-related field or equivalent work experience
- 6-7+ years of development experience in data engineering (5+ years writing PySpark)
- Experience building large-scale (hundreds of terabytes to petabytes) batch and stream data processing pipelines
- Experience with ETL/ELT, stream and batch processing of data at scale
- Strong proficiency in PySpark, Python, and SQL
- Expertise in database systems, data modeling, relational databases, and NoSQL (such as MongoDB)
- Experience with big data technologies such as Kafka, Spark, Iceberg, data lakes, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
- Knowledge of security best practices and data privacy concerns
- Strong problem-solving skills and attention to detail
Nice to have:
- Experience with or knowledge of data processing platforms such as Databricks or Snowflake
- An understanding of graph and vector data stores
What you get:
- A collaborative and fast-moving environment
- Be part of an international company based in the United States
- Learning and development reimbursement allowance
- Competitive compensation and opportunity for professional and personal advancement
- 100% medical, dental, and vision coverage for employees and dependents
- Reimbursement for work from home equipment
Pay Transparency Disclosure
One of Checkr’s core values is Transparency. To live by that value, we’ve made the decision to disclose salary ranges in all of our job postings. We use geographic cost of labor as an input to develop ranges for our roles and as such, each location where we hire may have a different range. If this role is remote, we have listed the top to the bottom of the possible range, but we will specify the target range for an exact location when you are selected for a recruiting discussion. For more information on our compensation philosophy, see our website.
At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and encourages connection. Our hub locations are Denver, CO, San Francisco, CA, and Santiago, Chile. Individuals are expected to work from the office 2 to 3 days a week. Starting January 2026, hub-based employees will be expected to work from the office 3 days per week. In-office perks are provided, such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages.
Equal Employment Opportunities at Checkr
Checkr is committed to building the best product and company, which requires hiring talented and qualified individuals with a diverse set of perspectives and lived experiences. Checkr believes in hiring people of all backgrounds, including those whose histories are impacted by the justice system, in accordance with local, state, and/or federal laws, including San Francisco’s Fair Chance Ordinance.
*Legitimate Checkr emails will always include our official domain name after the @ symbol (e.g., name@checkr.com).