Who we are
Yubo is the Social Discovery app to make new friends and hang out online. By eliminating likes and follows, we empower our users to create genuine connections and show up as their true selves.
We've pioneered a new way for Gen Z to socialize online, and with millions of active users, our goal is to redefine how we connect today and tomorrow.
Our team is international, multicultural and deeply committed to its mission. As the leading platform to socialize online, we have a special responsibility to build a safe digital space for our community. Safety is embedded in our DNA, and our proactive approach focuses on user protection, support, and education. We also work closely with the broader technology industry to share our knowledge, and with NGOs to create industry-leading child protection standards.
Join us in this exciting journey and help us shape the future of social interactions!
About this role:
We’re looking for a Senior Data Engineer who thrives in a fast-moving environment where experimentation is the norm, not the exception. You won’t be locked into a single squad. Instead, you’ll work across initiatives, especially supporting our product squads as they rapidly prototype and deploy new data models.
This role is embedded in the Infrastructure team, which means real ownership. You’ll help design data models from scratch and see them through to production. You’ll work hand in hand with DevOps to tune the databases and the servers hosting them, so you can do what you do best: build clean, scalable, robust, and cost-efficient pipelines. (That’s part of how we’re profitable!)
As one of the main contributors during a feature kick-off, you’ll shape the design, challenge proposals, and make sure what gets built is actually buildable.
If you enjoy jumping into fast-paced projects, debating architecture with backend devs, and pushing data models that scale under serious load, this is your place.
Your responsibilities:
Build and maintain high-throughput data pipelines for batch and streaming workloads.
Co-design data models and validate them with backend teams during the kickoff phase.
Ensure sustainable architecture and data integrity, and educate stakeholders across Yubo on those challenges.
Work closely with ML engineers to provide clean, ML-ready datasets.
Ensure data quality, availability, and ordering at volumes of 300k+ database entries per second, and engineer ways to scale that even further.
Document pipelines, transformations, and architecture clearly for future devs and your future self (you’ll thank yourself later).
Tools we use:
Data streaming / processing: Kafka, Kafka Streams, Kafka Connect
Data: MongoDB, Couchbase, KeyDB, DragonFly, Elasticsearch, Iceberg, Trino
Monitoring: Datadog & Grafana
Automation: Kubernetes operators
Who you are:
You’ve built and scaled production data pipelines for more than 7 years.
You have strong Kafka expertise and deep knowledge of parallelization and event ordering for large volumes of data (millions of events per day).
You are proficient in SQL and Python.
You’re comfortable owning the data model from architecture doc to deployment.
You love pragmatic problem-solving and know how to balance meeting users’ needs with the performance, cost-effectiveness, and reliability of high-throughput systems.
You feel comfortable working mainly in English, both written and spoken.
Within a month, you will:
Familiarize yourself with our data landscape, tooling, and current pipelines.
Get acquainted with product squads you will work with.
Be involved in a feature kick-off and provide feedback on the target data model.
Within 3 months, you will:
Lead the data pipeline work for a new feature experiment.
Partner with data scientists to deliver well-structured datasets for a new ML model.
Present findings and documentation that make future development easier.
Within 6 months, you will:
Architect a high-volume pipeline from concept to production.
Drive major improvements in event ingestion performance or reliability.
Mentor a new teammate or contribute to internal best practices.
The recruitment process
Phone screen with a TAM
First interview with Mikael, our Head of Infra
Technical test and debrief with the Data Engineering team
Culture Fit interviews