Department: Technical
Work Arrangement: Remote
Job Type: Independent Contractor, Full Time
Work Schedule: US time zones (candidates are expected to accommodate the client's preferred hours)
Locations: Philippines, LATAM, and other remote regions; excellent English communication skills required
About Pearl Talent
Pearl works with the top 1% of candidates from around the world and connects them with the best startups in the US and EU. Our clients have raised over $5B in aggregate and are backed by the likes of OpenAI, a16z, and Founders Fund. They're looking for the sharpest, hungriest candidates, people they can consistently promote and work with over many years. Candidates we've placed have been flown out to the US and EU to work on-site with our clients, and some have been promoted into roles on par with their onshore US counterparts.
Hear why we exist, what we believe in, and who we're building for: WATCH HERE
Why Work with Us?
At Pearl, we're not just another recruiting firm—we connect you with exceptional opportunities to work alongside visionary US and EU founders. Our focus is on placing you in roles where you can grow, be challenged, and build long-term, meaningful careers.
Role Overview
The Backend Engineer designs, builds, and scales the server-side systems, data infrastructure, and integration pipelines that power modern applications across PropTech, AI-powered platforms, SaaS products, and data-intensive startups. This is often a foundational hire: you'll have real ownership in shaping backend architecture, working directly with technical leadership and cross-functional teams to build systems from the ground up. You'll architect scalable cloud infrastructure, implement robust ETL/ELT pipelines that ingest large data streams, manage search and AI integrations, and ensure reliability, performance, and security across all backend systems. The role suits engineers who combine deep technical expertise with systems thinking, building infrastructure that supports real-time product discovery, generative AI experiences, and high-volume data processing.
Your Impact
Your backend architecture will form the foundation enabling applications to scale from initial launch to millions of users without performance degradation. By building robust data pipelines and integration systems, you'll unlock value from diverse data sources, transforming raw data into actionable intelligence. Your infrastructure decisions will directly impact system reliability, cost efficiency, and development velocity across engineering teams. Through thoughtful API design and integration architecture, you'll enable seamless connections between systems, third-party services, and AI capabilities. Your work on search infrastructure and vector databases will power intelligent product discovery and AI-driven experiences. By establishing engineering best practices, monitoring systems, and security protocols, you'll create operational excellence that prevents outages and protects sensitive data.
Core Responsibilities
Backend Architecture & Cloud Infrastructure (30%)
- Design and implement scalable backend architecture supporting high-traffic applications
- Set up and manage cloud infrastructure including VPCs, IAM, security groups, and networking (AWS preferred, GCP/Azure)
- Implement Infrastructure as Code (IaC) using Terraform, Pulumi, or CloudFormation (see the brief sketch after this list)
- Define backend architecture standards, API design patterns, and system integration strategies
- Build cost-optimized cloud strategies balancing performance and infrastructure spend
- Design microservices architecture or monolithic systems based on business requirements
- Implement containerization strategies using Docker and orchestration with Kubernetes
- Establish monitoring, logging, and observability frameworks for system health tracking
- Ensure high availability, disaster recovery, and backup strategies
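To make the Infrastructure-as-Code item above concrete, here is a minimal sketch using Pulumi's Python SDK to provision a versioned S3 bucket. The resource and export names are hypothetical, and the snippet assumes an existing Pulumi project with AWS credentials configured.

```python
"""Illustrative only: a minimal Pulumi (Python) program that provisions a
versioned S3 bucket and exports its generated name for other stacks to use."""
import pulumi
import pulumi_aws as aws

# Versioned bucket for raw data landing (hypothetical logical name).
raw_bucket = aws.s3.Bucket(
    "raw-data-landing",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the generated bucket name so pipelines or other stacks can reference it.
pulumi.export("raw_bucket_name", raw_bucket.id)
```

Declaring the bucket in code rather than the console keeps infrastructure reviewable, reproducible, and easy to promote across environments.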
Server-Side Development & API Design (25%)
- Develop scalable server-side applications using Node.js, TypeScript, Python, or Golang
- Build robust RESTful APIs and GraphQL endpoints with proper error handling and documentation (a brief sketch follows this list)
- Implement authentication, authorization, and security protocols (OAuth, JWT, API keys)
- Optimize backend performance for speed, efficiency, and resource utilization
- Design database schemas and queries for SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
- Build real-time features using WebSockets, Server-Sent Events, or streaming technologies
- Implement caching strategies using Redis or Memcached for performance optimization
- Write clean, maintainable, well-tested backend code following best practices
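As a concrete illustration of the API and authentication bullets above, here is a minimal sketch of a FastAPI service with a bearer-token check. The routes, token handling, and names are hypothetical; a production service would validate signed JWTs against a secrets store rather than comparing to a constant.

```python
"""Illustrative only: a tiny FastAPI service with a public health check and a
bearer-token-protected endpoint."""
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer_scheme = HTTPBearer()

EXPECTED_TOKEN = "change-me"  # hypothetical; load from a secrets manager in practice


def require_token(credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme)) -> None:
    # Reject requests whose bearer token does not match the expected value.
    if credentials.credentials != EXPECTED_TOKEN:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")


@app.get("/health")
def health() -> dict:
    # Unauthenticated liveness probe for load balancers and monitoring.
    return {"status": "ok"}


@app.get("/items/{item_id}", dependencies=[Depends(require_token)])
def get_item(item_id: int) -> dict:
    # Authenticated lookup; a real handler would query PostgreSQL or a cache first.
    return {"id": item_id, "name": f"item-{item_id}"}
```

Run locally with `uvicorn app:app --reload` (assuming the file is saved as app.py); unauthenticated calls to /items are rejected while /health stays open.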
Data Integration & ETL/ELT Pipelines (25%)
- Build and scale ETL/ELT pipelines ingesting large data streams from diverse sources (REST, XML, EDI APIs)
- Design resilient, idempotent data loaders that evolve with external data source changes (see the sketch after this list)
- Deploy and manage orchestration tools (Airflow, Dagster, Prefect) for workflow automation
- Build and maintain dbt pipelines with automated testing, CI/CD workflows, and version control
- Implement data transformation logic ensuring data quality, validation, and error handling
- Integrate streaming data platforms (Kafka, Kinesis, RabbitMQ) for real-time data processing
- Design schema-flexible pipelines handling varying data structures gracefully
- Manage data warehousing solutions (Snowflake, BigQuery, Redshift) for analytics and reporting
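To illustrate the idempotent-loader bullet above, here is a minimal sketch of an upsert-based loader. It uses SQLite so the example is self-contained; the table and field names are hypothetical, and the same ON CONFLICT pattern carries over to PostgreSQL.

```python
"""Illustrative only: an idempotent loader that upserts records by primary key,
so replaying the same batch never creates duplicate rows."""
import sqlite3


def load_products(conn: sqlite3.Connection, records: list[dict]) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS products (sku TEXT PRIMARY KEY, name TEXT, price REAL)"
    )
    # ON CONFLICT makes the load idempotent: replays update rather than duplicate rows.
    conn.executemany(
        """
        INSERT INTO products (sku, name, price) VALUES (:sku, :name, :price)
        ON CONFLICT(sku) DO UPDATE SET name = excluded.name, price = excluded.price
        """,
        records,
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    batch = [{"sku": "A-1", "name": "Widget", "price": 9.99}]
    load_products(conn, batch)
    load_products(conn, batch)  # replaying the batch updates in place, no duplicates
    print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # -> 1
```

Idempotency is what lets a pipeline retry failed runs or reprocess a source safely without corrupting downstream tables.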
Search, AI & Advanced Integrations (15%)
- Manage search infrastructure including Elasticsearch or OpenSearch clusters for product discovery
- Implement vector databases (Pinecone, Weaviate, Qdrant) supporting AI/ML features
- Support AI integrations through embeddings, RAG (Retrieval-Augmented Generation), and LLM frameworks
- Integrate with AI frameworks (LangChain, LlamaIndex) for intelligent application features
- Build feature stores and model ops infrastructure supporting ML model deployment
- Implement recommendation systems and semantic search capabilities (a brief sketch follows this list)
- Optimize search relevance, ranking algorithms, and query performance
- Support generative AI experiences with proper prompt engineering and context management
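As a sketch of the semantic-search bullet above, the snippet below ranks documents by cosine similarity over embeddings, the core operation behind vector search and RAG retrieval. The embed() function here is a toy stand-in for a real embedding model or a query against a vector database.

```python
"""Illustrative only: cosine-similarity ranking over embeddings, using a toy
embedder in place of a real model or vector database."""
import math


def embed(text: str) -> list[float]:
    # Hypothetical embedder: a real system would call an embedding model here.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Dot product suffices because embed() returns unit-normalized vectors.
    return sum(x * y for x, y in zip(a, b))


def top_k(query: str, docs: list[str], k: int = 3) -> list[tuple[float, str]]:
    q = embed(query)
    scored = [(cosine(q, embed(d)), d) for d in docs]
    return sorted(scored, reverse=True)[:k]


if __name__ == "__main__":
    catalog = ["2-bedroom condo downtown", "suburban family house", "office space for lease"]
    for score, doc in top_k("apartment in the city", catalog, k=2):
        print(f"{score:.3f}  {doc}")
```

A production system would precompute and index document embeddings in a vector database such as Pinecone or Qdrant rather than embedding on every query.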
DevOps, Testing & System Reliability (5%)
- Implement CI/CD pipelines automating testing, building, and deployment processes
- Establish automated testing frameworks (unit, integration, end-to-end) ensuring code quality (see the sketch after this list)
- Monitor system performance using DataDog, Grafana, Prometheus, or CloudWatch
- Implement security best practices including data encryption, secure API design, and vulnerability scanning
- Optimize database performance through query optimization, indexing, and connection pooling
- Manage system scaling strategies (horizontal, vertical) based on load patterns
- Conduct code reviews and maintain engineering documentation
- Ensure data quality, observability, and operational excellence
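To illustrate the automated-testing bullet above, here is a minimal pytest sketch for a hypothetical pagination helper, the kind of unit test a CI pipeline would run on every push.

```python
"""Illustrative only: a small pytest suite for a hypothetical pagination helper."""
import math

import pytest


def page_count(total_items: int, page_size: int) -> int:
    # Number of pages needed to show `total_items` items at `page_size` per page.
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return math.ceil(total_items / page_size)


def test_exact_and_partial_pages():
    assert page_count(100, 25) == 4
    assert page_count(101, 25) == 5


def test_empty_result_set_has_zero_pages():
    assert page_count(0, 25) == 0


def test_invalid_page_size_is_rejected():
    with pytest.raises(ValueError):
        page_count(10, 0)
```

Wiring tests like these into CI (GitHub Actions, CircleCI, etc.) is what keeps deployments automated without sacrificing code quality.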
Requirements
Must-Haves (Required)
- Experience: 1-3+ years designing and building scalable backend systems and server-side applications
- Programming Languages: Strong proficiency in Node.js, TypeScript, Python, or Golang
- Cloud Platforms: Hands-on experience with AWS (preferred), GCP, or Azure including infrastructure setup
- Database Expertise: Experience with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
- API Development: Proven experience building RESTful APIs with proper architecture and documentation
- ETL/ELT Pipelines: Experience building data integration pipelines and handling diverse data sources
- Infrastructure as Code: Familiarity with Terraform, Pulumi, CloudFormation, or similar IaC tools
- System Design: Strong understanding of distributed systems, scalability patterns, and architecture principles
- Problem-Solving: Excellent debugging, troubleshooting, and analytical skills
- Code Quality: Commitment to clean code, testing, and engineering best practices
- Communication: Strong written and verbal English with ability to explain technical concepts clearly
- Collaboration: Experience working cross-functionally with product, frontend, and data teams
Nice-to-Haves (Preferred)
- 5+ years of backend engineering experience including ETL/ELT pipeline development
- Experience building systems from scratch as foundational engineering hire
- Hands-on experience with orchestration tools (Airflow, Dagster, Prefect)
- Knowledge of streaming platforms (Kafka, Kinesis, RabbitMQ)
- Experience with data warehousing (Snowflake, BigQuery, Redshift)
- Proficiency with search engines (Elasticsearch, OpenSearch, Algolia)
- Experience with vector databases (Pinecone, Weaviate, Qdrant) and embeddings
- Knowledge of ML/AI fundamentals including feature stores and model ops
- Familiarity with AI frameworks (LangChain, LlamaIndex) for RAG implementations
- Experience with dbt (data build tool) for data transformation workflows
- Understanding of PropTech, real estate technology, or property management systems
- Experience with Next.js, Express.js, NestJS, or similar backend frameworks
- Knowledge of GraphQL API design and implementation
- Familiarity with message queues and event-driven architectures
- Experience with containerization (Docker) and orchestration (Kubernetes)
Tools Proficiency
Must-Haves (Required)
- Programming: Node.js, TypeScript, Python (strong proficiency in at least two)
- Cloud Platforms: AWS (EC2, S3, Lambda, RDS, VPC), GCP, or Azure
- Databases: PostgreSQL, MySQL, MongoDB, or equivalent SQL/NoSQL systems
- Version Control: Git, GitHub, GitLab for code management
- API Development: RESTful API design, Postman or similar testing tools
Nice-to-Haves (Preferred)
- Infrastructure as Code: Terraform, Pulumi, CloudFormation
- Orchestration: Apache Airflow, Dagster, Prefect for workflow management
- Streaming: Apache Kafka, AWS Kinesis, RabbitMQ
- Data Warehousing: Snowflake, Google BigQuery, AWS Redshift
- Search Engines: Elasticsearch, OpenSearch, Algolia
- Vector Databases: Pinecone, Weaviate, Qdrant, Milvus
- AI Frameworks: LangChain, LlamaIndex for LLM applications
- Data Transformation: dbt (data build tool) for analytics engineering
- Caching: Redis, Memcached for performance optimization
- Monitoring: DataDog, Grafana, Prometheus, CloudWatch
- CI/CD: GitHub Actions, CircleCI, Jenkins, GitLab CI
- Containerization: Docker, Kubernetes, ECS/EKS
- Backend Frameworks: Express.js, NestJS, FastAPI, Next.js API routes
- Testing: Jest, Pytest, Mocha for automated testing
- Message Queues: SQS, SNS, RabbitMQ for async processing
Benefits
- Competitive Salary: Based on experience and skills
- Remote Work: Fully remote — work from anywhere
- Generous PTO: In accordance with company policy
- Direct Mentorship: Access to global industry leaders
- Learning & Development: Continuous growth resources
- Global Networking: Work with international teams
- Health Coverage (Philippines only): HMO after 3 months (full-time)
Our Recruitment Process
- Application
- Skills Assessment
- Initial Screening
- Top-grading Interview
- Client Matching
- Job Offer
- Onboarding
Ready to Join Pearl Talent?
If you're a driven professional ready to work with exceptional founders building the next generation of world-class companies, we'd love to meet you. Apply now to unlock opportunities where your growth, impact, and success are our top priorities.
About Pearl
Pearl Talent is a US-based start-up that helps the top 1% of talent worldwide land long-term roles at fast-growing companies in the US and EU. Apply now and take the next step in your career!