Senior Data Engineer
Important Information
Experience: 5+ years
Job Mode: Full-time
Work Mode: Work from home
Job Summary
We are seeking a Senior Data Engineer with strong experience in designing, building, and orchestrating data pipelines using modern data platform technologies. The ideal candidate has a deep understanding of scalable data architectures and is passionate about enabling data-driven decision-making across the organization.
This role requires hands-on expertise with tools such as Spark, Airflow, and Redshift, and a mindset focused on automation, scalability, and reliability. Candidates who bring a DevOps-oriented approach to data engineering—streamlining deployment, monitoring, and maintenance—will thrive in this environment.
Responsibilities and Duties
- Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and application data needs.
- Orchestrate complex data workflows and scheduling using Apache Airflow or similar tools.
- Work with large-scale data processing frameworks such as Apache Spark to clean, transform, and aggregate data.
- Build and manage data storage and warehousing solutions using Amazon Redshift or comparable platforms.
- Ensure data quality, lineage, and consistency across multiple sources and destinations.
- Collaborate closely with Data Analysts, Data Scientists, and other Engineers to deliver high-quality data solutions.
- Implement best practices for data pipeline monitoring, testing, and version control.
- Optimize data systems for performance, scalability, and cost efficiency.
Qualifications and Skills
- 5+ years of experience in data engineering or a related field.
- Proven experience with data pipeline orchestration tools (e.g., Airflow, Luigi, Prefect).
- Hands-on experience with Spark for large-scale data processing.
- Strong proficiency with SQL and data warehouse design (preferably on AWS Redshift or similar).
- Solid understanding of ETL/ELT frameworks, data modeling, and data lifecycle management.
- Experience with cloud environments (preferably AWS) and related services.
- Strong programming skills in Python or Scala.
- Excellent problem-solving skills and the ability to work independently in a fast-paced environment.
- Strong communication and collaboration abilities across technical and non-technical teams.
Nice to have
- Experience applying DevOps principles (automation, CI/CD, monitoring, infrastructure as code) to data engineering.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Exposure to real-time data streaming tools (e.g., Kafka, Kinesis).
- Knowledge of data observability and data governance frameworks.
About Encora
Encora is a global company that offers Software and Digital Engineering solutions. Our practices include Cloud Services, Product Engineering & Application Modernization, Data & Analytics, Digital Experience & Design Services, DevSecOps, Cybersecurity, Quality Engineering, and AI & LLM Engineering, among others.
At Encora, we hire professionals based solely on their skills and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.