- Implement data transformation, aggregation, and enrichment processes to support various data analytics and machine learning initiatives (an illustrative Spark sketch follows this list)
- Collaborate with cross-functional teams to understand data requirements and translate them into effective data engineering solutions
- Ensure data quality and integrity throughout the data processing lifecycle
- Design and deploy data engineering solutions on OpenShift Container Platform (OCP) using containerization and orchestration techniques
- Optimize data engineering workflows for containerized deployment and efficient resource utilization
- Collaborate with DevOps teams to streamline deployment processes, implement CI/CD pipelines, and ensure platform stability
- Implement data governance practices, data lineage, and metadata management to ensure data accuracy, traceability, and compliance
- Monitor and optimize data pipeline performance, troubleshoot issues, and implement necessary enhancements
- Implement monitoring and logging mechanisms to ensure the health, availability, and performance of the data infrastructure
- Document data engineering processes, workflows, and infrastructure configurations for knowledge sharing and reference
- Stay updated with emerging technologies, industry trends, and best practices in data engineering and DevOps
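As a hedged illustration of the transformation, aggregation, and enrichment responsibilities above, the following is a minimal PySpark sketch of a batch job. The HDFS paths, column names, and customer reference table are hypothetical placeholders, not details taken from this posting.

```python
# Minimal sketch of a Spark batch transform/enrich/aggregate job.
# All paths, column names, and the reference dataset are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-transaction-rollup").getOrCreate()

# Transform: read raw events and normalise types (hypothetical paths/columns)
raw = (
    spark.read.parquet("hdfs:///data/raw/transactions/")
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Enrich: join against a small customer reference table
customers = spark.read.parquet("hdfs:///data/reference/customers/")
enriched = raw.join(F.broadcast(customers), on="customer_id", how="left")

# Aggregate: daily totals per customer segment
daily = (
    enriched.groupBy("event_date", "segment")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
)

# Write partitioned output for downstream analytics/ML consumers
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/daily_segment_totals/"
)
```

In a containerized OCP deployment, a job like this would typically be launched via spark-submit from a pod or a scheduled job, though the exact packaging varies by environment.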
Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field
- At least 6 years of experience as a Data Engineer, working with Hadoop, Spark, and data processing technologies in large-scale environments
- Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Pig, etc.)
- Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes
- Proficiency in programming languages and frameworks commonly used in data engineering, such as Python, Scala, or Java with Spark
- Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket)
- Experience with job schedulers such as Control-M
- Experience with Grafana, Prometheus, or Splunk will be an added benefit (an illustrative monitoring sketch follows this list)
- Quantexa exposure and/or certification a strong plus
- Strong problem-solving and troubleshooting skills with a proactive approach to resolving technical challenges
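As a hedged illustration of the monitoring expectations referenced in the Grafana/Prometheus requirement, the sketch below pushes a few batch-job health metrics to a Prometheus Pushgateway using the prometheus_client library. The gateway address, job name, metric names, and row count are assumptions for illustration only.

```python
# Sketch: expose batch-pipeline health metrics to Prometheus via a Pushgateway.
# The gateway address, job label, metric names, and row count are hypothetical.
import logging
import time

from prometheus_client import CollectorRegistry, Counter, Gauge, push_to_gateway

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

registry = CollectorRegistry()
rows_processed = Counter(
    "pipeline_rows_processed_total", "Rows processed by the batch job", registry=registry
)
last_success = Gauge(
    "pipeline_last_success_unixtime", "Unix time of last successful run", registry=registry
)
duration = Gauge(
    "pipeline_duration_seconds", "Wall-clock duration of the last run", registry=registry
)

start = time.time()
try:
    # ... run the data pipeline here and record how many rows it handled ...
    rows_processed.inc(125_000)  # placeholder row count
    last_success.set_to_current_time()
finally:
    duration.set(time.time() - start)
    # Hypothetical Pushgateway endpoint; short-lived batch jobs push rather than being scraped.
    push_to_gateway("pushgateway.monitoring:9091", job="daily_transaction_rollup", registry=registry)
    log.info("metrics pushed")
```

Dashboards in Grafana would then typically be built on these pushed series, but dashboard design is environment-specific and outside the scope of this sketch.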
About Unison Group
Unison Consulting was launched in September 2012 in Singapore, a hub of the financial industry, with innovative visions in the technocratic arena. We are a boutique next-generation technology company with strong business interests in liquidity risk, ...