Position Summary
The Data Engineer supports the design, development, and maintenance of scalable data infrastructure that enables efficient data collection, transformation, storage, and access across the organization. This role works closely with other data engineers, analysts, and cross-functional stakeholders to build reliable data pipelines and support data-driven decision making. The Data Engineer contributes to the implementation of modern data engineering solutions and helps ensure data quality, performance, and scalability within a fast-paced fintech or cryptocurrency environment.
Essential Duties & Responsibilities
- Design, develop, and maintain scalable data pipelines that support ingestion, transformation, and storage of structured and unstructured data.
- Build and maintain data ingestion processes from multiple sources including APIs, third-party platforms, and internal systems.
- Collaborate with other data engineers, analysts, and product teams to understand data requirements and deliver reliable data solutions.
- Monitor, optimize, and troubleshoot existing data pipelines to ensure performance, reliability, and data accuracy.
- Develop and maintain documentation for data schemas, pipeline architecture, and data engineering processes.
- Perform data profiling and exploratory data analysis to support data modeling and engineering initiatives.
- Contribute to the implementation of data governance, security, and compliance practices in alignment with company policies and regulatory requirements.
- Support continuous improvement of data infrastructure by identifying opportunities to enhance scalability, efficiency, and maintainability.
Required Qualifications
- Master’s degree in Computer Science, Engineering, Data Science, Information Systems, or a related technical field.
- 4 years of experience in data engineering, data analytics, software engineering, or a related technical role; relevant internships or academic projects may count toward this requirement.
- Experience with relational databases and strong proficiency in SQL.
- Proficiency in at least one programming language commonly used in data engineering, such as Python, Java, or Scala.
- Understanding of data warehousing concepts and ETL/ELT pipeline development.
- Familiarity with cloud-based platforms such as AWS, Google Cloud Platform, or Azure.
- Exposure to distributed data processing and workflow orchestration tools such as Apache Spark, Kafka, or Airflow is a plus.