Hybrid Data Engineer

Posted last week

About the role

  • Design, build, and maintain scalable, secure ETL/ELT pipelines across Amazon Redshift and S3
  • Develop and manage AWS Glue jobs for automation, transformation, and catalog management
  • Implement Apache Airflow workflows for data orchestration and monitoring of cross-market processes
  • Optimize data ingestion and transformation performance using AWS-native tools and Redshift best practices
  • Architect and maintain VPCs, subnets, security groups, and gateways to secure data movement between AWS services
  • Apply AWS best practices for IAM policies, encryption, and secure access management
  • Integrate data from external partners into the centralized warehouse
  • Develop CI/CD pipelines in GitLab for automated deployment of data jobs, schema changes, and tests
  • Work with analysts and data scientists to prepare structured datasets for reporting, modeling, and insights

Requirements

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field
  • 3+ years of experience in data engineering or data infrastructure
  • AWS Certified Solutions Architect or a similar certification (VPC setup, S3, Glue, Redshift, IAM, Lambda, Gateway, CloudWatch)
  • Advanced proficiency in SQL and Python for data transformation and automation
  • Experience with GitLab for version control and CI/CD automation
  • Familiarity with Apache Airflow (DAG creation, scheduling, and monitoring)

Benefits

  • Competitive salary and performance-based bonus
  • Professional development and career growth opportunities
  • Flexible work arrangements and an inclusive culture
  • An exciting environment at the forefront of fintech innovation

Job title

Data Engineer

Job type

Not specified

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

Hybrid