About the role

  • Data Engineer responsible for building data pipelines and platforms on GCP and AWS, collaborating with ambitious teams on complex data engineering challenges in a hybrid work environment.

Responsibilities

  • Work on data infrastructure, pipelines, platforms, and processing systems
  • Build and evolve data infrastructure that scales
  • Embed in a data engineering environment spanning multiple cloud platforms
  • Handle complex challenges with large-scale data pipelines, distributed processing, and real-time streaming
  • Collaborate with technical teams and stakeholders

Requirements

  • Hands-on experience with Google Cloud Platform (GCP) for data engineering workloads
  • Strong proficiency in Python
  • Practical experience with Big Data tooling, including Apache Spark, Apache Kafka, Apache Flink, Elasticsearch, Hadoop, and Hive
  • Solid knowledge of data modeling and database design principles
  • Experience with data integration and ETL/ELT pipelines
  • Strong SQL skills with experience across both relational and NoSQL databases
  • Familiarity with additional cloud platforms including AWS or Azure
  • Experience using Git for version control
  • English at B2 level or above

Benefits

  • Flexible work model
  • Collaborative and technically ambitious team

Job title

Data Engineer – GCP, Python, AWS

Experience level

Mid level / Senior

Salary

Not specified

Degree requirement

No education requirement
