Experienced Data Engineer – Python, ETL, Scalable Pipelines (Hybrid)

Posted last week

About the role

  • Design, build, and optimize data pipelines that collect and transform large-scale product datasets
  • Work with Python, SQL, and modern orchestration tools to make our data ecosystem more robust and scalable
  • Refactor and optimize data workflows for speed, quality, and maintainability
  • Implement monitoring and observability (logging, alerting, data validation) to ensure reliability
  • Partner with the data engineering and backend teams to ship features that push our services forward
  • Contribute ideas to improve architecture and guide data best practices

Requirements

  • 3-5+ years of experience in Python for data engineering or ETL development
  • Strong understanding of SQL, relational data models, and schema design
  • Experience building or maintaining ETL/ELT pipelines (Airflow, Prefect, dbt, or similar)
  • Familiarity with cloud environments (AWS, GCP, or Azure)
  • Analytical mindset: you care about data quality, performance, and scale
  • Collaborative communicator who enjoys working across teams
  • Bonus: experience with web scraping, APIs, or data ingestion at scale

Benefits

  • Private health insurance
  • 25 days paid leave + your birthday off
  • Monthly lunch allowance
  • Social team events
  • Support for courses, conferences, and certifications

Job title

Experienced Data Engineer – Python, ETL, Scalable Pipelines

Job type

Experience level

Mid-level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

Hybrid