Hybrid Data Platform Engineer – Spark, ETL focus

About the role

  • Design and develop scalable, distributed, and resilient software systems.
  • Build, test, and optimize Spark-based ETL pipelines in Scala or Python.
  • Collaborate with researchers and data scientists to deliver production-ready data and models.
  • Participate in architecture discussions, peer reviews, and deployment processes.
  • Define and monitor technical and operational metrics to ensure system health and performance.
  • Continuously identify areas for optimization and efficiency improvements across services.
  • Mentor junior engineers and contribute to engineering best practices.
  • Drive innovation by staying current with emerging tools and technologies.

Requirements

  • 5+ years of software engineering experience
  • Strong programming skills in Scala, Python, or Java
  • Hands-on experience building and optimizing Spark ETL jobs
  • Experience with Databricks (DBX) – workflows, data development, and debugging
  • Solid understanding of microservices architecture and distributed systems
  • Experience with AWS, Docker, and Kubernetes
  • Familiarity with Spring Boot, Terraform, and infrastructure-as-code
  • Strong analytical and problem-solving skills

Job title

Data Platform Engineer – Spark, ETL focus

Experience level

Mid level / Senior

Salary

$93 per hour

Degree requirement

Bachelor's Degree
