Hybrid Data Engineer

Posted 11 hours ago

About the role

  • A Data Engineer role on Samba TV's data platform team, strengthening the company's data analytics and reporting capabilities. The platform is built on AWS, Databricks, BigQuery, and Snowflake.

Responsibilities

  • Build scalable data product architecture
  • Modernize data frameworks and integrations with Databricks and BigQuery
  • Maintain and upgrade Apache Airflow to reduce toil for developers
  • Develop and optimize data transformations using Apache Spark (PySpark/Scala)
  • Build procedures and guidelines to help teams operate with data
  • Identify bottlenecks in our development lifecycle and find solutions to improve them
  • Drive innovation throughout the tech org by evangelizing and educating teams on best practices and new technologies
  • Work directly with our data and FinOps teams to drive cross-team initiatives
  • Implement data governance, access control, and auditing using Databricks Unity Catalog
  • Build and integrate automated, reusable data validation suites using data quality frameworks (Great Expectations or similar)
  • Implement monitoring and anomaly detection systems for data quality, reliability, and performance
  • Develop and manage REST APIs to support secure data access, automation, and integration
  • Collaborate with data scientists, analysts, and software engineers to deliver governed, reusable data assets
  • Implement monitoring, logging, and alerting for data workflows
  • Optimize cost and performance of cloud-based data infrastructure

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
  • 5+ years of experience in data engineering or a related role
  • Strong hands-on experience with Databricks and Apache Spark, plus BigQuery or Snowflake
  • Proven experience with modern table formats such as Delta Lake and Iceberg
  • A deep understanding of the data lifecycle and how teams operate with data
  • Hands-on experience implementing data governance and metadata management using Databricks Unity Catalog
  • Experience managing and extending Apache Airflow (custom operators, plugins, infrastructure)
  • Experience with Kubernetes
  • Solid experience with AWS cloud services, especially S3 and data-related services
  • Experience with data validation and data quality principles and working with SLA systems
  • Proficiency in Python and SQL

Equal opportunity

  • Equal opportunity employer
  • Celebration of diversity
  • Commitment to an inclusive environment

Job title

Data Engineer

Job type

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

Hybrid
