DataOps Engineer at Eeze, focused on data pipeline stability across multiple products and collaborating with IT teams to maintain quality, observability, and operational efficiency.
Responsibilities
Ensure the reliable and timely execution of daily data pipelines and scheduled workflows.
Operate and maintain internal data services, including ingestion layers, OLAP/lake storage, materialised views, and task dependencies.
Contribute to CI/CD workflows for data pipelines and participate in deployments, version management, and change control.
Monitor orchestration systems (e.g., Airflow), troubleshoot pipeline failures, delays, and anomalies, and drive continuous performance improvements.
Implement and maintain data quality checks, anomaly detection, schema validation, and audit processes.
Collaborate with Data Engineers on table lifecycle management, storage optimisation, partitioning strategies, and schema evolution.
Work with IT Infrastructure and IT Operations teams to improve platform observability, including logging, metrics, and alerting.
Develop and maintain SOPs, platform standards, best practices, and troubleshooting documentation.
Provide operational support to internal users (DE/DA/DS/Ops) for issues such as query performance, missing data, or inconsistent KPIs.
Requirements
2+ years of experience in DataOps, Data Engineering, BI Engineering, or a similar operational data role.
Experience with CI/CD workflows, Docker, Kubernetes, or other DevOps-related practices.
Hands-on experience with workflow orchestration tools such as Airflow (or equivalent).
Familiarity with mainstream data engineering technologies such as Kafka, Spark, Flink, Delta Lake, Iceberg, Hudi, ClickHouse, or Doris.
Good understanding of data warehousing concepts, including partitioning, schema evolution, table lifecycle management, and OLAP vs. data lake architectures.
Strong SQL skills and familiarity with Python for scripting, automation, or validation.
Strong debugging and problem-solving skills, especially for data anomalies and pipeline failures.
Comfortable working cross-functionally with DE/Infra/Ops/DA/DS teams in a fast-paced environment.