About the role

  • Data Engineer responsible for architecting and optimizing the data foundation of a Kenyan startup, building data pipelines that power machine learning models and internal analytics.

Responsibilities

  • Architect and sustain scalable ETL workflows, guaranteeing consistency and accuracy across diverse data origins.
  • Refine and optimize data models and database structures specifically tailored for reporting and analytics.
  • Enforce industry best practices regarding data warehousing and storage methodologies.
  • Fine-tune data systems to handle the demands of both real-time streams and batch processing.
  • Manage the cloud data environment, utilizing platforms such as AWS, Azure, or GCP.
  • Coordinate with software engineers to embed data solutions directly into our product suite.
  • Design robust processes for ingesting both structured and unstructured datasets.
  • Script automated quality checks and deploy monitoring instrumentation to instantly detect data anomalies.
  • Build APIs and services that ensure seamless data interoperability between systems.
  • Continuously monitor pipeline health, troubleshooting bottlenecks to maintain an uninterrupted data flow.
  • Embed data governance and security protocols that meet rigorous industry standards.
  • Collaborate with data scientists and analysts to maximize the usability and accessibility of our data assets.
  • Maintain comprehensive documentation covering schemas, transformations, and pipeline architecture.
  • Keep a pulse on emerging trends in cloud tech, analytics, and data engineering to drive continuous improvement.

Requirements

  • A minimum of 3 years of professional experience in Data Engineering or a similar technical role.
  • Bachelor’s or Master’s degree in Engineering, Computer Science, Data Science, or a relevant discipline.
  • Expert-level command of SQL and relational database management systems such as PostgreSQL or MySQL.
  • Hands-on proficiency with pipeline tools such as Luigi, dbt, or Apache Airflow.
  • Practical experience with big data technologies such as Hadoop, Spark, or Kafka.
  • Proven skills with cloud data stacks, specifically Google BigQuery, AWS Redshift, or Azure Data Factory.
  • Strong programming logic in Java, Scala, or Python for data processing tasks.
  • Familiarity with data integration frameworks and API utilization.
  • Understanding of security best practices and compliance frameworks.

Benefits

  • Flexible work arrangements
  • Professional development opportunities

Job title

Data Engineer

Job type

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements
