How You’ll Shape Our Success

The purpose of this role is to design, build, and maintain scalable data pipelines and infrastructure that enable the efficient processing and analysis of large, complex data sets.
What You’ll Do

*Develop and maintain automated data processing pipelines using Google Cloud:*
Design, build, and maintain data pipelines to support data ingestion, ETL, and storage
Build and maintain automated data pipelines to monitor data quality and troubleshoot issues
*Implement and maintain databases and data storage solutions:*
Stay up-to-date with emerging trends and technologies in big data and data engineering
Ensure data quality, accuracy, and completeness
*Implement and enforce data governance policies and procedures to ensure data quality and accuracy:*
Collaborate with data scientists and analysts to design, optimise, and maintain data models for analytics and reporting
Monitor and maintain data infrastructure to ensure availability and performance
Requirements
What You’ll Need to Succeed
Experience with cloud platforms such as Amazon Web Services (AWS) or Google Cloud Platform (GCP).
Proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle.
Experience with big data technologies such as Hadoop, Spark, or Hive.
Familiarity with data warehouses such as Amazon Redshift or Google BigQuery, and with workflow orchestration tools such as Apache Airflow.
Proficiency in at least one programming language such as Python, Java, or Scala.
Strong analytical and problem-solving skills with the ability to work independently and in a team environment.
Benefits
**Financial:**
Competitive base salary
Discretionary company bonus scheme
Employee referral scheme
Meal vouchers
**Health & Wellbeing:**
Health care package
Life and health insurance
Bookster
**Time Off & Flexibility:**
28 days of annual leave
Floating bank holidays
An extra paid day off on your birthday
Ten paid learning days per year
Flexible working hours
Sabbatical leave (after 5 years)
Work from anywhere (up to 3 weeks per year)
**Development & Recognition:**
Industry-recognised training & certifications
Bonusly: employee recognition and rewards platform