Data Engineer designing, developing, and maintaining data products by liaising with stakeholders. Requires strong skills in Python, SQL, and big data technologies.
Responsibilities
Work with stakeholders to understand data requirements, then design, develop, and maintain complex ETL processes (an illustrative sketch follows this list).
Create data integration and data diagram documentation.
Lead data validation, UAT, and regression testing for new data asset creation.
Create and maintain data models, including schema design and optimization.
Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency.
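For illustration only, a minimal PySpark sketch of the kind of ETL and data-quality pipeline described in this list; the database, table, and column names (raw_db.orders, curated_db.orders_clean, order_id, order_amount, order_ts) are hypothetical and not taken from this posting.

    # Hypothetical example: extract a raw Hive table, apply basic cleansing,
    # and load a curated, partitioned table.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("orders_etl")
             .enableHiveSupport()
             .getOrCreate())

    raw = spark.table("raw_db.orders")  # extract

    clean = (raw.dropDuplicates(["order_id"])                    # transform: de-duplicate,
                .filter(F.col("order_amount").isNotNull())       # drop rows missing amounts,
                .withColumn("order_date", F.to_date("order_ts")))  # derive a partition date

    (clean.write                                                 # load: overwrite curated table
          .mode("overwrite")
          .partitionBy("order_date")
          .saveAsTable("curated_db.orders_clean"))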
Requirements
Strong knowledge of Python and PySpark.
Expected to write PySpark scripts for developing data workflows, in the spirit of the pipeline sketch above.
Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum.
Expected to write SQL to query metadata and tables across different data management systems such as Oracle, Hive, Databricks, and Greenplum (illustrated in the first sketch after this list).
Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks.
Expected to run Hive SQL queries through Hue and schedule Apache Oozie jobs to automate data workflows.
Good working experience communicating with stakeholders and collaborating effectively with business teams on data testing.
Strong problem-solving and troubleshooting skills.
Expected to establish comprehensive data quality test cases and procedures, and to implement automated data validation processes (illustrated in the second sketch after this list).
Degree in Data Science, Statistics, Computer Science or other related fields or an equivalent combination of education and experience.
5-7 years of experience as a Data Engineer.
Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
Experience with the Azure cloud platform, such as developing ETL processes using Azure Data Factory and performing big data processing and analytics with Azure Databricks.
Strong communication, problem-solving, and analytical skills, with the ability to manage time and multi-task while maintaining attention to detail and accuracy.
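For illustration only, a minimal sketch of querying metadata and tables with Spark SQL against a Hive-backed catalog, similar to what might be run through Hue or Databricks; the database and table names are hypothetical.

    # Hypothetical example: inspect table metadata, then run a simple
    # reconciliation query against the data.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("metadata_checks")
             .enableHiveSupport()
             .getOrCreate())

    # Metadata: list tables in a database and describe one table's schema.
    spark.sql("SHOW TABLES IN curated_db").show(truncate=False)
    spark.sql("DESCRIBE FORMATTED curated_db.orders_clean").show(truncate=False)

    # Data: a simple row-count reconciliation by load date.
    spark.sql("""
        SELECT order_date, COUNT(*) AS row_count
        FROM curated_db.orders_clean
        GROUP BY order_date
        ORDER BY order_date
    """).show()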
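For illustration only, a minimal sketch of the kind of automated data validation checks mentioned above; the table, columns, and checks are hypothetical placeholders.

    # Hypothetical example: a handful of data quality assertions that could run
    # after each pipeline load.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("dq_checks")
             .enableHiveSupport()
             .getOrCreate())

    df = spark.table("curated_db.orders_clean")

    checks = {
        "table_not_empty": df.count() > 0,
        "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
        "keys_unique": df.count() == df.select("order_id").distinct().count(),
        "no_negative_amounts": df.filter(F.col("order_amount") < 0).count() == 0,
    }

    failed = [name for name, passed in checks.items() if not passed]
    if failed:
        raise AssertionError(f"Data quality checks failed: {failed}")
    print("All data quality checks passed.")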
Data Engineer building DUAL Personal Lines’ strategic data platforms for a global insurance group. Providing technical expertise in data engineering and collaborating with internal teams on solution delivery.
Data Engineer role focused on creating and monitoring data pipelines at an innovative energy company. Collaborating with IT and other departments to ensure quality data availability in a hybrid work environment.
SQL Migration Data Engineer at Auxo Solutions focusing on Azure SQL/Fabric Lakehouse migrations and building data pipelines. Collaborating on technical designs and data governance for modernization initiatives.
Data Engineer developing cloud solutions and software tools on Microsoft Azure big data platform. Collaborating with various teams for data analysis and visualization in healthcare.
Boomi Integration Architect designing and leading integration solutions for data warehouses. Collaborating with cross-functional teams to implement scalable integration patterns using Boomi technologies.
Seeking a Boomi Integration Architect specializing in Data Warehouse and Master Data Hub implementations. Responsible for designing high-performance integration solutions across enterprise platforms.
Principal Data Engineer at Serko enhancing global travel tech through data-driven solutions. Collaborating across teams in Bengaluru to drive innovative engineering and best practices.
Data Engineer at Keyrus responsible for building and optimizing data pipelines for major projects. Contributing to data solutions and ensuring data quality in a growing team.