Data Engineer who designs, develops, and maintains data products in close collaboration with stakeholders. Requires strong skills in Python, SQL, and big data technologies.
Responsibilities
Work with stakeholders to understand data requirements, then design, develop, and maintain complex ETL processes.
Create data integration and data diagram documentation.
Lead data validation, UAT, and regression testing for new data asset creation.
Create and maintain data models, including schema design and optimization.
Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency.
Requirements
Strong knowledge of Python and PySpark
Expected to write PySpark scripts to develop data workflows.
Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum
Expected to write SQL to query metadata and tables from different data management systems such as Oracle, Hive, Databricks, and Greenplum.
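The metadata-querying expectation can be sketched with a minimal, runnable example. SQLite stands in for the real platforms here, since Oracle (ALL_TAB_COLUMNS), Hive/Databricks (SHOW TABLES, DESCRIBE), and Greenplum (pg_catalog) each expose their catalogs through their own system views; the table name `bookings` is hypothetical.

```python
import sqlite3

# SQLite stands in for Oracle/Hive/Databricks/Greenplum so this runs anywhere;
# on each real platform the same idea maps to its own catalog views.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, amount REAL)")

# Table-level metadata: list tables from the system catalog.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)    # ['bookings']

# Column-level metadata: names and declared types.
columns = [(row[1], row[2])
           for row in conn.execute("PRAGMA table_info(bookings)")]
print(columns)   # [('id', 'INTEGER'), ('amount', 'REAL')]
```

The same pattern, pointed at the right catalog view, covers the day-to-day metadata checks (does a table exist, what are its columns and types) that precede writing queries against an unfamiliar system.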
Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks.
Expected to use Hue to run Hive SQL queries and to schedule Apache Oozie jobs that automate data workflows.
Good working experience communicating with stakeholders and collaborating effectively with business teams on data testing.
Strong problem-solving and troubleshooting skills.
Expected to establish comprehensive data quality test cases and procedures, and to implement automated data validation processes.
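A minimal sketch of what automated data validation can look like, using only the Python standard library. In a real pipeline these rules would typically run inside PySpark or a dedicated framework such as Great Expectations; the column names and rules here are hypothetical.

```python
# Hypothetical booking records; the second fails a range check and the
# third reuses a key that should be unique.
rows = [
    {"id": 1, "fare": 120.0, "currency": "USD"},
    {"id": 2, "fare": -5.0,  "currency": "USD"},
    {"id": 2, "fare": 80.0,  "currency": "EUR"},
]

def validate(rows):
    """Return (rule_name, record_id) pairs for every failed check."""
    failures = []
    seen = set()
    for row in rows:
        if row["id"] in seen:                             # uniqueness check
            failures.append(("duplicate_id", row["id"]))
        seen.add(row["id"])
        if row["fare"] is None or row["fare"] < 0:        # null/range check
            failures.append(("invalid_fare", row["id"]))
        if row["currency"] not in {"USD", "EUR", "GBP"}:  # domain check
            failures.append(("unknown_currency", row["id"]))
    return failures

print(validate(rows))   # [('invalid_fare', 2), ('duplicate_id', 2)]
```

Each rule is a named test case, so the same list of failures can feed an alert, a dashboard, or a gate that blocks promotion of a bad data asset.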
Degree in Data Science, Statistics, Computer Science, or a related field, or an equivalent combination of education and experience.
5-7 years of experience in data engineering.
Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
Experience with the Azure cloud computing platform, such as developing ETL processes using Azure Data Factory and performing big data processing and analytics with Azure Databricks.
Strong communication, problem-solving, and analytical skills, with the ability to manage time and multitask with attention to detail and accuracy.
Intermediate Data Engineer designing and building data pipelines for travel industry data management. Collaborating across teams to ensure reliable data for analytics and reporting.
Data Engineer managing and organizing datasets for AI models at Walaris, developing AI-driven autonomous systems for defense and security applications.
Data Engineer designing and maintaining data pipelines at Black Semiconductor. Collaborating with process, equipment, and IT teams to support manufacturing analytics and decision-making.
Junior Data Engineer role focusing on Business Intelligence and Big Data at Avanade. Collaborating on data analysis and SQL queries in a supportive learning environment.
GCP Data Engineer designing and developing data processing modules for Ki, an algorithmic insurance carrier. Working closely with multiple teams to optimize data pipelines and reporting.
Data Engineer at Securian Financial optimizing scalable data pipelines for AI and advanced analytics. Collaborating with teams to deliver secure and accessible data solutions.
IT Data Engineering Co-Op at BlueRock Therapeutics supporting development of scientific data systems. Collaborating on data workflows and foundational AWS data engineering tasks.
Data Engineer I building and operationalizing complex data solutions for Travelers' analytics using Databricks. Collaborating within teams to educate end users and support data governance.
Data Engineer shaping modern data architecture to drive golf’s digital transformation. Collaborating with teams to enhance data pipelines and insights for customer engagement and revenue growth.
Staff Data Engineer overseeing complex data systems for CITY Furniture. Responsible for architecting and optimizing data ecosystems in a hybrid work environment.