Big Data Engineer developing applications for Synchrony’s Enterprise Data Lake within an Agile scrum team. Collaborating to deliver high-quality data ingestion and maintain data governance standards.
Responsibilities
Develop big data applications for Synchrony in Hadoop ecosystem
Participate in the Agile development process, including backlog grooming, coding, code reviews, testing, and deployment
Work with team members to achieve business results in a fast-paced and quickly changing environment
Work independently to develop analytic applications leveraging technologies such as: Hadoop, NoSQL, In-memory Data Grids, Kafka, Spark, Ab Initio
Provide data analysis for Synchrony’s data ingestion, standardization, and curation efforts, ensuring all data is understood in a business context
Identify enablers and the level of effort required to properly ingest and transform data for the data lake
Profile data to assist with defining the data elements, propose business term mappings, and define data quality rules
Work with the Data Office to ensure that data dictionaries for all ingested and created data sets are properly documented in the data dictionary repository
Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories
Assist with the creation and implementation of data quality rules
Ensure the proper identification of sensitive data elements and critical data elements
Create source-to-target data mapping documents
Test current processes and identify deficiencies
Investigate program quality to make improvements to achieve better data accuracy
Understand functional and non-functional requirements and prepare test data accordingly
Plan, create, and manage test cases and test scripts
Identify process bottlenecks and suggest actions for improvement
Execute test scripts and collect test results
Present test cases, test results, reports and metrics as required by the Office of Agile
Perform other duties as needed to ensure the success of the team and the application, and ensure the team’s compliance with the applicable Data Sourcing, Data Quality, and Data Governance standards
Requirements
Bachelor's degree OR, in lieu of a Bachelor's degree, a High School Diploma/GED and a minimum of 2 years of Information Technology experience
Minimum of 1 year of hands-on experience writing shell scripts, complex SQL queries, Hive scripts, and Hadoop commands, and using Git
Ability to write abstracted, reusable code components
Programming experience in at least one of the following languages: Scala, Java or Python
Analytical mindset
Willingness and aptitude to learn new technologies quickly
Superior oral and written communication skills; ability to collaborate across teams of internal and external technical staff, business analysts, software support, and operations staff
Senior Data Engineer responsible for migrating and modernizing data platforms in banking. Rebuilding a critical data platform with a focus on risk and core financial data flows.
Data Engineering Lead managing enterprise-scale data platforms using AWS, Snowflake, and Databricks in financial services. Leading data engineering teams and ensuring data governance.
AWS Data Engineer working in Gurugram to support data architecture and integration solutions. Collaborating and translating business needs into data models.
Senior Data Engineer handling data engineering responsibilities in a hybrid setting for the banking industry. Collaborating with cross-functional teams and maintaining data quality in Azure environments.
Data Management professional at Kyndryl involved in creating innovative data solutions and ensuring the seamless operation of complex data systems. Collaborating with teams to transform requirements into scalable database solutions.
Software Engineer designing and developing scalable data processing applications on cloud infrastructure for Thomson Reuters. Collaborating with Data Analysts on AI-enabled solutions for data management and insight generation.
Manager of Data Platform overseeing AWS cloud infrastructure and Snowflake data warehouses for Thomson Reuters. Leading the design and implementation of data processing applications in a hybrid role located in Bengaluru.
Senior Data Engineer designing scalable data pipelines and solutions for Enterprise Data Lake at Thomson Reuters. Collaborating across teams to ensure efficient data ingestion and accessibility.
Senior Data Engineer at Technis developing scalable data pipelines and solutions for innovative connected-spaces products. Collaborating within a cross-functional team to deliver high-quality, data-driven outcomes.