Data Engineer specializing in digital transformation at Bounteous. Collaborating on data solutions and migration projects with a focus on data integrity and quality.
Responsibilities
Pipeline Migration
Logic & Scheduling: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment.
Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity.
Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "handoff and sign-off" conversations with data owners to ensure migrated assets meet business requirements.
Consumption Pattern Migration
Code Conversion: Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg.
Usage Analysis: Analyzing consumption and usage patterns to deliver the required data products.
Data Reconciliation & Quality: A rigorous approach to data validation is required. Candidates must work with reconciliation frameworks to build confidence that migrated data is functionally equivalent to the data already used in production flows.
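For illustration only (the helper names are hypothetical, not the team's actual reconciliation framework), the core of such a check is comparing row-level fingerprints between the legacy and migrated datasets and reporting missing, extra, and mismatched keys:

```python
import hashlib

def row_fingerprint(row, columns):
    """Deterministic checksum of a row over a fixed column order."""
    canonical = "|".join(str(row.get(col)) for col in sorted(columns))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def reconcile(legacy_rows, migrated_rows, key, columns):
    """Compare two datasets keyed on `key`; report rows that differ or are missing."""
    legacy = {r[key]: row_fingerprint(r, columns) for r in legacy_rows}
    migrated = {r[key]: row_fingerprint(r, columns) for r in migrated_rows}
    shared = set(legacy) & set(migrated)
    return {
        "missing": set(legacy) - set(migrated),   # in legacy only
        "extra": set(migrated) - set(legacy),     # in migrated only
        "mismatched": {k for k in shared if legacy[k] != migrated[k]},
    }
```

In production, checks like this typically run per partition with aggregate checksums rather than row by row, but the principle of proving functional equivalence is the same.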
Requirements
Education: Bachelor’s or Master’s degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field.
Experience: Minimum of 5 years of professional "hands-on-keyboard" coding experience in a collaborative, team-based environment, including the ability to troubleshoot SQL and basic scripting experience.
Languages: Professional proficiency in Python or Java.
Methodology: Deep familiarity with the full Software Development Life Cycle (SDLC), CI/CD best practices, and Kubernetes (K8s) deployment.
Core Data Engineering Competencies: Candidates must demonstrate a sophisticated understanding of the following modeling concepts to ensure data correctness during reconciliation:
Temporal Data Modeling: Managing state changes over time (e.g., SCD Type 2).
Schema Management: Expertise in schema evolution (e.g., as implemented in Apache Iceberg) and schema enforcement strategies.
Performance Optimization: Advanced knowledge of data partitioning and clustering.
Architectural Theory: Balancing Normalization vs. Denormalization and the strategic use of Natural vs. Surrogate Keys.
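As a concrete example of the first competency above, here is a minimal sketch of the SCD Type 2 pattern in plain Python (hypothetical helper names; dictionaries stand in for table rows): when a tracked attribute changes, the current version is end-dated and a new version is opened.

```python
from datetime import date

def apply_scd2(history, key, incoming, effective, tracked):
    """Apply one incoming record to an SCD Type 2 history list.

    Each history row carries `valid_from`, `valid_to`, and `is_current`.
    A change in any tracked attribute closes the current row and opens a new one.
    """
    current = next(
        (r for r in history if r[key] == incoming[key] and r["is_current"]), None
    )
    if current and all(current[c] == incoming[c] for c in tracked):
        return history  # no change in tracked attributes: keep history as-is
    if current:
        current["valid_to"] = effective   # end-date the outgoing version
        current["is_current"] = False
    history.append(
        {**incoming, "valid_from": effective, "valid_to": None, "is_current": True}
    )
    return history
```

During reconciliation, correctly handling patterns like this matters because naive row counts will not catch a migration that collapsed or duplicated historical versions.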
Technical Stack Requirements:
Extraction & Logic: Kafka, ANSI SQL, FTP, Apache Spark
Data Formats: JSON, Avro, Parquet
Platforms: Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ
The candidate will also work with our internal data management platform; an aptitude for learning new workflows and language constructs is essential.