Senior Data Engineer creating analytical solutions and data pipelines for healthcare products at Press Ganey. Leading technology implementations and mentoring data engineering teams.
Responsibilities
Create data architecture, pipelines, and analytical solutions to meet software and data science requirements for various PG-MX Healthcare Products
Identify, evaluate, select, and prove out new technologies and toolsets
Create and execute Proofs of Concept and Proofs of Technology
Lead and direct the work of others in data-dependent projects
Collaborate with software development, business teams, analysts, and data scientists to establish data storage, pipeline, and structure requirements
Design, develop, and maintain ETL/ELT pipelines using Databricks (PySpark, Delta Lake, SQL)
Implement Data Lakehouse architecture leveraging Databricks Unity Catalog, Delta Live Tables, and Workflows
Build and optimize data ingestion frameworks for structured and unstructured data from diverse sources
Identify and plan for data storage performance requirements
Optimize Databricks clusters, jobs, and queries for performance and cost efficiency
Identify impact of implementation on other applications and databases
Lead and mentor data engineers on data projects
Implement CI/CD for data pipelines using Git, Databricks Repos, and DevOps tools
Ensure data quality, reliability, and security compliance across environments
Build and evolve Trusted Record systems to manage entities across the enterprise
Design, implement, and evolve solutions around person identity management
Develop and enforce data governance, lineage, and cataloging standards
Identify areas of development and need, and provide targeted training and growth opportunities for team members
Requirements
Minimum of 5 years Data Engineering experience in an enterprise environment
Bachelor’s degree in technology or a related field required
Hands-on experience with Azure data technologies (Databricks, Data Factory, Stream Analytics, Data Lake Storage, Synapse) and on-premises Microsoft tools (SQL DB and SSIS), plus familiarity with AWS data technologies
Proficiency in Python, SQL, and distributed data processing frameworks (Spark), and familiarity with C#, PowerShell, and APIs
Significant experience building analytical solutions in relational databases such as MS SQL Server, Oracle, and DB2, as well as experience with NoSQL databases and solutions such as data lakes, document-oriented databases, and graph databases
Strong understanding of data modeling, schema design, and ETL best practices
Experience with data lineage, cataloging, and metadata management in Databricks and Unity Catalog
Skill in data modeling and experience with tools like ER/Studio or Erwin
Familiarity with version control (Git) and DevOps/CI-CD practices
Familiarity with SQL performance tuning and Spark optimization techniques
Excellent problem-solving and communication skills
Benefits
Health insurance
401(k) matching
Flexible work hours
Paid time off
Discretionary bonus or commission tied to achieved results