About the role

  • Data Engineer developing and maintaining data infrastructure at Highmark Health, responsible for ensuring the efficient and reliable flow of data across various systems.

Responsibilities

  • Design, develop, and maintain robust data processes and solutions to ensure the efficient movement and transformation of data across multiple systems
  • Develop and maintain data models, databases, and data warehouses to support business intelligence and analytics needs
  • Collaborate with stakeholders across IT, product, analytics, and business teams to gather requirements and provide data solutions that meet organizational needs
  • Monitor work against the production schedule, provide progress updates, and report any issues or technical difficulties to lead developers regularly
  • Implement and manage data governance practices, ensuring data quality, integrity, and compliance with relevant regulations
  • Collaborate on the design and implementation of data security measures
  • Perform data analysis and provide insights to support decision-making across various departments
  • Stay current with industry trends and emerging technologies in data engineering, recommending new tools and best practices as needed
  • Other duties as assigned or requested

Requirements

  • 3 years of experience in the design and analysis of algorithms, data structures, and design patterns
  • 3 years of experience in a data engineering, ETL development, or data management role
  • 3 years of experience with SQL and database technologies (e.g., MySQL, PostgreSQL, MongoDB)
  • 3 years of experience with data warehousing concepts and data warehouse solutions (e.g., Snowflake, Redshift, BigQuery)
  • Proficiency in Python for data manipulation, scripting, API integrations, and developing robust data engineering solutions
  • Strong SQL skills for complex data extraction, transformation, and loading (ETL/ELT) across various database systems
  • Demonstrated experience with version control systems (Git/GitHub/GitLab) for collaborative development, code management, and deployment best practices
  • Experience implementing and following SDLC best practices for data solutions
  • Hands-on experience with Google BigQuery or comparable cloud-native data warehouse solutions
  • Knowledge of data governance, data quality, and data lineage best practices
  • Experience with dimensional modeling (Star Schema, Snowflake Schema) and other data modeling techniques
  • Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources
  • Experience building and managing data pipelines using workflow orchestration tools like Apache Airflow or Google Cloud Composer
  • Proficiency with Google Cloud Platform (GCP) services relevant to data engineering, such as BigQuery, Cloud Storage, Cloud Functions, and Pub/Sub
  • Familiarity with Data Lake/Lakehouse concepts and their application in modern data architectures
  • Solid understanding and practical application of dbt (data build tool) for data transformation, testing, documentation, and version control of data models
  • Experience with Starburst for data federation and analytics

Benefits

  • Health insurance
  • Retirement plans
  • Paid time off
  • Flexible work arrangements
  • Professional development

Job title

Data Engineer

Experience level

Mid level, Senior

Salary

$67,500 - $126,000 per year

Degree requirement

Bachelor's Degree
