Hybrid Principal Data Engineer

Posted 4 days ago

About the role

  • Principal Data Engineer responsible for designing scalable Snowflake architectures and managing enterprise data solutions, collaborating on data engineering projects and optimizing performance in a hybrid work environment.

Responsibilities

  • Design and implement scalable Snowflake data architectures to support enterprise data warehousing and analytics needs
  • Optimize Snowflake performance through advanced query tuning, virtual warehouse sizing strategies, and efficient data-sharing solutions
  • Develop robust data pipelines using Python and dbt, including modeling, testing, macros, and snapshot management
  • Implement and enforce security best practices such as RBAC, data masking, and row-level security across cloud data platforms
  • Architect and manage AWS-based data solutions leveraging S3, Redshift, Lambda, Glue, EC2, and IAM for secure and reliable data operations
  • Orchestrate and monitor complex data workflows using Apache Airflow, including DAG design, operator configuration, and scheduling
  • Utilize version control systems such as Git to manage codebase and facilitate collaborative data engineering workflows
  • Integrate and process high-volume data using Apache ecosystem tools such as Spark, Kafka, and Hive, with an understanding of Hadoop environments
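To illustrate the kind of security work listed above: Snowflake enforces data masking natively through masking policies attached to columns, but the underlying role-based logic can be sketched in plain Python. This is a hypothetical, simplified analogue for candidates unfamiliar with the concept; the role names and function are illustrative, not part of any real system.

```python
# Illustrative sketch of role-based column masking (the idea behind
# Snowflake's dynamic data masking). All role names are hypothetical.

def mask_email(value: str, role: str) -> str:
    """Return the raw email for privileged roles, a masked form otherwise."""
    if role in {"SECURITY_ADMIN", "DATA_STEWARD"}:  # hypothetical privileged roles
        return value
    local, _, domain = value.partition("@")
    # Keep only the first character of the local part, mask the rest.
    return local[0] + "***@" + domain if local else "***"

print(mask_email("jane.doe@example.com", role="ANALYST"))
print(mask_email("jane.doe@example.com", role="SECURITY_ADMIN"))
```

In Snowflake itself, the equivalent is a `CREATE MASKING POLICY` definition that branches on `CURRENT_ROLE()`, applied to the column once and enforced for every query.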

Requirements

  • 12-15 years of experience, including significant hands-on expertise in Snowflake data architecture and data engineering
  • Advanced hands-on experience with Snowflake, including performance tuning and warehousing strategies
  • Expertise in Snowflake security features such as RBAC, data masking, and row-level security
  • Proficiency in advanced Python programming for data engineering tasks
  • In-depth knowledge of dbt for data modeling, testing, macros, and snapshot management
  • Strong experience with AWS services including S3, Redshift, Lambda, Glue, EC2, and IAM
  • Extensive experience designing and managing Apache Airflow DAGs and scheduling workflows
  • Proficiency in version control using Git for collaborative development
  • Hands-on experience with Apache Spark, Kafka, and Hive
  • Solid understanding of Hadoop ecosystem
  • Expertise in SQL, from foundational queries to advanced development, including SnowSQL, PL/SQL, and T-SQL
  • Strong requirements-analysis, presentation, and documentation skills; able to translate business needs into clear, structured functional and technical documents and present them effectively to stakeholders

Job title

Principal Data Engineer

Job type

Experience level

Lead

Salary

$140,000 - $150,000 per year

Degree requirement

Bachelor's Degree

Location requirements
