About the role

  • Design and develop scalable, robust data ingestion and transformation pipelines on Azure and Databricks.
  • Build lakehouse-based solutions organized in a layered architecture (bronze, silver, gold).
  • Ensure pipeline performance and scalability through best practices in partitioning, caching, and parallelism.
  • Automate processes with a focus on reusability, monitoring, and quality control.
  • Serve as the technical reference for the data engineering team, promoting best practices and standardization.

Requirements

  • Senior-level professional with experience in data engineering.
  • Able to design and develop scalable, robust pipelines in cloud environments.
  • Advanced knowledge of Azure and Databricks.
  • Familiarity with lakehouse architecture best practices and data governance.
  • On-site presence required 3 times per week in the Faria Lima area.
  • Experience with transactional databases and requirements analysis.
  • Proficiency with Azure Data Lake Gen2, Azure Data Factory, Key Vault, and integration across Azure services.
  • Proficiency in Spark (PySpark) and Python for pipeline development and data transformation.
  • Experience with Git and CI/CD applied to data pipelines.
  • Advanced English.

Benefits

  • Multi-benefit card – choose how and where to use it.
  • Scholarships for undergraduate, postgraduate, MBA and language courses.
  • Certification incentive programs.
  • Flexible working hours.
  • Competitive salaries.
  • Annual performance review with a structured career plan.
  • Opportunity for international career development.
  • Wellhub and TotalPass.
  • Private pension plan.
  • Childcare assistance.
  • Health insurance.
  • Dental insurance.
  • Life insurance.

Job title

Senior Data Engineer

Job type

Experience level

Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

On-site 3 times per week in the Faria Lima area.