About the role

  • Oversee maintenance and optimization of data pipelines within the Databricks platform
  • Design, develop, and maintain scalable data workflows and ETL processes integrated with AWS
  • Collaborate with cloud and frontend teams to unify data sources and establish a coherent data model
  • Guarantee availability, integrity, and performance of data pipelines and proactively monitor workflows to maintain high data quality
  • Engage with cross-functional teams to identify opportunities for data-driven enhancements and insights
  • Analyze platform performance, identify bottlenecks, and recommend improvements
  • Develop and maintain comprehensive technical documentation for ETL implementations
  • Stay abreast of Databricks/Spark features and best practices

Requirements

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • Minimum of 5 years' experience as a Data Engineer, including Databricks pipeline implementation
  • Strong expertise in PySpark
  • Proficiency in SQL and scripting languages (Python)
  • Experience in cloud environments (AWS or Azure) is a plus
  • Strong communication skills in French (advanced) and fluency in English
  • Familiarity with industry-specific regulations and compliance (preferred)
  • Previous experience in the energy trading domain (nice-to-have)
  • Excellent analytical and problem-solving skills
  • Detail-oriented with effective task prioritization skills

Benefits

  • Technical and personal support on every project, along with effective career management
  • Training to develop your professional skills
  • Participation in special dedicated events
  • Join a dynamic team

Job title

Data Engineer

Job type

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements
