About the role

  • As a Data Engineer, you will build and maintain Azure data platforms for Hultafors Group's analytics and reporting needs, collaborating with business functions across the organization in a cloud environment.

Responsibilities

  • Build and run our central Data Foundation platform, the single source of truth for the organization
  • Work mainly in Azure with a lakehouse architecture (Azure Data Lake, Data Factory, Databricks) to collect, transform and consolidate data from ERPs, CRMs and other systems
  • Enable analytics and Power BI reporting for Finance, Sales, Logistics, Sustainability, Management and other functions
  • Design, develop and maintain data pipelines and models in our Data Foundation platform
  • Build and optimize ingestion in Azure Data Factory from ERPs, CRMs and internal/external sources
  • Develop and maintain transformation logic in Databricks (SQL and Python) and implement scalable lakehouse / dimensional models
  • Profile, map and validate data to ensure high data quality and consistency for downstream analytics
  • Monitor and operate pipelines and platform components (scheduling, performance, availability, error handling) according to SLAs
  • Troubleshoot and resolve incidents, perform root cause analysis and implement preventative improvements
  • Act as a 2nd/3rd line expert for data platform and pipeline issues
  • Contribute to architecture, standards, reusable components and best practices
  • Collaborate with business stakeholders, application owners, BI developers, data analysts, architects and vendors to translate requirements into data solutions
  • Create and maintain technical documentation for pipelines, datasets and models

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, Mathematics or a similar field, or equivalent experience
  • 3–5+ years as Data Engineer, BI/Data Warehouse Developer or similar
  • Hands-on experience with Azure Data Lake, Azure Data Factory and Databricks
  • Strong skills in building and operating batch and/or streaming data pipelines in a lakehouse or data warehouse
  • Solid SQL and Python skills for data engineering, ideally in Databricks
  • Good understanding of relational/analytical databases and performance optimization
  • Knowledge of data governance, data quality, security and handling of sensitive data
  • Experience with monitoring, logging, alerting and ITSM processes (incident/problem/change management)
  • Familiarity with CI/CD, version control and automated testing for data pipelines

Benefits

  • Competitive salary
  • Flexible working hours
  • Professional development opportunities

Job title

Data Engineer

Job type

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements
