Senior Data Engineer in PwC's FCU Technology Team designing scalable data pipelines and collaborating on analytical solutions. Working with Databricks and supporting junior engineers in a hybrid work model.
Responsibilities
Design, build and maintain scalable, reliable data pipelines and data platforms supporting analytical and reporting solutions
Work on end-to-end data engineering solutions – from data ingestion, through transformation and storage, to serving curated datasets for analytics and reporting
Develop and optimize ETL / ELT pipelines using Databricks (Apache Spark, SQL, Python) and Delta Lake technologies
Take ownership of data modelling, data structures and performance optimization in analytical data stores (lakehouse / data warehouse)
Implement and maintain data quality, data validation and monitoring mechanisms ensuring accuracy, consistency and reliability of processed data
Collaborate closely with data analysts, BI developers and business stakeholders to translate business and regulatory requirements into robust technical solutions
Contribute to architecture decisions related to data platforms, data processing patterns and technology choices
Support and mentor junior data engineers, helping them grow their technical and consulting competencies
Actively participate in client-facing work – discussing requirements, presenting solutions and explaining technical concepts in an accessible way
Keep up with latest data engineering, cloud and Anti Financial Crime trends and contribute to internal initiatives and accelerators
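The responsibilities above mention implementing data quality and validation mechanisms before serving curated datasets. As a minimal, framework-free sketch of a row-level validation gate (the function and rules here are illustrative, not PwC's actual tooling):

```python
def validate_rows(rows, required=("id", "amount")):
    """Split rows into valid and rejected sets using simple rules:
    required fields must be present and non-null, and amount must be
    non-negative. Rejected rows would typically be routed to a
    quarantine table for review rather than silently dropped."""
    valid, rejected = [], []
    for row in rows:
        # Short-circuit: the None check runs before the range check,
        # so a missing/null amount never reaches the comparison.
        if any(row.get(f) is None for f in required) or row["amount"] < 0:
            rejected.append(row)
        else:
            valid.append(row)
    return valid, rejected

valid, rejected = validate_rows([
    {"id": 1, "amount": 10},
    {"id": 2, "amount": -5},    # negative amount -> rejected
    {"id": None, "amount": 3},  # null key -> rejected
])
```

In a Databricks pipeline the same pattern is usually expressed as Spark filter expressions or Delta Live Tables expectations; the logic above only illustrates the accept/quarantine split.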
Requirements
Master’s degree (preferably in Computer Science, Data Engineering, Mathematics, Statistics or similar)
Commercial experience in data engineering, database development or data platform roles
Strong understanding of data engineering fundamentals: ETL/ELT, data warehousing, lakehouse architectures
Hands-on experience with Databricks, including:
- building and maintaining Spark-based batch and/or streaming data pipelines
- working with Delta Lake (ACID tables, schema evolution, incremental processing, merges)
- optimizing performance (partitioning, file compaction, query optimization)
- developing pipelines using Databricks notebooks, jobs and workflows
Very good knowledge of SQL (designing, writing and optimizing complex queries)
Experience with Python for data processing and transformations (e.g. pandas, PySpark)
Solid understanding of data modelling, data quality and data governance concepts
Experience working with cloud-based data platforms (Azure preferred)
Ability to gather and translate business requirements into technical solutions
Excellent communication skills and ability to work with both technical and non-technical stakeholders
Ability to work effectively under pressure while maintaining a high level of accuracy
Fluent written and spoken English
Willingness to work in international project teams
Nice to have:
- Experience with streaming data processing (e.g. Spark Structured Streaming)
- Knowledge of data governance or metadata tools (e.g. Collibra)
- Experience in financial services, AML / AFC or regulatory-driven environments
- Additional languages: German, Dutch or French
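The Delta Lake requirements above mention incremental processing and merges: Delta's MERGE INTO performs an upsert, updating matched rows and inserting unmatched ones. A minimal plain-Python sketch of those matched-update / unmatched-insert semantics (names are illustrative; in practice this would be a Spark `DeltaTable.merge` or SQL MERGE statement):

```python
def merge_upsert(target, updates, key="id"):
    """Upsert rows from `updates` into `target`, matched on `key`.
    Mirrors MERGE semantics: WHEN MATCHED -> update the target row,
    WHEN NOT MATCHED -> insert the new row."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key[row[key]] = dict(row)  # update if matched, insert otherwise
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}]
updates = [{"id": 2, "amount": 250}, {"id": 3, "amount": 300}]
merged = merge_upsert(target, updates)
```

On a real Delta table the engine additionally gives you ACID guarantees and file-level pruning during the merge; this sketch only shows the row-matching logic an incremental load relies on.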
Benefits
Work flexibility - hybrid working model, flexible start of the day, sabbatical leave
Development and upskilling - full support during the onboarding process, mentoring from experienced colleagues, training sessions, workshops, certifications co-financed by PwC and conversations with a native speaker
Wide medical and well-being program - a medical care package (incl. physiotherapy, discounts on dental care), coaching, mindfulness sessions, psychological support, education through dedicated webinars and workshops, financial and legal advice
Possibility to create your individual benefits package (among others: lunch pass, insurance packages, concierge, veterinary package for a pet, massages) and access to a cafeteria - vouchers, discounts on IT equipment and car purchase, 3 paid hours for volunteering per month