Senior Data Engineer designing and overseeing data pipelines in Databricks on AWS. Responsible for data quality and performance for enterprise analytics and AI workloads.
Responsibilities
Design, build and maintain scalable ETL/ELT pipelines that ingest, transform and deliver trusted data for analytics and AI use cases.
Build data integrations with well-known SaaS platforms such as Salesforce, NetSuite, Jira and others.
Implement incremental and historical data processing to ensure accurate, up-to-date data sets.
Ensure data quality, reliability and performance across pipelines through validation, testing and continuous code optimization.
Contribute to data governance and security by supporting data lineage, metadata management and data access controls.
Support production operations, including monitoring, alerting and troubleshooting.
Work with stakeholders to translate business and technical requirements into well-structured, reliable datasets.
Share knowledge and contribute to team standards, documentation and engineering best practices.
Requirements
Data Ingestion & Integration: hands-on experience building robust ingestion pipelines using tools and patterns such as Databricks Auto Loader, Lakeflow Connectors, Fivetran and/or custom API / file-based integrations.
Core Data Engineering: strong development experience using SQL, Python and Apache Spark (PySpark) for large-scale data processing.
Data Pipeline Orchestration: proven experience developing and operating data pipelines using Databricks Workflows & Jobs, Delta Live Tables (DLT) and/or Lakeflow Declarative Pipelines.
Incremental Processing & Data Modelling: deep understanding of incremental data loading, including Change Data Capture (CDC), MERGE operations and Slowly Changing Dimensions (SCD) in a Lakehouse environment.
Data Transformation & Lakehouse Design: experience in designing and implementing Medallion Architecture (bronze, silver and gold) using Delta Lake.
Data Quality, Testing and Observability: experience implementing data quality checks with tools and frameworks such as DLT expectations, Great Expectations or similar, including pipeline testing and monitoring.
Data Governance & Lineage: hands-on experience with data cataloguing, lineage and metadata management within Unity Catalog to support governance, auditing and troubleshooting.
Performance Optimization: experience tuning Spark and Databricks workloads, including partitioning strategies, file sizing, query optimization and efficient use of Delta Lake features.
Production Engineering Practices: experience working with code versioning (Git), peer review and promoting pipelines through development, test and production environments.
Security & Access Control Awareness: understanding of data access control, sensitive data handling and working with Unity Catalog in the context of governed environments.
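To illustrate the incremental-processing skills listed above (CDC, MERGE and SCD Type 2), here is a minimal pure-Python sketch of the row-versioning logic that a Delta Lake `MERGE INTO` typically implements in this role; the function name, column names (`valid_from`, `valid_to`, `is_current`) and list-of-dicts representation are illustrative assumptions, not part of the job description.

```python
from datetime import date

def scd2_merge(dim, updates, key, tracked, today=None):
    """Sketch of Slowly Changing Dimension Type 2 merge logic:
    expire changed rows and append new current versions.
    `dim` and `updates` are lists of dicts; `key` is the business key;
    `tracked` are the attributes whose change triggers a new version."""
    today = today or date.today().isoformat()
    # copy the existing dimension so the input is not mutated
    out = [dict(r) for r in dim]
    current = {r[key]: r for r in dim if r["is_current"]}
    for u in updates:
        old = current.get(u[key])
        if old is None:
            # brand-new business key: insert as the current version
            out.append({**u, "valid_from": today, "valid_to": None, "is_current": True})
        elif any(old[a] != u[a] for a in tracked):
            # tracked attribute changed: expire the old version, append the new one
            for r in out:
                if r[key] == u[key] and r["is_current"]:
                    r["valid_to"] = today
                    r["is_current"] = False
            out.append({**u, "valid_from": today, "valid_to": None, "is_current": True})
        # unchanged rows are left untouched
    return out
```

In a Lakehouse pipeline the same logic would be expressed as a single `MERGE INTO` on a Delta table (or handled declaratively via DLT's `apply_changes`), which keeps the operation atomic at scale.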
Benefits
Nuix creates innovative software that empowers organizations to simply and quickly find the truth from any data in a digital world. We are a passionate and talented team, delighting our customers with software that transforms data into actionable intelligence. As we expand our global team and extend our skills and expertise, we are unified as one Nuix team guided by our shared values.

Nuix is an equal opportunities employer. Don't let imposter syndrome hold you back! We welcome all applications and are a flexible employer. We strive to make any required adjustments where possible to make the process fair and equitable for everyone. Please reach out to [email protected] if you need any accommodations throughout the interview process.

Love the role, but not the right fit for you? Know someone that might be awesome for this role? We're always looking for talented people who want to make a real impact. If you refer someone and we successfully hire them, you'll receive a $1,000 gift card.