Data Engineer at Mobiz designing, building, and maintaining scalable data solutions for analytics. Collaborating with teams to leverage modern cloud technologies and improve data-driven decision-making.
Responsibilities
Design, build, test, deploy, and maintain robust data pipelines using Databricks, Azure Data Factory, and Microsoft Fabric, supporting both batch and streaming workloads.
Ingest, integrate, and harmonize data from multiple cloud and on-premises sources to deliver a unified, reliable, and trusted data view for analytics and reporting.
Use Python and SQL to clean, transform, validate, and model data to meet business, reporting, and analytical requirements.
Contribute to data architecture and platform design decisions, ensuring solutions are scalable, secure, cost-effective, and aligned with long-term business and technical strategy.
Build and maintain metadata-driven pipelines that gracefully adapt to schema changes, new data sources, and evolving ingestion patterns.
Proactively identify data quality issues, pipeline failures, and performance bottlenecks; perform root-cause analysis and implement durable, long-term solutions.
Continuously optimize data pipeline performance, reliability, and scalability to ensure fast, consistent, and dependable access to data.
Implement automated data quality checks, validation rules, and monitoring dashboards to ensure accuracy, consistency, and trust in delivered datasets.
Monitor, analyze, and manage data pipeline and compute costs across Azure, Databricks, and Microsoft Fabric, recommending and implementing cost-optimization strategies without compromising performance or availability.
Automate routine and repetitive data engineering tasks, applying best practices for CI/CD, deployment, testing, and operational support.
Design, implement, and maintain secure data access controls using role-based access, audits, and governance best practices to ensure appropriate data security and compliance.
Maintain clear, accurate, and up-to-date technical documentation, and provide ongoing operational support as data platforms and business needs evolve.
Stay current with emerging data engineering technologies, cloud services, and analytics best practices, and evaluate new tools through proofs of concept when appropriate.
Collaborate closely with product managers, software engineers, analysts, and operations teams to embed high-quality data solutions into core business processes.
Promote and enforce data engineering best practices including version control, modular and reusable pipeline design, documentation standards, and scalable architecture patterns.
Enable and support data democratization by providing well-modeled, secure, and self-service-ready datasets through Power BI, Microsoft Fabric, or custom analytics solutions.
Deliver measurable outcomes such as highly reliable pipelines with minimal manual intervention, on-time pipeline execution within service windows, rapid resolution of critical incidents, and consistently high data quality.
Ensure reusable and standardized pipeline components are leveraged across teams to accelerate delivery, improve consistency, and reduce operational overhead.
Provide stakeholders with reliable, well-modeled datasets that can be confidently explored without requiring ongoing engineering support.
Ensure all data solutions comply with organizational security, access, and governance policies while maintaining positive feedback from internal and external users.
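Two of the responsibilities above — metadata-driven pipelines and automated data quality checks — can be combined in a small illustrative sketch. This is a minimal example, not Mobiz's actual tooling: the `PIPELINE_METADATA` config, dataset name, and column rules are all hypothetical stand-ins for the kind of metadata a production pipeline would read from a catalog or control table.

```python
# Minimal sketch of a metadata-driven data quality check.
# All names here (PIPELINE_METADATA, "sales", the column rules) are
# illustrative assumptions, not a real pipeline configuration.
from typing import Any

# Hypothetical metadata: required columns and non-null rules per dataset.
# In practice this would come from a control table or metadata catalog,
# so adding a new source or rule needs no code change.
PIPELINE_METADATA = {
    "sales": {
        "required_columns": ["order_id", "amount"],
        "not_null": ["order_id"],
    },
}

def validate_rows(dataset: str, rows: list[dict[str, Any]]) -> list[str]:
    """Return human-readable quality issues found in the given rows,
    driven entirely by the metadata entry for `dataset`."""
    meta = PIPELINE_METADATA[dataset]
    issues: list[str] = []
    for i, row in enumerate(rows):
        for col in meta["required_columns"]:
            if col not in row:
                issues.append(f"row {i}: missing column '{col}'")
        for col in meta["not_null"]:
            if row.get(col) is None:
                issues.append(f"row {i}: null value in '{col}'")
    return issues
```

Because the rules live in metadata rather than code, the same validator can be reused across datasets and teams — the reuse and standardization goal the responsibilities above describe.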
Requirements
3–5 years of hands-on experience in data engineering or related roles, with proven experience building, supporting, and operating production-grade data pipelines.
Strong hands-on experience with Databricks, including development, optimization, and operational support of Databricks-based data solutions.
Proven experience with Azure Data Factory (ADF) and Microsoft Fabric for data ingestion, orchestration, and transformation.
Advanced proficiency in SQL for querying, data modeling, performance tuning, and troubleshooting.
Strong programming skills in Python for data transformation, automation, scripting, and pipeline development.
Hands-on experience with Microsoft Azure cloud services related to data storage, compute, networking, and security.
Solid understanding of ETL/ELT patterns, data integration strategies, and real-time or near-real-time data processing concepts.
Experience designing metadata-driven and reusable pipeline components that scale across multiple teams and projects.
Strong understanding of data governance, data quality frameworks, monitoring practices, and automated validation techniques.
Experience supporting high-availability data platforms with SLAs, monitoring, alerting, and incident response processes.
Demonstrated ability to optimize cloud compute usage and manage costs efficiently across Azure, Databricks, and Microsoft Fabric.
Experience using version control systems such as Git and following modern software engineering best practices.
Ability to design secure, scalable, and cost-effective data architectures aligned with business needs.
Strong analytical, problem-solving, and troubleshooting skills with a proactive and ownership-driven mindset.
Excellent communication skills, with the ability to explain complex data concepts to both technical and non-technical stakeholders.
Ability to work independently as well as collaboratively in cross-functional team environments.
High attention to detail with a strong focus on data accuracy, reliability, and documentation quality.
Bachelor’s degree in Computer Science, Engineering, or a related field.
Relevant certifications are strongly preferred, including Databricks Certified Data Engineer, Microsoft Certified: Azure Data Engineer Associate, and Microsoft Certified: Fabric Data Engineer Associate.
Benefits
Competitive salary and a comprehensive benefits plan
A dynamic and collaborative work environment with the opportunity to work with cutting-edge technologies and innovative solutions