Associate Data Engineer designing, building, and maintaining data solutions at Manulife. Collaborating with Data Scientists and Engineers to enhance data workflows and model deployment.
Responsibilities
Designs, builds, and maintains reliable, efficient, and scalable data infrastructure for data collection, storage, transformation, and analysis.
Implements data orchestration pipelines, data sourcing, cleansing, augmentation, and quality control processes.
Works with business and technology collaborators to understand current and future data infrastructure needs.
Designs, builds, and maintains scalable data solutions, including data pipelines, data models, and applications, for efficient and reliable data workflows, including those tailored for machine learning.
Builds, implements, and maintains current and future data platforms, such as data warehouses and repositories for structured and unstructured data.
Collaborates with Data Scientists and Engineers to create features and pre-process data for ML models and move data analysis models into production.
Designs and develops analytical tools, algorithms, data landscape modernization roadmaps, and programs to support Data Engineering activities such as writing scripts and automating tasks.
Applies a variety of data interchange formats to ensure data requirements are met and continuously monitors data integrity across the organization.
Integrates machine learning algorithms into current production systems and workflows, taking into account compatibility with other systems, data sources, and APIs.
Builds data querying APIs and advocates for their efficient use to ensure seamless access to organizational data sources.
Evaluates, integrates, and manages tools and frameworks within the data engineering ecosystem, ensuring compatibility and efficiency in model development and deployment.
Designs and promotes data versioning and lineage tracking to ensure transparency and traceability for data used in ML model training and inference.
Requirements
Knowledge of database systems, data lakes, and NoSQL databases
Knowledge of data warehouse concepts and architectures (e.g., Synapse)
Familiarity with data quality and data modelling tools
Proficiency in using version control systems like Git for managing codebase
Experience with cloud-native data services such as Azure Data Factory and Databricks, and with processing frameworks and languages such as PySpark and Scala
Practical experience with big data processing frameworks and techniques such as HDFS, MapReduce, storage formats (Avro, Parquet), and stream processing
Experience integrating with back-end and legacy environments
Knowledge of AI model deployment in production environments
Experience handling real-time data for AI applications
Ability to build and deploy DataOps and MLOps pipelines in cloud-native environments
Benefits
health, dental, mental health, and vision insurance
short- and long-term disability
life and AD&D insurance coverage
adoption/surrogacy and wellness benefits
employee/family assistance plans
retirement savings plans (including pension and employer matching contributions)
customizable benefits
paid time off including holidays, vacation, personal days, sick days
Data Engineer managing payment processing and data accuracy while collaborating with financial teams. Building and optimizing data pipelines for transactional data in a hybrid work environment.
Data Engineer building analytical tools for Dry Bulk market data operations at Kpler. Join a team of over 700 experts transforming data into actionable strategies.
Data Engineer developing tools for maintaining data integrity in cargo tracking at Kpler. Collaborating with analysts and engineers to enhance data quality management.
Lead Azure Data Engineer designing and optimizing data ecosystems on Microsoft Cloud. Responsible for building scalable data platforms and pipelines for analytics and reporting.
Data Engineer providing support for IBM DataStage ETL jobs at Callibrity. Collaborating with stakeholders and working to modernize technology solutions in a hybrid work environment.
Cloud Data Engineer implementing tailored solutions for Volkswagen Group data processing. Building ETL/ELT pipelines while collaborating with technical experts.
Data Engineer responsible for building scalable data infrastructure that supports data-driven decisions. Collaborating with team to maintain systems and unlock data value for organizations.
Data Engineer designing and optimizing data pipelines using Databricks and Google Cloud Platform. Collaborating with analysts and scientists to deliver high-quality data products.