Data Engineer Internship at OpenTable focusing on scalable data pipelines and data infrastructure development. Collaborate with global teams and receive mentorship from experienced engineers in Toronto.
Responsibilities
Collaborate globally with our Product and Data teams while working day-to-day alongside and learning from our Toronto-based data engineers in a supportive, hybrid environment
Build and evolve scalable data pipelines (ETL/ELT) that ensure reliable data flow across systems and teams
Work on data modeling, transformation, and validation to maintain high-quality datasets that power analytics and machine learning
Help optimize and modernize our data infrastructure, improving performance and reliability across platforms
Partner with engineers, analysts, and data scientists to identify opportunities for automation and better data accessibility
Grow through mentorship, receiving guidance and coaching from experienced data engineers to sharpen your technical skills and accelerate your career
Requirements
Currently pursuing an undergraduate degree in Computer Science, Data Science, Engineering, Mathematics, or a related technical field. Expected to graduate by Spring 2027.
Strong programming skills: Solid computer science fundamentals with experience in Python and/or SQL
Interest in data infrastructure: Excited to build and maintain data pipelines, models, and systems
Database familiarity: Understanding of relational databases and data processing tools; experience with cloud platforms (AWS, GCP, or Azure) is a plus
Understanding of data modeling and ETL concepts: Coursework or project experience is a strong advantage
Analytical mindset: You enjoy solving complex data problems and thinking critically about performance and trade-offs
Curiosity and drive: Eager to learn new technologies like Spark, Airflow, or modern data warehouses
Collaboration skills: You communicate clearly and work effectively with cross-functional partners
Data Engineer role focused on creating and monitoring data pipelines at an innovative energy company. Collaborate with IT and other departments to ensure the availability of high-quality data in a hybrid work environment.
SQL Migration Data Engineer at Auxo Solutions focusing on Azure SQL/Fabric Lakehouse migrations and building data pipelines. Collaborating on technical designs and data governance for modernization initiatives.
Data Engineer developing cloud solutions and software tools on the Microsoft Azure big data platform. Collaborating with various teams on data analysis and visualization in healthcare.
Boomi Integration Architect designing and leading integration solutions for data warehouses. Collaborating with cross-functional teams to implement scalable integration patterns using Boomi technologies.
Seeking a Boomi Integration Architect specializing in Data Warehouse and Master Data Hub implementations. Responsible for designing high-performance integration solutions across enterprise platforms.
Principal Data Engineer at Serko enhancing global travel tech through data-driven solutions. Collaborating across teams in Bengaluru to drive innovative engineering and best practices.
Data Engineer at Keyrus responsible for building and optimizing data pipelines for major projects. Contributing to data solutions and ensuring data quality in a growing team.
Data Architect designing and implementing scalable data architectures for Keyrus in Bordeaux. Leading client transitions and contributing to the tech ecosystem with innovative data solutions.