Data Engineer designing automated data pipelines for a leading recycling and sustainability company. Collaborating with IT and operations to streamline workflows and increase efficiency.
Responsibilities
Build intelligent data pipelines and robotic process automations that link business systems and streamline workflows (a brief sketch of such a pipeline follows this list).
Work hands-on with technologies like Python, SQL Server, and modern APIs.
Integrate tools such as OpenAI, Anthropic, and Azure ML to automate decision-making.
Collaborate closely with IT and operations teams to design and deliver smart, data-powered solutions.
Apply performance tuning and best practices for database and process efficiency.
Support data governance and version control processes to ensure consistency and transparency.
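To illustrate the kind of pipeline work described above, here is a minimal sketch, assuming a hypothetical REST endpoint and SQL Server staging table; the URL, connection string, table, and field names are placeholders for illustration only, not systems named in this posting.

```python
"""Minimal sketch of an API-to-SQL-Server load, assuming the `requests` and
`pyodbc` packages; the endpoint, connection string, and table are hypothetical."""
import json

import pyodbc
import requests

API_URL = "https://example.com/api/pickups"  # hypothetical source-system endpoint
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=ops;Trusted_Connection=yes;"
)


def extract() -> list[dict]:
    # Pull JSON records from the source system's REST API.
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def load(rows: list[dict]) -> None:
    # Insert the records into a SQL Server staging table, keeping the raw
    # JSON payload alongside the keyed columns for later auditing.
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        cur.fast_executemany = True
        cur.executemany(
            "INSERT INTO staging.pickups (id, site, payload) VALUES (?, ?, ?)",
            [(r["id"], r["site"], json.dumps(r)) for r in rows],
        )
        conn.commit()


if __name__ == "__main__":
    load(extract())
```

In practice a scheduled job (SQL Server Agent, cron, or a CI/CD-deployed orchestrator) would run a script like this on an interval and log failures.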
Requirements
Bachelor’s degree or equivalent experience in data engineering, computer science, or software development.
Must have personally owned an automated pipeline end-to-end (design → build → deploy → maintain).
Minimum of 3 years of hands-on experience building production data pipelines with Python and SQL Server. Contract, academic, bootcamp, or coursework experience does not qualify.
Intermediate to advanced Python development skills, particularly for data and API automation.
Experience working with RESTful APIs and JSON data structures.
Familiarity with AI/ML API services (OpenAI, Anthropic, Azure ML, etc.) and their integration into data workflows; see the sketch after this list.
Experience with modern data stack components such as Fivetran, dbt, or similar tools preferred.
Knowledge of SQL Server performance tuning and query optimization.
Familiarity with Git and CI/CD workflows for data pipeline deployment.
Bonus: Experience deploying or maintaining RPA or AI automation solutions.
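As a hedged example of the AI/ML API familiarity listed above, the sketch below calls the OpenAI chat completions REST endpoint with plain `requests` to classify a free-text record; the model name, prompt, and category labels are illustrative assumptions, and an Anthropic or Azure ML endpoint would follow the same request/response pattern with its own schema.

```python
"""Minimal sketch: enriching a record via the OpenAI chat completions REST
endpoint. Model name, prompt, and categories are assumptions, not requirements
from this posting."""
import os

import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"


def classify_material(description: str) -> str:
    # Ask the model to map a free-text material description to one category.
    payload = {
        "model": "gpt-4o-mini",  # assumed model; use whatever the team standardizes on
        "messages": [
            {"role": "system",
             "content": "Reply with one word: plastic, metal, paper, glass, or other."},
            {"role": "user", "content": description},
        ],
    }
    # Assumes OPENAI_API_KEY is set in the environment.
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    resp = requests.post(OPENAI_URL, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip().lower()


if __name__ == "__main__":
    print(classify_material("Crushed HDPE bottles, mixed colors"))
```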
Benefits
Collaborative, innovation-driven environment
Encouragement to bring fresh ideas and experiment with new technologies
Ownership of solutions from concept to completion
Contribution to sustainability and continuous improvement
Data Engineer role in São Paulo developing data pipelines and maintaining data architecture. Collaborative role with a focus on data governance and analytics.
Data Warehouse Engineer developing end-to-end solutions for projects at TD Bank. Leading technical design and delivery of effective tech solutions.
Building, optimizing and supporting Beghou’s AI data platform in cloud environments using Databricks and Python. Requires hands-on development and adherence to software engineering best practices.
People Manager leading data engineering teams for Kramp, fostering growth in a tech environment. Overseeing performance and collaboration within a dynamic digital landscape.
Senior Data Engineer for Semrush, developing scalable data pipelines and optimizing data systems. Collaborating with teams for analytics and mentoring junior engineers in best practices.
Intern working on data engineering tasks for machine learning in the automotive field. Collaborating with Data Engineers and learning about data management tools.
Data Engineer developing and maintaining ETL processes using Azure Data Factory and Snowflake. Collaborating with teams to ensure reliable data for analytical purposes.
Senior Data Engineer at a fast-growing MNC designing scalable data pipelines and infrastructure for AI. Collaborating with teams while building solutions for analytics and energy optimization.
Senior Principal Engineer managing data quality framework implementation at Mercer. Collaborating with international stakeholders and ensuring robust data governance practices.
Data Engineer designing and implementing architectures in cloud environments. Collaborating with teams to define technical standards and achieve business goals.