Data Engineer II role focusing on developing and maintaining data pipelines for analytics. Collaborating with Data Science and Analytics teams to ensure data quality and reliability.
Responsibilities
Support the development, maintenance and evolution of data pipelines for analytical consumption, with a focus on quality, performance and reliability.
Contribute to the construction and maintenance of analytical datasets, tables and features used by Analytics and Data Science teams.
Assist with integrating data from multiple sources (internal and external), following defined architecture and governance standards.
Participate in automating manual processes and continuously improving data ingestion and transformation workflows.
Develop ETL/ELT routines using SQL and Python under the guidance of more experienced professionals.
Support documentation of pipelines, data models and workflows, ensuring traceability and shared understanding.
Collaborate with data scientists and analysts in building and evolving business variables and metrics.
Work on support and monitoring of existing pipelines, helping to identify and address failures, inconsistencies and data quality issues.
Follow data engineering best practices, including code versioning and change control.
Work collaboratively with the team, participating in technical refinements and continuously learning about the business.
Requirements
Bachelor's degree (completed or in progress) in Computer Engineering, Information Systems, Computer Science or related fields.
Knowledge of SQL for querying and manipulating data in relational databases.
Initial or academic experience with Python for data processing and transformation.
Basic knowledge of data pipelines (ETL/ELT) and analytical architecture.
Familiarity with cloud environments (preferably AWS).
Initial exposure to or familiarity with Amazon Redshift or other analytical databases.
Basic knowledge of code versioning with Git.
Ability to understand problems, seek help, learn from feedback and grow technically.
Good communication skills to interact with the team and understand business requirements.
Benefits
Meal and/or food allowance.
Health and dental insurance.
Life insurance.
Partnerships with TotalPass and ZenKlub.
Extended maternity and paternity leave.
Childcare assistance.
Up to 50% discounts on postgraduate and MBA programs from major institutions such as FIA, FAAP and PUCRS.
No strict dress code: wear what makes you comfortable.
Senior Data Engineer at Capgemini designing and optimizing scalable data architectures on Databricks and GCP. Collaborating across teams to transform business needs into reliable technical solutions.
Data Engineer transforming legacy on-premises systems to cloud-native architectures for advanced data analytics. Collaborating with teams to build efficient data solutions using Python and AWS.
Data Engineering Academy focused on Snowflake and Databricks for professionals interested in expanding their technical capabilities. Fully remote with future office work in Monterrey or Saltillo after completion.
Senior Data Engineer at Intent HQ designing and scaling data platforms. Building high-impact intelligence from millions of customer insights with a focus on performance and reliability.
SAP Data Engineer supporting MERKUR GROUP's evolution into a data-driven company. Responsible for data integration, modeling, and collaboration with various departments in Group Finance.
Data Engineer at Booz Allen Hamilton organizing data and developing advanced technology solutions. Leading data engineering activities for mission-driven projects and mentoring multidisciplinary teams.
Senior Data Engineer at Bristol Myers Squibb developing scalable data pipelines for foundational products. Collaborating with data scientists and IT professionals to ensure data quality and accessibility.
Senior Data Architecture Specialist designing and maintaining data integration solutions for Morgan Stanley. Involved in building data architecture and optimizing data storage using various technologies.
Lead Data Engineer responsible for building and maintaining the central HR data lake. Collaborating with analysts and business stakeholders for data-driven decision making.