Staff Data Engineer needed to architect data systems at Dialectica. Collaborate on complex data infrastructure and mentor engineering teams, with a focus on high-quality data pipelines.
Responsibilities
Architect End-to-End Systems: Go beyond traditional data pipelining. Design and oversee the implementation of complex, distributed data architectures that span ingestion, processing, storage, and serving layers on AWS.
Bridge App & Data: Collaborate deeply with Backend and Frontend engineering teams during the design phase of new features. You will influence database schema design in microservices and define event emission standards to ensure data is usable downstream before code is even written.
API & Consumption Layer Design: Don't just dump data into a warehouse. Architect high-performance data access layers (e.g., GraphQL APIs, low-latency lookups via Redis) that allow our product teams to consume processed data easily and efficiently in the UI.
Elevate Engineering Standards: Define and enforce best practices for Infrastructure as Code (Terraform), CI/CD for data products, data testing, and observability across the entire stack.
Technical Leadership & Mentorship: Serve as a technical beacon for the data organization. Mentor Senior Data Engineers, conduct high-level code reviews, and drive pragmatic technical decision-making that balances immediate business needs with long-term scalability.
Solve the "Hardest" Problems: Take ownership of the most complex, intractable technical challenges related to data consistency, real-time processing, and cross-system integrations.
Requirements
8+ years of combined experience in Software Engineering and Data Engineering, with at least 3 years operating at a Senior or Lead level.
True "Full Stack" Exposure: You must have a background in core software engineering beyond just SQL and Python scripts. You should be comfortable navigating backend codebases (e.g., Node.js, Go, or Python application code) to understand how data is generated.
Mastery of Modern Data Stacks: Deep expertise in Python, advanced SQL, and architecting production data lakes/warehouses on cloud platforms (AWS preferred).
Architectural Expertise: Strong background in distributed systems design, event-driven architecture (Kafka, Kinesis, SNS/SQS), and microservices patterns. You understand the trade-offs between consistency and availability (CAP theorem) in real-world scenarios.
Infrastructure as Code: Deep hands-on experience with Kubernetes, Docker, and Terraform. You don't just use infrastructure; you design it.
Advanced Tooling: Expert-level knowledge of orchestration tools (Airflow, Dagster) and modern transformation tools like dbt.
A Product Mindset: You don't just serve data; you understand the business value of what you are building and how it impacts the end-user experience.
Fluency in English is a must.
Benefits
Competitive salary pegged to international standards with performance incentives.
Premium Prepaid Medicine (Medicina Prepagada) coverage.
Flexible hybrid or remote work model based in Bogotá.
Extra personal/flex days and paid volunteer days.
Learning and development budget (Udemy, conferences, certifications).
Entrepreneurial culture and amazing coworkers across 3 continents.
Company-sponsored team-bonding events and wellness activities.
Join Luminor as a Mid/Senior Data Engineer focusing on risk and finance reporting. Design scalable data architectures and optimize data systems supporting evolving regulatory requirements in a dynamic banking environment.
Senior GCP Data Engineer designing, building, and optimizing data platforms on GCP. Collaborating with product teams to deliver high-performance data solutions.
Snowflake Data Engineer optimizing data pipelines using Snowflake for a global life science company. Collaborate with cross-functional teams for data solutions and performance improvements in Madrid.
Data Engineer designing and implementing big data solutions at DATAIS. Collaborating with clients to deliver actionable business insights and innovative data products in a hybrid environment.
SAP Data Engineer supporting MERKUR GROUP in becoming a data-driven company. Responsible for data integration, ETL processes, and collaboration with various departments.
Big Data Engineer designing and managing data applications on Google Cloud. Join Vodafone’s global tech team to optimize data ingestion and processing for machine learning.
Data Engineer building and maintaining data pipelines for Farfetch’s data platform. Collaborating with the Data team to improve data reliability and architecture in Porto.
Senior Data Engineer at Razer leading initiatives in data engineering and AI infrastructure. Collaborating across teams to develop robust data solutions and enhance AI/ML projects.
Data Engineering Intern working with data as Jua builds AI for climate and geospatial datasets. Contributing to the integration and validation of new datasets with experienced mentors.