Data Engineer at Pipedrive designing and maintaining reliable data pipelines with modern tools and a supportive team. Collaborating across teams to ensure smooth data flow from source to insight.
Responsibilities
Build and maintain data integrations and pipelines (ETL/ELT, APIs, webhooks, event streams, etc.).
Design and implement reusable analytical models (using dbt, PySpark, and other modern tools).
Work with teams across the company to understand their needs, identify data opportunities that drive impact, and help them get the most out of our Data Platform.
Monitor and improve production data solutions, resolving issues proactively and during on-call rotations.
Contribute to frameworks that make data processing more efficient and speed up insights.
Contribute to engineering best practices through code reviews, design discussions, and knowledge sharing.
Requirements
3+ years of engineering experience, ideally with Python.
Proven experience building and maintaining production-grade data pipelines.
Experience with distributed data systems (like Spark or Flink) and cloud platforms (ideally AWS).
Strong SQL and data modelling skills.
Hands-on experience with the modern data stack (dbt, orchestration tools, etc.) and awareness of industry trends.
Experience in at least one business domain, such as Customer, Marketing, or Finance.
A clear communicator who enjoys working with others in an agile setup.
A degree in Computer Science, Math, Statistics, or equivalent practical experience.
Benefits
People-first culture – Be part of a team that values authenticity, champions collaboration, and supports each other: no egos, just teamwork. Work alongside top talent from around the world in an inclusive space where different perspectives fuel our best ideas. Everyone is welcome.
Unlock potential – Push boundaries, take ownership, and experiment with the latest technologies as we advance our AI First Vision. We empower bold ideas that drive real change.
We’ve got you – Your well-being matters. Enjoy flexible hours, wellness perks, and SWAG. Think performance-based bonuses, 28 paid leave days, well-being days, compassionate leave, and even pawternal leave, because we take care of ourselves and our people.
Grow with us – Whether through mentorship, coaching, or internal mobility, we invest in helping you unlock your potential. Open, honest feedback and clear communication are at our core. We grow together through trust and accountability.
Data Engineer II focusing on strategic ingestion product for Travelers. Building data solutions and collaborating across teams to support analytic transformation.
Data Engineer at Tatum focusing on scalable data solutions and blockchain technology. Collaborating with teams to ensure data integrity and manage data infrastructure in a hybrid setup.
Senior Lead Data Engineer at Capital One collaborating with Agile teams and mentoring developers. Leading full-stack development and driving cloud-based solutions for financial empowerment.
Senior Data Engineer responsible for developing data pipelines and collaborating with US business teams. Working at Beghou Consulting, a life sciences company providing advanced analytics and technology solutions.
Data Solutions Architect designing enterprise-scale Azure and Databricks Lakehouse solutions for clinical trials and life sciences data, enabling advanced analytics and compliance.
Data Architect at ADEO ensuring interoperability of IT systems through architecture design and data knowledge diffusion. Collaborating with teams to maintain data integrity and quality standards in an international setup.
Consultant, Data Engineer leading end-to-end data solutions and analytics. Collaborating with clients to improve data strategies and deliver actionable insights.
Big Data Engineer developing applications for Synchrony’s Enterprise Data Lake within an Agile scrum team. Collaborating to deliver high-quality data ingestion and maintain data governance standards.
Data Engineer optimizing data pipelines and cloud solutions for GFT Poland. Involves performance tuning, ETL pipelines, and data model development across multiple locations in Poland.