Senior Data Engineer optimizing data models and data pipelines for Aroundhome. Collaborating across teams to enhance the data platform for effective decision-making.
Responsibilities
Design necessary data models and transformations to curate raw data.
Develop, optimize, and maintain data models, pipelines, and transformations to support analytics, reporting, and AI use cases, including (but not limited to) curating, transforming, annotating, and modeling data.
Architect and contribute to the implementation of a scalable, modern data platform, including a data lakehouse or warehouse, to support real-time/near-real-time data flows from Kafka to downstream consumers.
Optimize ETL/ELT pipelines using tools like dbt, Spark, or Airflow, bridging upstream (e.g., Debezium, MSK) and downstream processes.
Build and optimize real-time data pipelines using Kafka, Spark, and Delta Live Tables.
Support the team lead in establishing and enforcing data governance frameworks, including data lineage, quality standards, cataloguing, metadata management, a single source of truth (SSOT) for business glossaries/CBC terms, and policies that ensure reliable reporting.
Ensure the existence of, or adaptation to, full Data Life Cycle Management (DLCM) and end-to-end testing.
Collaborate with the team to integrate AI/ML capabilities, such as feature engineering and model serving, to accelerate data products for market penetration and operational efficiency; operationalize ML models and integrate AI into business processes.
Mentor the team on best practices, modern tools (e.g., Databricks, Snowflake, AI tooling and integrations such as Cursor/CodeRabbit), and cloud-native scalability.
Collaborate with Product Analytics, domain teams, and business to deliver data solutions that drive value and are aligned with business needs.
Requirements
Master's degree in Computer Science, Data Engineering, or a related field (or equivalent experience)
10+ years of experience in data engineering, with 5+ years in senior roles focused on modern architectures.
Excellent communication and collaboration skills, the ability to drive change and influence stakeholders, and a passion for mentoring, coaching, and sharing knowledge.
Proven expertise in designing, developing, and maintaining data lakehouses/DWHs (e.g., Databricks Delta Lake, Snowflake) and transformations (e.g., dbt, SQL/Python/Spark).
Strong experience with cloud platforms such as AWS services (S3, Athena, MSK/Kafka, Terraform) and real-time streaming (e.g., Kafka, Spark Structured Streaming, Flink).
Hands-on knowledge of data governance tools (e.g., Unity Catalog, Collibra) for lineage, quality, catalogs, and SSOT.
Familiarity with AI/ML pipelines and MLOps (e.g., MLflow, feature stores) and with complex system integration across modern data technologies.
Proficiency in CI/CD for data, and tools like Git, Airflow, or dbt Cloud.
Experience with large-scale data modeling (Data Vault, dimensional, schema-on-read) and optimizing for self-service analytics.
Benefits
Hybrid Work // Stay flexible: Decide freely whether you want to work from our office in Berlin, from home throughout Germany, or fully remote from Portugal. Additionally, we offer you the possibility to work up to 30 days per year from selected EU countries.
Room to grow // Stay curious: In regular/annual feedback meetings, you design your own personal development path together with your team lead. In addition, we offer monthly Tech Talks, an annual training budget, free access to LinkedIn Learning as well as great workshops to seek new input and specific training for your personal development.
Mental & physical health // Stay healthy & active: We care about your well-being and support your health with a discounted membership at Urban Sports Club. In addition, our cooperation partner “Fürstenberg Institute” is available to you and your family members with advice and support when it comes to mental health & coaching.
Taking responsibility // We #care: We care for our planet and the people who are living on it. To make our world a little bit better, our ESG Team (Environmental, Social & Governance) promotes our awareness for environmentally friendly behavior. In addition, we work with JobRad, offer a discounted BVG ticket, and actively support social projects in Berlin at our annual Social Day to give something back.
Teamspirit // Stay connected: What drives us is our sense of community. That's why we celebrate our Company Day once a month in our office at Potsdamer Platz, where everyone gets together in person. In addition to a comprehensive update of our C-Board, the main focus is on togetherness and the exchange of ideas over lunch and dinner together.
Consultant supporting SAP Data Migration solutions at Deloitte in Southeast Asia. Engaging clients and delivering high-quality data migration services through project implementation.
Senior Full-Stack BI Architect designing and implementing domain-driven data solutions for Proactive Technology Management. Leveraging expertise in data modeling and cloud technologies to provide actionable business insights.
Data Engineer role at DyFlex Solutions, designing enterprise-scale data solutions for client projects using cloud and modern frameworks. Hands-on technical delivery combined with client-facing consulting in a hybrid work setup.
Data Engineer building and maintaining data ingestion pipelines for a leading AI startup. Ensuring data quality and collaborating with engineering teams to support product development.
Data Engineer focusing on SQL performance optimization and data pipeline development for MIC Global's insurtech solutions. Collaborating with cross-functional teams to enhance data quality and analytics.
Senior Data Engineer developing cloud-based data solutions for commercial real estate. Collaborating in an Agile environment to enhance data integrity and availability across platforms.
Data Engineer at Vistra designing and maintaining data pipelines for analytics. Collaborating with teams and optimizing data integration using modern cloud technologies.
Intern in Data Analysis and Data Engineering at a startup in Köln, focusing on software engineering and data analytics. Participating in the launch of an interactive sports app.
Snowflake Data Engineer responsible for data pipelines and warehouses for enterprise analytics at Liberty Coca-Cola. Collaborating across business functions to ensure high data quality and performance.
Full-Stack Data Engineer designing and optimizing complex data solutions for automotive content. Collaborating with teams to enhance user experience across MOTOR's product lines.