Data Engineer developing and maintaining scalable data pipelines for Kognia Sports Intelligence. Collaborating with engineers and data scientists to support data-driven decision making.
Responsibilities
Design, build and support modern and scalable data pipelines using data processing frameworks, technologies, and platforms
Use best practices around CI/CD, automation, testing, and monitoring of analytics pipelines
Collaborate with software engineers, researchers, data scientists, and stakeholders to understand what data is required and how best to make it available in our platform
Improve our cloud architecture and design new architectures as the need arises
Identify and address opportunities for improvement in areas such as delivery speed and infrastructure cost
Investigate new technologies and approaches as needed
Take part in an on-call rotation with your team members
Requirements
Minimum 1 year in a similar position or 3 years in other engineering roles with relevant responsibilities
Fluent with one or more high-level programming languages (Python preferred, but also Ruby, Java, Scala, Go, or similar)
Willing to work mostly in Python, with the possibility of other stacks as the team decides on a service-by-service basis
Experience working with SaaS production architectures in GCP (preferred) or AWS
Ability to adapt to a fast-paced, changing agile environment
Interest (if not experience) in DevOps technologies such as Kubernetes
Excellent team player with strong verbal and written communication skills
Comfortable working in English: we’re an international team based in Barcelona, with English as our shared language
Experience providing data and infrastructure for building and deploying ML models to production (preferred)
Experience working in multi-functional teams with end-to-end responsibility for product development and delivery within your mission (preferred)
Front-end experience in React (preferred)
Interested in being the glue between engineering and research (preferred)
Experience in data quality and governance (preferred)
Specific knowledge of GitLab CI/CD (preferred)
Knowledge of containerization, GitOps, and Linux (preferred)
Hands-on Kubernetes experience is a particularly strong plus (preferred)
Senior Data Engineer optimizing data pipelines for AI solutions developed for clients. Working on data architecture and implementing machine learning models for scalable environments.
Data Engineer developing cloud migration and data solutions for retail at Public Group. Engage in multiple projects and create growth opportunities in a hybrid team environment.
Cloud Data Engineer at Regions designing, building, and maintaining data structures and pipelines. Collaborating on data initiatives, ensuring optimal architecture, working closely with technical partners.
Data Engineer in Veepee's Data Factory working on data ingestion pipelines and improving data quality. Collaborative environment utilizing Kubernetes, Python, Java, and modern data architectures.
Data Architect designing and maintaining enterprise data architecture at Envalior. Driving enterprise-wide impact ensuring scalability and reliability of systems, reporting, and AI initiatives.
Data Engineer role at Valmont focused on data analytics and technology for sustainable agricultural practices. Collaborating with cross-functional teams to enhance data management and analytics tools.
Senior Data Engineer at Barclays building and maintaining data pipelines and warehouses. Collaborating with data scientists and ensuring data accuracy, accessibility, and security.
Lead Data Engineer guiding a team in designing scalable data solutions for iKnowHow S.A. Overseeing development of data pipelines while collaborating with cross-functional teams.
Data Engineer at LPL Financial developing Python-based ETL pipelines. Collaborating with cross-functional teams to ensure reliable data delivery and optimizing pipeline performance.