Data Engineer optimizing data pipelines and cloud solutions for GFT Poland. The role involves performance tuning, ETL pipeline development, and data modeling across multiple locations in Poland.
Responsibilities
Your responsibilities will include performance tuning and optimization of existing solutions, building and maintaining ETL pipelines, as well as testing and documenting current data flows
You will also be involved in implementing tools and processes to support data-related projects and promoting the best development standards across the team
Design, build, test and deploy Cloud and on-premise data models and transformations in Cloud Native or dedicated toolset
Optimize data views for specific visualization use cases, using schema design, partitions, indexes, down-sampling, archiving, etc. to manage trade-offs such as performance and flexibility
Review, refine, interpret, and implement business and technical requirements
Contribute to ongoing productivity and prioritization by refining User Stories, Epics, and Backlogs in Jira
Onboard new data sources; design, build, test, and deploy Cloud data ingestion, pipelines, warehouses, and data models/products
Requirements
At least 4-5 years of commercial experience as a Data Engineer
Strong Python and PySpark skills
Strong hands-on experience with SQL and query optimization
Experience with ETL/ELT pipelines development, testing, and management
Strong experience with Hadoop
Understanding of key concepts around Data Warehousing, Data Lakes and Data Lakehouses
Experience with Cloud Data engineering toolset, preferably GCP
Experience with Java/Scala (Nice to have)
Benefits
Hybrid work in one of our locations: Lodz, Poznan, Krakow, Warszawa, Wroclaw (2 office days per week)
Benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
Online training and certifications suited to your career path
Access to e-learning platform Mindgram - a holistic mental health and wellbeing platform
Work From Anywhere (WFA) - the temporary option to work remotely outside of Poland for up to 140 days per year (including Italy, Spain, the UK, Germany, Portugal, and Bulgaria)