Data Engineer building and operating data pipelines for Qloo's platform. Collaborating with cross-functional teams to ensure data integrity and accessibility.
Responsibilities
Design, develop, and maintain batch data pipelines using Python, Spark (EMR), and AWS Glue, loading data from S3, RDS, and external sources into Hive/Athena tables.
Model datasets in our S3/Hive data lake to support analytics (Hex), API use cases, Elasticsearch indexes, and ML models.
Implement and operate workflows in Airflow (MWAA), including dependency management, scheduling, retries, and alerting via Slack.
Build robust data quality and validation checks (schema validation, freshness/volume checks, anomaly detection) and ensure issues are surfaced quickly with monitoring and alerts.
Optimize jobs for cost and performance (partitioning, file formats, join strategies, proper use of EMR/Glue resources).
Collaborate closely with data scientists, ML engineers, and application engineers to understand data requirements and design schemas and pipelines that serve multiple use cases.
Contribute to internal tooling and shared libraries that make working with our data platform faster, safer, and more consistent.
Document pipelines, datasets, and best practices so the broader team can easily understand and work with our data.
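To make the data quality responsibility above concrete, here is a minimal sketch in plain Python of a freshness/volume check of the kind described; the thresholds and the shape of the inputs are illustrative assumptions, not part of Qloo's actual stack:

```python
from datetime import datetime, timedelta, timezone

def check_freshness_and_volume(last_updated, row_count,
                               max_age_hours=24, min_rows=1000):
    """Return a list of human-readable issues for a dataset partition.

    last_updated: timezone-aware datetime of the partition's last load.
    row_count:    number of rows landed in the partition.
    An empty list means the partition passed both checks.
    Thresholds here are hypothetical defaults.
    """
    issues = []
    age = datetime.now(timezone.utc) - last_updated
    if age > timedelta(hours=max_age_hours):
        issues.append(f"stale: last updated {age} ago (limit {max_age_hours}h)")
    if row_count < min_rows:
        issues.append(f"low volume: {row_count} rows (expected >= {min_rows})")
    return issues
```

In a pipeline like the ones described here, a check of this shape would typically run as an Airflow task, with any non-empty result routed to Slack through the DAG's alerting callbacks.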
Requirements
Bachelor’s degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience.
Experience with Python and distributed data processing using Spark (PySpark) on EMR or a similar environment.
Hands-on experience with core AWS data services, ideally including:
• S3 (data lake, partitioning, lifecycle management)
• AWS Glue (jobs, crawlers, catalogs)
• EMR or other managed Spark platforms
• Athena/Hive and SQL for querying large datasets
• Relational databases such as RDS (PostgreSQL/MySQL or similar)
Experience building and operating workflows in Airflow (MWAA experience is a plus).
Strong SQL skills and familiarity with data modeling concepts for analytics and APIs.
Solid understanding of data quality practices (testing, validation frameworks, monitoring/observability).
Comfortable working in a collaborative environment, managing multiple projects, and owning systems end-to-end.
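For candidates gauging what "SQL for querying large datasets" means in an S3/Hive/Athena data lake, the key habit is restricting scans to specific partitions. A hedged sketch, with hypothetical table, column, and partition names (real code would use parameterized queries rather than string formatting):

```python
def partition_pruned_query(table, columns, ds):
    """Build an Athena/Hive query restricted to a single date partition.

    Filtering on the partition column (assumed here to be `ds`) lets the
    engine skip the S3 files of every other partition, which is the main
    lever for cost and latency on large tables. For illustration only:
    production code should bind values safely, not interpolate them.
    """
    cols = ", ".join(columns)
    return f"SELECT {cols} FROM {table} WHERE ds = '{ds}'"
```

For example, `partition_pruned_query("events", ["user_id", "event_type"], "2024-01-01")` scans only the files under that day's partition prefix.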
Benefits
Competitive salary and benefits package, including health insurance, retirement plan, and paid time off.
The opportunity to shape a modern cloud-based data platform that powers real products and ML experiences.
A collaborative, low-ego work environment where your ideas are valued and your contributions are visible.
Flexible work arrangements (remote and hybrid options) and a healthy respect for work-life balance.
Senior Data Engineer supporting an AI-enabled financial compliance initiative with data pipelines and ingestion processes. Collaborating with diverse teams in a mission-critical regulated environment.
Data Architect leading the definition and construction of cloud data architecture for Kyndryl. Participating in significant technological modernization initiatives, focusing on Google Cloud Platform.
Senior Data Engineer driving data intelligence requirements and scalable data solutions for a global consulting firm. Collaborating across functions to enhance Microsoft architecture and analytics capabilities.
Experienced AI Engineer designing and building production-grade agentic AI systems using generative AI and large language models. Collaborating with data engineers and data scientists in a tech-driven company.
Intermediate Data Engineer designing and building data pipelines for travel industry data management. Collaborating across teams to ensure reliable data for analytics and reporting.
Data Engineer managing and organizing datasets for AI models at Walaris, developing AI-driven autonomous systems for defense and security applications.
Data Engineer designing and maintaining data pipelines at Black Semiconductor. Collaborating with process, equipment, and IT teams to support manufacturing analytics and decision-making.
Junior Data Engineer role focusing on Business Intelligence and Big Data at Avanade. Collaborating on data analysis and SQL queries in a supportive learning environment.
GCP Data Engineer designing and developing data processing modules for Ki, an algorithmic insurance carrier. Working closely with multiple teams to optimize data pipelines and reporting.
Data Engineer at Securian Financial optimizing scalable data pipelines for AI and advanced analytics. Collaborating with teams to deliver secure and accessible data solutions.