Data Engineer at Kyndryl designing and maintaining data pipelines using AWS and Python. Optimizing ingestion, transformation workflows, and cloud solutions for large-scale data environments.
Responsibilities
designing, developing, and maintaining large‑scale data pipelines using Python, PySpark, and SQL
building and optimizing data ingestion, transformation, and processing workflows on AWS using services such as Lambda, Glue, EMR, S3, Athena, DynamoDB, Step Functions, MWAA, EventBridge, SNS/SQS, and Kinesis
implementing secure, scalable, and reliable cloud solutions aligned with AWS best practices through IAM, CloudWatch, CloudTrail, and Secrets Manager
working extensively with modern table formats such as Apache Iceberg, Delta Lake, and Apache Hudi
developing and tuning advanced SQL queries, including Teradata SQL
contributing to scalable, reliable, and cost‑efficient Data Lake and Lakehouse architectures
designing event‑driven and serverless data solutions, including real‑time streaming pipelines
applying strong data modeling principles such as partitioning strategies, schema evolution handling, and metadata management
building and maintaining CI/CD pipelines using GitLab or Jenkins
automating infrastructure provisioning with Terraform
ensuring code quality through unit testing, code reviews, and adherence to engineering best practices
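The partitioning and event-driven ingestion duties above can be sketched in a few lines of Python. This is a minimal illustration only: the `events` table name, the record shape, and the Lambda-style `handler` signature are assumptions for the sketch, not details of Kyndryl's actual pipelines.

```python
from datetime import datetime, timezone


def partition_prefix(table: str, event_time: datetime) -> str:
    """Build a Hive-style partition prefix (year=/month=/day=) so that
    query engines such as Athena can prune partitions at read time."""
    return (
        f"{table}/year={event_time.year:04d}"
        f"/month={event_time.month:02d}"
        f"/day={event_time.day:02d}"
    )


def handler(event: dict) -> list[str]:
    """Lambda-style entry point: map each incoming record to the S3 key
    it should land under. The 'events' table name and the 'records'
    payload shape are placeholders for this sketch."""
    keys = []
    for record in event.get("records", []):
        ts = datetime.fromtimestamp(record["timestamp"], tz=timezone.utc)
        keys.append(f"{partition_prefix('events', ts)}/{record['id']}.json")
    return keys
```

A date-based layout like this is one common partitioning strategy; in practice the partition columns would be chosen to match the dominant query filters.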
Requirements
8+ years of strong hands‑on experience in Python, SQL, and PySpark
Solid understanding of distributed data processing frameworks and cloud‑native architectures
Proven experience working within the AWS data ecosystem
Experience delivering solutions in large‑scale, fast‑paced environments with a strong focus on automation and data quality
Good working knowledge of Data Warehousing (DWH) concepts and Data Lake / Lakehouse architectures
Ability to design and support production‑grade data pipelines end‑to‑end
Basic hands‑on experience with Terraform and CI/CD pipelines
Familiarity with Delta Lake / Iceberg / Hudi
Exposure to real‑time streaming architectures
AWS Data Engineer or Advanced‑level AWS certifications (preferred)
Benefits
flexible, supportive environment
well-being prioritized
personalized development goals aligned with your ambitions
Senior Data Engineer responsible for growing customer-defined targeting calculations and developing key/value databases for real-time data processing.
Data Engineer developing and maintaining the Data Lakehouse platform using Microsoft Azure technology stack at RBC. Collaborating with business and technology teams to enhance data ingestion and modeling processes.
Data Engineer focused on creating a data platform for automated cyber insurance. Collaborating with stakeholders to deliver data processing capabilities and governance.
Data Engineer designing and developing data solutions using AI and machine learning for marketing applications. Collaborating in teams to create impactful data-driven solutions for clients across various industries.
Data Engineer building and maintaining data platform solutions for clients at Dignify. Designing, developing, and optimising data models and pipelines with a focus on Google BigQuery.
Senior Data Engineer developing scalable data solutions for the electric vehicle market at Kempower. Collaborating with cross-functional teams to enhance data engineering processes.
Data Engineer responsible for developing research analytic data infrastructure at Sutter Health. Involves managing data quality, pipelines, and compliance with healthcare regulations.
Senior Data Engineer designing impactful data solutions for clients at Simple Machines. Collaborating with engineers to build data platforms and pipelines in a hybrid workplace.
Journeyman Data Engineer at Leidos supporting DoD enterprise data and analytics. Develop and maintain data pipelines and data models with a focus on national security outcomes.
Senior Data Engineer at Corient designing and maintaining data pipelines for wealth management. Overseeing sprint planning and supporting cross-functional data initiatives to ensure data integrity.