Data Engineer III at Taco Bell enriching customer data assets and building data pipelines for marketing and analytics. Collaborating on cross-functional teams in a hybrid work environment.
Responsibilities
Design and develop highly scalable and extensible data pipelines from internal and external sources using cloud technologies such as AWS Airflow, Redshift, and EMR.
Implement new source-of-truth datasets in partnership with analytics and business teams.
Collaborate with data product managers, data scientists, data analysts, and data engineers to document requirements and data specifications.
Develop, deploy, and maintain serverless data pipelines using EventBridge, Kinesis, AWS Lambda, S3, and Glue.
Focus on performance tuning, optimization, and scalability to ensure efficiency.
Build out a robust big data ingestion framework with automation, self-healing capabilities, and the ability to handle data drift.
Adopt automated and manual test strategies to ensure product quality.
Learn and understand how Taco Bell products work and help build end-to-end solutions.
Ensure high operational efficiency and quality of your solutions to meet SLAs.
Actively participate in code reviews and summarize complex data into usable, digestible datasets.
Requirements
Bachelor’s degree in analytics, statistics, engineering, math, economics, computer science, information technology or related discipline
2+ years professional experience in the big data space
2-5 years of experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using modern big data architectures
2+ years of hands-on coding experience with Python/PySpark/Spark and SQL
3+ years of hands-on experience with ETL tools such as Informatica and AWS Glue
3+ years of experience working with Redshift or other relevant databases
Expert knowledge of complex SQL and ETL development, with experience processing extremely large datasets
Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
Experience integrating data using streaming technologies such as Kinesis Data Firehose and Kafka
Experience with the AWS ecosystem, especially Redshift, Athena, DynamoDB, Airflow, and S3
Experience integrating data from multiple sources and file formats such as JSON, Parquet, and Avro
Experience supporting and working with cross-functional teams in a dynamic environment
Strong quantitative and communication skills
Experience with CI/CD and infrastructure tools like GitLab and Terraform
Benefits
Hybrid work schedule (onsite expectation Tues, Wed, Thurs) and year-round flex day Friday
Onsite childcare through Bright Horizons
Onsite dining center and game room (yes, there is a Taco Bell inside the building)
Onsite dry cleaning, laundry services, carwash
Onsite gym with fitness classes and personal trainer sessions
Up to 4 weeks of vacation per year plus holidays and time off for volunteering
Generous parental leave for all new parents and adoption assistance program
401(k) with a 6% matching contribution from Yum! Brands with immediate vesting
Comprehensive medical & dental including prescription drug benefits and 100% preventive care
Discounts, free food, swag and… honestly, too many good benefits to name