Data Engineer III at Taco Bell enriching customer data assets and building data pipelines for marketing and analytics. Collaborating with cross-functional teams in a hybrid work environment.
Responsibilities
Design and develop highly scalable and extensible data pipelines from internal and external sources using cloud technologies such as AWS, Airflow, Redshift, and EMR (see the Airflow sketch after this list).
Implement new source-of-truth datasets in partnership with analytics and business teams.
Collaborate with data product managers, data scientists, data analysts, and data engineers to document requirements and data specifications.
Develop, deploy, and maintain serverless data pipelines using EventBridge, Kinesis, AWS Lambda, S3, and Glue (see the Lambda sketch after this list).
Focus on performance tuning, optimization, and scalability to ensure efficiency.
Build out a robust big data ingestion framework with automation, self-healing capabilities, and the ability to handle data drift.
Adopt automated and manual test strategies to ensure product quality.
Learn and understand how Taco Bell products work and help build end-to-end solutions.
Ensure high operational efficiency and quality of your solutions to meet SLAs.
Actively participate in code reviews and summarize complex data into usable, digestible datasets.
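For illustration, a minimal sketch of the kind of scheduled pipeline described above, written as an Airflow DAG. The DAG id, task names, and schedule are hypothetical, and the sketch assumes Airflow 2.4+ (where the schedule parameter is available); it is an example of the pattern, not the team's actual pipeline.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder extract step; a real task would pull from an internal or external source.
        print("extracting")

    def load():
        # Placeholder load step; a real task would write to Redshift or S3.
        print("loading")

    with DAG(
        dag_id="example_marketing_pipeline",  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task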
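Similarly, a minimal sketch of one serverless step from the EventBridge/Kinesis/Lambda/S3 stack above: a Lambda handler that decodes Kinesis records from its trigger event and lands them in S3 as JSON lines. The bucket name and key layout are hypothetical.

    import base64
    import json

    import boto3

    s3 = boto3.client("s3")
    DEST_BUCKET = "example-ingestion-bucket"  # hypothetical bucket name

    def handler(event, context):
        """Decode Kinesis records from the trigger event and land them in S3."""
        records = []
        for record in event.get("Records", []):
            # Kinesis record payloads arrive base64-encoded.
            payload = base64.b64decode(record["kinesis"]["data"])
            records.append(json.loads(payload))
        if records:
            key = f"landing/{context.aws_request_id}.json"  # hypothetical key layout
            body = "\n".join(json.dumps(r) for r in records)
            s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=body.encode("utf-8"))
        return {"recordCount": len(records)}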
Requirements
Bachelor’s degree in analytics, statistics, engineering, math, economics, computer science, information technology or related discipline
2+ years professional experience in the big data space
2-5 years of experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using modern big data architectures
2+ years of hands-on coding experience with Python/PySpark/Spark and SQL (see the PySpark sketch after this list)
3+ years of hands-on experience with ETL tools such as Informatica and AWS Glue
3+ years of experience working with Redshift or other relevant databases
Expert knowledge of complex SQL and ETL development, with experience processing extremely large datasets
Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
Experience integrating data using streaming technologies such as Kinesis Firehose and Kafka
Experience with the AWS ecosystem, especially Redshift, Athena, DynamoDB, Airflow, and S3
Experience integrating data from multiple sources and file formats such as JSON, Parquet, and Avro
Experience supporting and working with cross-functional teams in a dynamic environment
Strong quantitative and communication skills
Experience with CI/CD and infrastructure-as-code tools such as GitLab and Terraform
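For the PySpark and file-format skills listed above, a minimal sketch of a job that reads raw JSON, deduplicates on a key, and writes partitioned Parquet. The paths and column names are hypothetical stand-ins, not part of the posting.

    from pyspark.sql import SparkSession, functions as F

    SOURCE_PATH = "s3://example-bucket/raw/orders/"      # hypothetical input path
    TARGET_PATH = "s3://example-bucket/curated/orders/"  # hypothetical output path

    spark = SparkSession.builder.appName("orders-curation").getOrCreate()

    # Read semi-structured JSON, deduplicate on a key, and derive a partition column.
    orders = (
        spark.read.json(SOURCE_PATH)
        .dropDuplicates(["order_id"])                      # hypothetical key column
        .withColumn("order_date", F.to_date("order_ts"))   # hypothetical timestamp column
    )

    # Write the curated output as date-partitioned Parquet.
    orders.write.mode("overwrite").partitionBy("order_date").parquet(TARGET_PATH)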
Benefits
Hybrid work schedule (onsite expectation Tues, Wed, Thurs) and year-round flex day Friday
Onsite childcare through Bright Horizons
Onsite dining center and game room (yes, there is a Taco Bell inside the building)
Onsite dry cleaning, laundry services, carwash
Onsite gym with fitness classes and personal trainer sessions
Up to 4 weeks of vacation per year plus holidays and time off for volunteering
Generous parental leave for all new parents and adoption assistance program
401(k) with a 6% matching contribution from Yum! Brands with immediate vesting
Comprehensive medical & dental including prescription drug benefits and 100% preventive care
Discounts, free food, swag and… honestly, too many good benefits to name