Data Engineer developing scalable data pipelines for RunBuggy's automotive logistics platform. Collaborate with cross-functional teams to unlock powerful insights and optimize data infrastructure.
Responsibilities
Design, develop, and maintain scalable data pipelines and systems.
Independently create and own new data capture and ETL pipelines across the entire stack, ensuring data quality.
Collaborate with data scientists, engineers, business leaders, and other stakeholders to understand data requirements and provide the necessary infrastructure.
Create and contribute to frameworks that improve the effectiveness of data logging, issue triage, and resolution.
Define and manage Service Level Agreements (SLAs) for all data sets in allocated areas of ownership.
Lead data engineering projects and determine the appropriate tools and libraries for each task.
Implement data security and privacy best practices.
Create and maintain technical documentation for data engineering processes.
Work with cloud-based data storage and processing solutions, as well as containerization and orchestration tools (e.g., Docker and Kubernetes).
Build out and support a DAG orchestration cluster framework.
Migrate workflows from batch processes to the DAG cluster via concurrent data flows.
Maintain data pipelines, including debugging code, monitoring, and incident response.
Collaborate with engineering to enforce data collection and data contracts for APIs, databases, etc.
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
Requirements
Bachelor's degree in Computer Science, Engineering, or a related field required; master’s degree preferred.
5+ years of experience in data engineering.
Proficiency in Python and experience with data engineering libraries (e.g., Pandas).
Experience with ETL processes and tools.
Strong knowledge of relational and non-relational databases.
Experience with cloud platforms (e.g., AWS, GCP, Azure).
Excellent communication skills.
Ability to work independently and lead projects.
Experience with data warehousing solutions.
Familiarity with data visualization tools (e.g., Tableau).
Experience with building and managing DAG clusters (e.g., Airflow, Prefect).
Ability to work with the following: JavaScript, Node.js, AngularJS, Java, and Java Spring Boot.
Knowledge of machine learning and data science workflows.
Ability to handle a variety of duties in a fast-paced environment.
Excellent organizational skills, along with professionalism and diplomacy with internal and external customers/vendors.
Ability to prioritize tasks and manage time.
Ability to work under tight deadlines.
Benefits
Highly competitive medical, dental, vision, Life w/ AD&D, Short-Term Disability insurance, Long-Term Disability insurance, pet insurance, identity theft protection, and a 401(k) retirement savings plan.
Employee wellness program.
Employee rewards, discounts, and recognition programs.
Generous company-paid holidays (12 per year), vacation, and sick time.
Paid paternity/maternity leave.
Monthly connectivity/home office stipend if working from home 5 days a week.
A supportive and positive space for you to grow and expand your career.