About the role

  • Data Engineer developing scalable data pipelines for RunBuggy's automotive logistics platform. Collaborate with cross-functional teams to deliver insights and optimize data infrastructure.

Responsibilities

  • Design, develop, and maintain scalable data pipelines and systems.
  • Independently create and own new data capture/ETL pipelines across the entire stack and ensure data quality.
  • Collaborate with data scientists, engineers, business leaders, and other stakeholders to understand data requirements and provide the necessary infrastructure.
  • Create and contribute to frameworks that improve the effectiveness of data logging, issue triage, and resolution.
  • Define and manage Service Level Agreements (SLAs) for all data sets in allocated areas of ownership.
  • Lead data engineering projects and determine the appropriate tools and libraries for each task.
  • Implement data security and privacy best practices.
  • Create and maintain technical documentation for data engineering processes.
  • Work with cloud-based data storage and processing solutions and containerization tools (e.g., Docker and Kubernetes).
  • Build out and support a DAG orchestration cluster framework.
  • Migrate workflows from batch processes to the DAG cluster via concurrent data flows.
  • Maintain data pipelines, including debugging code, monitoring, and incident response.
  • Collaborate with engineering to enforce data collection and data contracts for APIs, databases, etc.
  • Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field required; master’s degree preferred.
  • 5+ years of experience in data engineering.
  • Proficiency in Python and experience with data engineering libraries (e.g., Pandas).
  • Experience with ETL processes and tools.
  • Strong knowledge of relational and non-relational databases.
  • Experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Excellent communication skills.
  • Ability to work independently and lead projects.
  • Experience with data warehousing solutions.
  • Familiarity with data visualization tools (e.g., Tableau).
  • Experience building and managing DAG orchestration clusters (e.g., Airflow, Prefect).
  • Ability to work with JavaScript, Node.js, AngularJS, Java, and Spring Boot.
  • Knowledge of machine learning and data science workflows.
  • Ability to handle a variety of duties in a fast-paced environment.
  • Excellent organizational skills, along with professionalism and diplomacy with internal and external customers/vendors.
  • Ability to prioritize tasks and manage time.
  • Ability to work under tight deadlines.

Benefits

  • Highly competitive medical, dental, vision, life with AD&D, short-term disability, long-term disability, pet insurance, identity theft protection, and a 401(k) retirement savings plan.
  • Employee wellness program.
  • Employee rewards, discounts, and recognition programs.
  • Generous company-paid holidays (12 per year), vacation, and sick time.
  • Paid paternity/maternity leave.
  • Monthly connectivity/home office stipend if working from home 5 days a week.
  • A supportive and positive space for you to grow and expand your career.

Job title

Data Engineer

Job type

Experience level

Mid level, Senior

Salary

$140,000 - $170,000 per year

Degree requirement

Bachelor's Degree

Location requirements
