Data Engineer developing scalable data pipelines for RunBuggy's automotive logistics platform. Collaborate with cross-functional teams to unlock powerful insights and optimize data infrastructure.
Responsibilities
Design, develop, and maintain scalable data pipelines and systems.
Independently create and own new data capture/ETL pipelines for the entire stack and ensure data quality.
Collaborate with data scientists, engineers, business leaders, and other stakeholders to understand data requirements and provide the necessary infrastructure.
Create and contribute to frameworks that improve the effectiveness of data logging, issue triage, and resolution.
Define and manage Service Level Agreements (SLAs) for all data sets in allocated areas of ownership.
Lead data engineering projects and determine the appropriate tools and libraries for each task.
Implement data security and privacy best practices.
Create and maintain technical documentation for data engineering processes.
Work with cloud-based data storage and processing solutions, including containerized environments (for example, Docker and Kubernetes).
Build out and support a DAG orchestration cluster framework.
Migrate workflows from batch processes to the DAG cluster via concurrent data flows.
Maintain data pipelines, including debugging code, monitoring, and incident response.
Collaborate with engineering to enforce data collection standards and data contracts for APIs, databases, and other sources.
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
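To illustrate the DAG-orchestration responsibilities above, here is a minimal sketch of dependency-ordered task execution using only the Python standard library. The task names (extract, transform, load, audit) are hypothetical placeholders; in practice this role would use a framework such as Airflow or Prefect, as noted in the requirements.

```python
# Minimal sketch of DAG-style task orchestration using only the standard
# library. Task names and bodies are hypothetical placeholders; a real
# deployment would use an orchestrator such as Airflow or Prefect.
from graphlib import TopologicalSorter

# Hypothetical pipeline: transform and audit depend on extract;
# load depends on transform.
dag = {
    "transform": {"extract"},
    "load": {"transform"},
    "audit": {"extract"},
}

def run(task: str, log: list) -> None:
    """Stand-in for real task logic; records execution order."""
    log.append(task)

def execute(dag: dict) -> list:
    """Run every task in dependency order and return the order used."""
    order = []
    for task in TopologicalSorter(dag).static_order():
        run(task, order)
    return order

if __name__ == "__main__":
    # Dependencies guarantee extract runs before transform, load, and audit.
    print(execute(dag))
```

`TopologicalSorter` also exposes `prepare()`/`get_ready()` for running independent tasks concurrently, which mirrors the batch-to-concurrent-DAG migration described above.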
Requirements
Bachelor's degree in Computer Science, Engineering, or a related field required; master’s degree preferred.
5+ years of experience in data engineering.
Proficiency in Python and experience with data engineering libraries (e.g., Pandas).
Experience with ETL processes and tools.
Strong knowledge of relational and non-relational databases.
Experience with cloud platforms (e.g., AWS, GCP, Azure).
Excellent communication skills.
Ability to work independently and lead projects.
Experience with data warehousing solutions.
Familiarity with data visualization tools (e.g., Tableau).
Experience with building and managing DAG clusters (e.g., Airflow, Prefect).
Ability to work with the following: JavaScript, Node.js, AngularJS, Java, and Java Spring Boot.
Knowledge of machine learning and data science workflows.
Ability to handle a variety of duties in a fast-paced environment.
Excellent organizational skills, along with professionalism and diplomacy with internal and external customers/vendors.
Ability to prioritize tasks and manage time.
Ability to work under tight deadlines.
Benefits
Highly competitive medical, dental, vision, Life w/ AD&D, Short-Term Disability insurance, Long-Term Disability insurance, pet insurance, identity theft protection, and a 401(k) retirement savings plan.
Employee wellness program.
Employee rewards, discounts, and recognition programs.
Generous company-paid holidays (12 per year), vacation, and sick time.
Paid paternity/maternity leave.
Monthly connectivity/home office stipend if working from home 5 days a week.
A supportive and positive space for you to grow and expand your career.