DevOps Engineer building and maintaining cloud infrastructure and data pipelines at Pickle Robot Company. Join the team revolutionizing warehouse automation with robotic unload systems.
Responsibilities
Design cloud architecture and implement it as repeatable Terraform for GCP, including IAM, Artifact Registry, Cloud Run/Compute, load balancers/IAP, and Cloud DNS.
Evolve our GitHub Actions CI/CD pipelines with reusable workflows, intelligent caching, secrets/attestations, and WIF/OIDC integration to GCP.
Level up observability using Prometheus/VictoriaMetrics, Grafana, and Alertmanager, establishing sensible SLOs and quiet, actionable alerts.
Harden the platform with least-privilege IAM, Secret Manager/Vault integration, service-to-service authentication, and software provenance tracking.
Ship slimmer, reproducible Docker images and define base-image policies with automated scanning for vulnerabilities.
Design, build, and maintain robust data pipelines to capture logs and telemetry from robots in the field, enabling real-time monitoring and analysis.
Build small internal tools and CLIs (usually Python) to eliminate toil, such as DNS checks, release helpers, and deployment automation.
Develop and maintain automation scripts to bootstrap laptops, servers, and robot controllers, streamlining deployment and configuration management.
Work cross-functionally with software engineering, hardware, and deployment teams to ensure our robots are operationally ready to serve customers with minimal downtime.
Participate in an on-call rotation to provide support for infrastructure and deployment issues.
Write crisp documentation, PR templates, and lightweight runbooks that people actually use.
Establish monitoring, alerting, and observability systems to proactively identify and resolve issues before they impact customers.
Requirements
5–7+ years in software engineering; you've shipped production systems and have a strong software foundation.
Proficiency with Linux, Git, Docker, and Python for automation and infrastructure management.
Hands-on experience with public cloud infrastructure management (GCP preferred): IAM, compute (Cloud Run or GCE), networking/load balancing, storage, PubSub, and Artifact Registry.
Strong experience creating CI/CD and build pipelines that integrate different providers securely, utilizing technologies such as IAM, service account impersonation, and Workload Identity Federation.
Security-minded with deep understanding of least privilege, secrets management, and OIDC/SAML concepts. SecOps experience is a plus.
Bias to automate and document; comfortable taking a loosely defined task to done with light guidance.
Pragmatic about trade-offs; you can explain the "why," not just the "what."
Familiarity with Robot Operating System (ROS) and the unique challenges of deploying software to robotic systems.
Experience managing a fleet of robotic devices or IoT systems, including remote monitoring, updates, and troubleshooting.
Excellent troubleshooting skills with the ability to diagnose complex issues across distributed systems.
Strong interpersonal and technical communication skills, with the ability to collaborate effectively across engineering, hardware, and deployment teams in a hybrid environment with distributed teammates.
Detail-oriented problem-solver with a strong sense of urgency and willingness to help colleagues and customers in their time of need.
Self-motivated with high ownership mentality; personal responsibility with a collective mindset.
Benefits
health, dental, & vision insurance
unlimited vacation
all federal and state holidays
401(k) contributions of 5% of your salary
travel supplies
other items to make your working life more fun, comfortable, and productive