AI Infra Engineer managing Kubernetes clusters and Slurm HPC environments for AI training and inference workloads. Collaborating closely with research teams to optimize performance and improve systems.
Responsibilities
Design, deploy, and maintain scalable Kubernetes clusters for AI model inference and training workloads
Manage and optimize Slurm-based HPC environments for distributed training of large language models
Develop robust APIs and orchestration systems for both training pipelines and inference services
Implement resource scheduling and job management systems across heterogeneous compute environments
Benchmark system performance, diagnose bottlenecks, and implement improvements across both training and inference infrastructure
Build monitoring, alerting, and observability solutions tailored to ML workloads running on Kubernetes and Slurm
Respond swiftly to system outages and collaborate across teams to maintain high uptime for critical training runs and inference services
Optimize cluster utilization and implement autoscaling strategies for dynamic workload demands
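To give a concrete flavor of the Slurm job-management responsibilities above, here is a rough sketch of a multi-node distributed training submission. It is illustrative only: the partition name, resource counts, and `train.py` entry point are placeholders, not details from this posting.

```shell
#!/bin/bash
#SBATCH --job-name=llm-pretrain       # placeholder job name
#SBATCH --nodes=4                     # 4-node distributed run (example)
#SBATCH --ntasks-per-node=1           # one launcher task per node
#SBATCH --gpus-per-node=8             # request all GPUs on each node
#SBATCH --time=72:00:00
#SBATCH --partition=gpu               # placeholder partition name

# Rendezvous endpoint for torchrun: the first node in the allocation.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)
export MASTER_PORT=29500

# One torchrun launcher per node; torchrun spawns one worker per GPU.
srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc-per-node=8 \
  --rdzv-backend=c10d \
  --rdzv-endpoint="$MASTER_ADDR:$MASTER_PORT" \
  train.py
```

In practice, jobs like this are where the scheduling, utilization, and outage-response duties above intersect: a mis-sized resource request or a failed rendezvous is exactly the kind of bottleneck the role is expected to diagnose.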
Requirements
Strong expertise in Kubernetes administration, including custom resource definitions, operators, and cluster management
Hands-on experience with Slurm workload management, including job scheduling, resource allocation, and cluster optimization
Experience with deploying and managing distributed training systems at scale
Deep understanding of container orchestration and distributed systems architecture
High-level familiarity with LLM architectures and training processes (multi-head attention, multi-query/grouped-query attention, distributed training strategies)
Experience managing GPU clusters and optimizing compute resource utilization
Expert-level Kubernetes administration and YAML configuration management
Proficiency with Slurm job scheduling, resource management, and cluster configuration
Python and C++ programming with a focus on systems and infrastructure automation
Hands-on experience with ML frameworks such as PyTorch in distributed training contexts
Strong understanding of networking, storage, and compute resource management for ML workloads
Experience developing APIs and managing distributed systems for both batch and real-time workloads
Solid debugging and monitoring skills with expertise in observability tools for containerized environments
Experience with Kubernetes operators and custom controllers for ML workloads
Advanced Slurm administration, including multi-cluster federation and complex scheduling policies
Familiarity with GPU cluster management and CUDA optimization
Experience with other ML frameworks like TensorFlow or distributed training libraries
Background in HPC environments, parallel computing, and high-performance networking
Knowledge of infrastructure as code (Terraform, Ansible) and GitOps practices
Experience with container registries, image optimization, and multi-stage builds for ML workloads
Demonstrated experience managing large-scale Kubernetes deployments in production environments
Proven track record with Slurm cluster administration and HPC workload management
Previous roles in SRE, DevOps, or Platform Engineering with a focus on ML infrastructure
Experience supporting both long-running training jobs and high-availability inference services
Ideally, 3-5 years of relevant experience in ML systems deployment with specific focus on cluster orchestration and resource management
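The Kubernetes requirements above (administration, YAML configuration, GPU cluster management) can be illustrated with a minimal manifest sketch. The image name and node selector below are placeholders; `nvidia.com/gpu` is the extended resource exposed by the NVIDIA device plugin.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: llm-finetune                 # placeholder job name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: registry.example.com/llm-trainer:latest   # placeholder image
          command: ["torchrun", "--nproc-per-node=8", "train.py"]
          resources:
            limits:
              nvidia.com/gpu: 8      # schedule onto a node with 8 free GPUs
      nodeSelector:
        node.kubernetes.io/instance-type: gpu-node         # placeholder selector
```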
AI/LLM Engineer developing applications leveraging LLMs for Pulsora's sustainability platform. Seeking specialist in AI frameworks, prompt engineering, and full-stack development in hybrid environment.
Principal AI/ML Engineer developing AI/ML algorithms and leading a multidisciplinary team at CACI. Focusing on large language models and applications for defense and commercial use.
Generative AI Engineer focusing on agent systems and robust backend development. Utilize Python, FastAPI, and Google Cloud for advanced AI applications and services.
Principal Generative AI Engineer leading innovative AI solutions for global projects in a consultancy. Collaborating with teams to drive generative AI initiatives and technical direction in various sectors.
Senior/Lead Gen AI/LLM Engineer working with cross-disciplinary teams to prototype AI components for city services. Responsible for coaching and guiding city teams on AI prototypes and solutions.
LLM Engineer solving real problems using LLMs and AI for various industries and products. Work involves development, optimization, and team collaboration across project implementations.
Senior Generative AI Engineer responsible for developing AI demonstrators and leading projects at Alexander Thamm GmbH. Engaging in research, training, and supporting marketing efforts in Germany.
Principal Generative AI Engineer responsible for designing data architectures and Cloud solutions in Switzerland. Involves client advisory on AI strategies and participation in community events.
Senior Generative AI Engineer working on AI solutions and data platforms in Switzerland. Leading technical projects and ensuring quality in generative AI and cloud technologies implementation.
Senior Software Engineer designing machine learning infrastructure for AI-driven analytics. Collaborating with scientists on cutting-edge AI advancements and computer vision models.