Senior DevOps Engineer managing data platform workloads on AWS. Collaborating on Data Mesh architecture and optimizing data pipelines in a hybrid work environment.
Responsibilities
Manage, capacity-plan, and operate workloads running on EC2 clusters via Databricks/EMR to ensure efficient and reliable data processing
Collaborate with stakeholders to design and implement a Data Mesh architecture across multiple closely related but separate enterprise entities
Utilize Infrastructure as Code (IaC) tools such as CloudFormation or Terraform to define and manage data platform user access to data and compute resources.
Implement role-based access control (RBAC) mechanisms using IaC templates to enforce least privilege principles and ensure secure access to data and compute resources
Collaborate with cross-functional teams to design, implement, and optimize data pipelines and workflows
Utilize distributed engines such as Spark to process and analyze large volumes of data efficiently when required
Develop and maintain operational best practices for Spark and other data processing and warehousing tools to ensure system stability and performance
Implement and manage storage technologies to efficiently store and retrieve data as per business requirements
Troubleshoot and resolve platform-related issues in a timely manner to minimize downtime and disruptions
Stay updated on emerging technologies and industry trends to continuously enhance the data platform infrastructure
Document processes, configurations, and changes to ensure comprehensive system documentation.
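As an illustration of the IaC-driven, least-privilege RBAC described in the responsibilities above, here is a minimal sketch in Python (standard library only) of assembling a read-only IAM policy document for a single S3 bucket. The bucket name is hypothetical; in practice such a policy would be declared in CloudFormation or Terraform rather than built by hand.

```python
import json


def make_read_only_s3_policy(bucket: str) -> str:
    """Build a least-privilege IAM policy document granting
    read-only access to a single S3 bucket (illustrative sketch;
    the bucket name is a hypothetical example)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the actions needed to list and read objects.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # the bucket itself
                    f"arn:aws:s3:::{bucket}/*",    # objects in the bucket
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)


print(make_read_only_s3_policy("analytics-raw-data"))
```

The same least-privilege shape (narrow actions, explicit resources, no wildcard `*` permissions) carries over directly to IaC templates.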
Requirements
Knowledge of one or more of the following: **AWS CloudFormation and Terraform** for infrastructure provisioning
Knowledge of source control and related concepts (**GitLab/Git flow, trunk-based development, branching,** etc.)
Familiarity with at least one programming language (**Python, Bash,** etc.)
Familiarity with a distributed compute engine such as **Spark**
Familiarity with a data platform or data orchestration tool such as **Databricks/Airflow**
Equipped with in-depth working knowledge and experience in using **AWS IAM, VPC, EC2, RDS, DynamoDB, DMS,** and **S3**
Experience with CI/CD tools (such as **Jenkins, TeamCity, AWS CodePipeline, CodeDeploy**) or configuration management tools (such as Ansible, Chef, Puppet, etc.)
A DevOps mindset focused on automation and operational excellence
Good skills in English and the ability to communicate effectively with business and technical teams
Demonstrate good logical thinking and problem-solving skills
**Be curious and have a self-learning attitude**
Big Plus:
AWS Data Engineer Associate or DevOps Professional Certifications
You are:
Passionate about technology
Independent but also a team player
Comfortable with a high degree of ambiguity
Focused on usability and speed
Keen on presenting your ideas to your peers and management.
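The requirements above call for familiarity with a distributed compute engine such as Spark. As a toy illustration of the map/reduce style of computation Spark generalizes, here is a pure-Python sketch (Spark itself is not used, and the "partitions" are in-memory lists standing in for distributed data):

```python
from collections import Counter
from functools import reduce

# Toy map/reduce word count: each "partition" is mapped to local
# counts, then the partial results are reduced into one total.
partitions = [
    ["spark", "emr", "spark"],
    ["airflow", "spark"],
]

mapped = [Counter(p) for p in partitions]       # map side: per-partition counts
totals = reduce(lambda a, b: a + b, mapped)     # reduce side: merge partials

print(totals)
```

In Spark the same shape appears as transformations over distributed partitions followed by a shuffle/aggregate step; only the execution engine changes, not the idea.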
Benefits
Meal and parking allowances are covered by the company.
Full benefits and full salary during probation.
Insurance as required by Vietnamese labor law, plus premium health care for you and your family.
SMART goals and clear career opportunities (technical seminars, conferences, and career talks) - we focus on your development.
Values-driven, international working environment, and agile culture.
Overseas travel opportunities for training and work-related purposes.
Internal Hackathons and company events (team building, coffee run, etc.).
Pro-rata and performance bonuses.
15 days of annual leave plus 3 days of sick leave per year.