Hybrid Senior Data Engineer

Posted last week

About the role

  • As a Senior Data Engineer with strong data engineering expertise, you will deploy EXLdata.ai in client cloud environments, collaborating with client teams on seamless data migration and operational optimization in hybrid settings.

Responsibilities

  • Deploy EXLdata.ai in client-owned AWS/Azure/GCP environments.
  • Configure networking, security, CI/CD, Kubernetes, API gateways, and identity integration.
  • Troubleshoot environment, infra, IAM, and pipeline-related issues.
  • Lead cloud-level optimizations (scaling, cost, performance tuning).
  • Build, customize, and optimize data pipelines using PySpark, SQL, Databricks, Snowflake, or native hyperscaler data services.
  • Integrate platform agents into client workflows (Data Migration, DQ, DataOps, Annotation).
  • Assist client SMEs in onboarding data sources, targets, and transformations.
  • Serve as the technical anchor for first-of-kind deployments at each client.
  • Ensure clients see measurable value from agent-driven automation (SLA reduction, pipeline acceleration, DQ uplift, migration speed).
  • Provide hands-on support across discovery, configuration, runbooks, and UAT.
  • Work with product engineering on integrating new GenAI agents into client pipelines.
  • Tailor agent behaviors, triggers, and workflows for domain-specific use cases.
  • Act as the “voice of the customer” for the EXLdata.ai product team.
  • Identify enhancements, feature gaps, and new accelerator ideas.
  • Participate in internal sprints, tooling improvements, and platform hardening.
  • Support deployments in EXL-hosted private cloud environments.
  • Serve as the first line of operational excellence for premium clients.
  • Lead operational reliability, monitoring, and support SLAs.

Requirements

  • 6–12+ years as a Senior Data Engineer, Forward Deployment Engineer, or Platform Engineer.
  • Strong hands-on experience with at least one hyperscaler (AWS or Azure or GCP).
  • Deep expertise in:
      ◦ PySpark, SQL, Python
      ◦ Databricks / Snowflake (one mandatory, both preferred)
      ◦ Cloud data services (Kinesis, Glue, Redshift, Synapse, BigQuery, Dataproc, etc.)
      ◦ Kubernetes, Docker, CI/CD
      ◦ IAM, VPC, private networking, secrets, API management
  • Demonstrated ability to work directly with client engineering teams.
  • Comfortable running design discussions, debugging sessions, and deployment workshops.
  • Strong communication skills; able to simplify technical topics for business audiences.
  • Ability to operate independently with a consulting mindset and ownership mentality.
  • Exposure to LLMs, agent tooling (LangChain, LangGraph, CrewAI, etc.), or willingness to learn fast.
  • Strong interest in how AI can automate data engineering and governance.
  • “Can-do” attitude; thrives in ambiguity.
  • Fast learner; bias for action.
  • Team player who collaborates across product, engineering, and client teams.
  • Customer-first orientation and passion for delivering measurable outcomes.

Benefits

  • Health insurance
  • 401(k) matching
  • Flexible work hours
  • Professional development opportunities
  • Remote work options

Job title

Senior Data Engineer

Experience level

Senior

Salary

Not specified

Degree requirement

Bachelor's Degree
