Hybrid Senior Data Architect

About the role

  • A Data Platform Architect responsible for modernizing Welocalize’s data infrastructure and analytics environment, driving technical standards while collaborating closely with engineering and governance leads.

Responsibilities

  • Rapidly gain fluency with the current-state data infrastructure and absorb the design and assessment materials produced by prior consulting work, translating them into actionable technical and sequencing plans.
  • Proactively engage business stakeholders (Finance, Operational Leadership, FP&A) to understand high-priority business intelligence, reporting, and financial reconciliation needs.
  • Define, model, and translate complex business requirements into concrete data pipelines and robust transformation logic within dbt, ensuring data quality, consistency, and fitness-for-purpose for Power BI consumption.
  • Lead the architectural design and implementation of data modeling, transformation, and orchestration standards using dbt, Redshift, Estuary, and Power BI.
  • Partner with the Data Engineering Leader to plan and sequence modernization workstreams and ensure technical execution aligns with architectural standards. Additional India-based engineers will be assigned to support this initiative.
  • Contribute directly to core dbt model and test development, ensuring speed, architectural quality, and maintainability.
  • Collaborate with the Director of Data Quality & Governance to embed data quality, lineage, and governance frameworks into the platform’s design, including automated data quality testing, proactive monitoring, and standardized issue resolution workflows.
  • Explicitly design transformation logic to reconcile financial and operational data discrepancies and establish a Single Source of Truth for key metrics and metadata elements; an illustrative reconciliation check appears after this list.
  • Drive the migration and modernization of legacy workflows (Matillion, Power BI DAX) into version-controlled, tested, and documented dbt models.
  • Establish and enforce best practices for Git-based workflows, CI/CD pipelines, and documentation across the data platform. Ensure the platform’s ongoing evolution remains structured, well-documented, and responsive to business analytical needs.
  • Own the framework and standards for platform documentation, ensuring models, transformations, and dependencies are consistently described, discoverable, and integrated with governance processes. Leverage automation and AI-assisted tooling (e.g., dbt auto-docs, lineage visualization, metadata capture) to streamline and scale documentation, lineage tracking, and quality reporting; a sketch of one such automated check appears after this list.
  • Mentor and upskill engineers and analysts supporting the initiative, embedding platform standards and reducing key-person risk.
  • Architect the semantic model to maximize discoverability, optimize performance, reduce Power BI Premium capacity strain, and enable self-service analytics for business users.
  • Develop and maintain metadata documentation that clearly defines dimensions and metrics in ways that are easy for other members of the data, operations, and business teams to understand.
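
To make the documentation and automated-testing responsibilities above concrete, here is a minimal sketch (not part of the original posting) of a check that could run in CI to flag undocumented or untested dbt models. It assumes dbt's compiled target/manifest.json artifact; the project path and the non-zero-exit convention are illustrative assumptions.

```python
"""Minimal sketch: flag dbt models that lack descriptions or tests.

Reads dbt's compiled artifact at target/manifest.json; the path and the
exit-code convention are assumptions, not requirements of this role.
"""
import json
import sys
from pathlib import Path

MANIFEST = Path("target/manifest.json")  # produced by `dbt compile` or `dbt build`


def audit_models(manifest_path: Path) -> list[str]:
    manifest = json.loads(manifest_path.read_text())
    nodes = manifest["nodes"]

    # Models, plus the set of model unique_ids that at least one test depends on.
    models = {uid: node for uid, node in nodes.items() if node["resource_type"] == "model"}
    tested = {
        dep
        for node in nodes.values()
        if node["resource_type"] == "test"
        for dep in node.get("depends_on", {}).get("nodes", [])
    }

    problems = []
    for uid, model in models.items():
        if not model.get("description", "").strip():
            problems.append(f"{model['name']}: missing description")
        if uid not in tested:
            problems.append(f"{model['name']}: no tests reference this model")
    return problems


if __name__ == "__main__":
    issues = audit_models(MANIFEST)
    for issue in issues:
        print(issue)
    # Non-zero exit lets the same script gate pull requests in CI.
    sys.exit(1 if issues else 0)
```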
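
The reconciliation responsibility above lends itself to a similar illustration. In practice this logic would live in dbt SQL models and tests; the Python sketch below only shows the comparison, and the table and column names (invoice_id, amount) are hypothetical placeholders rather than actual Welocalize schema objects.

```python
"""Minimal sketch: reconcile financial vs. operational totals per invoice.

Column names and the tolerance are hypothetical placeholders.
"""
import pandas as pd

TOLERANCE = 0.01  # illustrative absolute difference allowed before flagging


def reconcile(finance_df: pd.DataFrame, ops_df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per invoice with its status: matched, mismatched, or missing."""
    merged = finance_df.merge(
        ops_df,
        on="invoice_id",
        how="outer",
        suffixes=("_finance", "_ops"),
        indicator=True,  # records which side(s) each invoice came from
    )

    def status(row: pd.Series) -> str:
        if row["_merge"] == "left_only":
            return "missing_in_ops"
        if row["_merge"] == "right_only":
            return "missing_in_finance"
        diff = abs(row["amount_finance"] - row["amount_ops"])
        return "matched" if diff <= TOLERANCE else "mismatched"

    merged["status"] = merged.apply(status, axis=1)
    return merged[["invoice_id", "amount_finance", "amount_ops", "status"]]


if __name__ == "__main__":
    # Tiny made-up example: invoice 2 mismatches, invoices 3 and 4 are one-sided.
    finance = pd.DataFrame({"invoice_id": [1, 2, 3], "amount": [100.0, 250.0, 80.0]})
    ops = pd.DataFrame({"invoice_id": [1, 2, 4], "amount": [100.0, 245.0, 60.0]})
    print(reconcile(finance, ops))
```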

Requirements

  • Bachelor’s degree in Computer Science, Data Science, Information Systems, or a related field is strongly preferred; equivalent experience is acceptable. A Master’s degree or advanced certification in data science, data engineering, or architecture is a plus.
  • 7–10 years of progressive experience in data engineering, data architecture, or related technical disciplines, including at least 3 years leading or designing cloud-based data platform implementations (e.g., Redshift, Snowflake, BigQuery).
  • Demonstrated success designing, implementing, and optimizing modern data stacks that include automated testing, documentation, and governance capabilities.
  • Deep expertise in data modeling and transformation using dbt (or equivalent frameworks).
  • Expert-level proficiency in SQL and data warehouse design principles.
  • Proven ability to translate abstract business requirements into scalable, production-ready data models and dbt transformation logic.
  • Strong proficiency in Python for automation, orchestration, and integration tasks related to data pipelines, testing, and CI/CD; an illustrative freshness check appears after this list.
  • Experience with real-time and batch ingestion tools such as Estuary, Fivetran, or Airbyte.
  • Proficiency with Git-based development workflows, including branching, pull requests, and code reviews.
  • Hands-on experience implementing CI/CD pipelines for data or analytics environments.
  • Proven ability to lead complex technical initiatives and establish scalable engineering and documentation standards without formal managerial authority.
  • Experience mentoring engineers and promoting consistent development best practices.
  • Strong communication and collaboration skills, with the ability to engage both technical and business stakeholders.
  • Familiarity with the AWS data ecosystem (S3, Redshift, Glue, IAM) or comparable cloud platforms.
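
As a simplified illustration of the Python automation and CI/CD skills listed above, the sketch below runs a data-freshness check against Redshift that could sit in a scheduled monitoring or pipeline job. It assumes the redshift_connector driver (amazon-redshift-python-driver); the connection environment variables, table name, and 24-hour threshold are placeholders, not details from this posting.

```python
"""Minimal sketch: a scheduled data-freshness check against Redshift.

Assumes the `redshift_connector` driver; host/database/table names and the
24-hour threshold are placeholders.
"""
import os
import sys

import redshift_connector

FRESHNESS_SQL = """
    select datediff(hour, max(loaded_at), getdate()) as hours_stale
    from analytics.fct_orders  -- placeholder table with a load timestamp
"""
MAX_HOURS_STALE = 24  # illustrative SLA


def check_freshness() -> int:
    conn = redshift_connector.connect(
        host=os.environ["REDSHIFT_HOST"],
        database=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        cursor = conn.cursor()
        cursor.execute(FRESHNESS_SQL)
        row = cursor.fetchone()
        return int(row[0])
    finally:
        conn.close()


if __name__ == "__main__":
    stale = check_freshness()
    print(f"fct_orders is {stale} hour(s) behind")
    # Non-zero exit fails the pipeline or alerting job when data is stale.
    sys.exit(1 if stale > MAX_HOURS_STALE else 0)
```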

Job title

Senior Data Architect

Job type

Not specified

Experience level

Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

Hybrid