Data Platform Architect responsible for modernizing Welocalize’s data infrastructure and analytics environment. Driving technical standards while collaborating closely with engineering and governance leads.
Responsibilities
Rapidly gain fluency with the current-state data infrastructure and absorb the design and assessment materials produced by prior consulting work, translating them into actionable technical and sequencing plans.
Proactively engage business stakeholders (Finance, Operational Leadership, FP&A) to understand high-priority business intelligence, reporting, and financial reconciliation needs.
Define, model, and translate complex business requirements into concrete data pipelines and robust transformation logic within dbt, ensuring data quality, consistency, and fitness-for-purpose for Power BI consumption.
Lead the architectural design and implementation of data modeling, transformation, and orchestration standards using dbt, Redshift, Estuary, and Power BI.
Partner with the Data Engineering Leader to plan and sequence modernization workstreams and ensure technical execution aligns with architectural standards. Additional India-based engineers will be deployed to support this initiative.
Contribute directly to core dbt model and test development, ensuring speed, architectural quality, and maintainability.
Collaborate with the Director of Data Quality & Governance to embed data quality, lineage, and governance frameworks into the platform’s design—including the enablement of automated data quality testing, proactive monitoring, and standardized issue resolution workflows.
Explicitly design transformation logic to reconcile financial and operational data discrepancies and establish a Single Source of Truth for key metrics and metadata elements.
Drive the migration and modernization of legacy workflows (Matillion, Power BI DAX) into version-controlled, tested, and documented dbt models.
Establish and enforce best practices for Git-based workflows, CI/CD pipelines, and documentation across the data platform. Ensure the platform’s ongoing evolution remains structured, well-documented, and responsive to business analytical needs.
Own the framework and standards for platform documentation, ensuring models, transformations, and dependencies are consistently described, discoverable, and integrated with governance processes. Leverage automation and AI-assisted tooling (e.g., dbt auto-docs, lineage visualization, metadata capture) to streamline and scale documentation, lineage tracking, and quality reporting.
Mentor and upskill engineers and analysts supporting the initiative, embedding platform standards and reducing key-person risk.
Architect the semantic model to maximize discoverability, optimize performance, reduce Power BI Premium capacity strain, and enable self-service analytics for business users.
Develop and maintain metadata documentation that clearly defines dimensions and metrics in ways that are easy for other members of the data, operations, and business teams to understand.
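To give a flavor of the automated data quality testing referenced above, the sketch below re-expresses dbt's built-in not_null and unique schema tests as plain Python checks over rows. This is an illustration only, not project code; the table and column names (orders, order_id, amount) are hypothetical.

```python
# Illustrative stand-ins for dbt's not_null and unique schema tests,
# written as plain Python checks over a list of row dicts.

def not_null(rows, column):
    """Return the rows where `column` is NULL; an empty list means the test passes."""
    return [r for r in rows if r.get(column) is None]

def unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

# Hypothetical sample data with one NULL and one duplicate key.
orders = [
    {"order_id": 1, "amount": 100.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 50.0},
]

print(not_null(orders, "amount"))   # one failing row: order_id 2 has a NULL amount
print(unique(orders, "order_id"))   # [2] — order_id 2 is duplicated
```

In a dbt project these checks would instead be declared in a model's YAML schema file and run via `dbt test`; the point here is only the shape of the assertions being automated.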
Requirements
Bachelor’s degree in Computer Science, Data Science, Information Systems, or a related field strongly preferred; equivalent experience acceptable. Master’s degree or advanced certification in data science, data engineering, or data architecture is a plus.
7–10 years of progressive experience in data engineering, data architecture, or related technical disciplines, including at least 3 years leading or designing cloud-based data platform implementations (e.g., Redshift, Snowflake, BigQuery).
Demonstrated success designing, implementing, and optimizing modern data stacks that include automated testing, documentation, and governance capabilities.
Deep expertise in data modeling and transformation using dbt (or equivalent frameworks).
Expert-level proficiency in SQL and data warehouse design principles.
Proven ability to translate abstract business requirements into scalable, production-ready data models and dbt transformation logic.
Strong proficiency in Python for automation, orchestration, and integration tasks related to data pipelines, testing, and CI/CD.
Experience with real-time and batch ingestion tools such as Estuary, Fivetran, or Airbyte.
Proficiency with Git-based development workflows, including branching, pull requests, and code reviews.
Hands-on experience implementing CI/CD pipelines for data or analytics environments.
Proven ability to lead complex technical initiatives and establish scalable engineering and documentation standards without formal managerial authority.
Experience mentoring engineers and promoting consistent development best practices.
Strong communication and collaboration skills, with the ability to engage both technical and business stakeholders.
Familiarity with AWS data ecosystem (S3, Redshift, Glue, IAM) or comparable cloud platforms.
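The financial/operational reconciliation responsibility described earlier can be sketched as a simple tolerance check between per-period totals from two systems. This is a minimal, hypothetical illustration of the idea; in practice the logic would live in dbt transformations, and the keys, amounts, and tolerance shown here are invented.

```python
from decimal import Decimal

def reconcile(finance_totals, ops_totals, tolerance=Decimal("0.01")):
    """Compare per-key totals from two source systems.

    Returns a dict of keys whose finance-minus-ops difference exceeds
    `tolerance`, mapped to the discrepancy amount.
    """
    breaks = {}
    for key in finance_totals.keys() | ops_totals.keys():
        diff = finance_totals.get(key, Decimal("0")) - ops_totals.get(key, Decimal("0"))
        if abs(diff) > tolerance:
            breaks[key] = diff
    return breaks

# Hypothetical monthly totals from the finance and operational systems.
finance = {"2024-01": Decimal("1000.00"), "2024-02": Decimal("2000.00")}
ops     = {"2024-01": Decimal("1000.00"), "2024-02": Decimal("1995.50")}

print(reconcile(finance, ops))  # 2024-02 is out of balance by 4.50
```

Decimal is used rather than float so that monetary comparisons are exact; the same check is straightforward to express as a SQL test over two aggregated dbt models.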