Staff Data Engineer responsible for data strategy and pipeline management at PayJoy, ensuring data quality and leading engineering best practices for organizational efficiency.
Responsibilities
Architect and Build Data Pipelines: Design, build, optimize, and maintain reliable, scalable, and efficient data pipelines for both batch and real-time processing.
Data Strategy: Develop and maintain a data strategy aligned with business objectives, ensuring data infrastructure supports current and future needs.
Streaming Expertise: Lead the development of real-time ingestion pipelines using Kafka/Kinesis, and design data models optimized for streaming workloads.
Data Quality & Governance: Implement data quality checks, schema evolution, lineage tracking, and compliance using tools such as Unity Catalog and Delta Lake.
Tool & Technology Selection: Evaluate and implement the latest data engineering tools and technologies that will best serve our needs, balancing innovation with practicality.
Automation and CI/CD: Drive automation of pipeline deployments, testing, and monitoring using Terraform, CircleCI, or similar tools.
Performance Tuning: Regularly review, refine, and optimize SQL queries across different systems to maintain peak performance. Identify and address bottlenecks, query performance issues, and resource utilization. Establish best practices and educate developers on performance-conscious habits throughout the software development lifecycle.
Database Administration: Manage and maintain production AWS RDS MySQL, Aurora, and PostgreSQL databases. Perform routine database operations, including backups, restores, and disaster recovery planning. Monitor database health, and diagnose and resolve issues in a timely manner.
Knowledge and Training: Serve as the primary point of contact for database performance and usage knowledge, providing guidance, training, and expertise to other teams and stakeholders.
Monitoring & Troubleshooting: Implement monitoring solutions to ensure high availability and troubleshoot data pipeline issues in real-time.
Documentation: Maintain comprehensive documentation of systems, pipelines, and processes for easy onboarding and collaboration.
Mentorship & Leadership: Mentor other engineers, review PRs, and establish best practices in data engineering.
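To illustrate the data-quality checks mentioned above, a minimal sketch of a batch quality gate is shown below: it rejects a batch when required columns exceed a null-rate threshold. The column names and the 1% default threshold are illustrative assumptions, not PayJoy's actual rules; production pipelines would typically express such expectations in Delta Live Tables or a similar framework.

```python
# Minimal sketch of a batch data-quality gate (illustrative only).
# Real pipelines would encode these expectations in DLT or a QA framework.

def check_batch(rows, required, max_null_rate=0.01):
    """Return (ok, issues) for a list of dict records."""
    issues = []
    if not rows:
        return False, ["empty batch"]
    for col in required:
        missing = sum(1 for r in rows if r.get(col) is None)
        rate = missing / len(rows)
        if rate > max_null_rate:
            issues.append(f"{col}: null rate {rate:.1%}")
    return not issues, issues


batch = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}]
ok, issues = check_batch(batch, required=["id", "amount"])
print(ok, issues)  # -> False ['amount: null rate 50.0%']
```

A gate like this runs before a batch is committed downstream, so bad data is quarantined rather than silently propagated.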
Requirements
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
12+ years of experience in data engineering, including at least 3 years working in Databricks.
Deep hands-on experience with Apache Spark (PySpark/SQL), Delta Lake, and Structured Streaming.
Technical Expertise: Deep understanding of data engineering concepts, including ETL/ELT processes, data warehousing, big data technologies, and cloud platforms (e.g., AWS, Azure, GCP).
Strong proficiency in Python, SQL, and data modeling for both OLTP and OLAP systems.
Architectural Knowledge: Strong experience in designing and implementing data architectures, including real-time data processing, data lakes, and data warehouses.
Tool Proficiency: Hands-on experience with data engineering tools such as Apache Spark, Kafka, Databricks, Airflow, and modern data orchestration frameworks.
Innovation Mindset: A track record of implementing innovative solutions and reimagining data engineering practices.
Experience with Databricks Workflows, Delta Live Tables (DLT), and Unity Catalog.
Familiarity with stream processing patterns (exactly-once semantics, watermarking, checkpointing).
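As a sketch of one of the stream-processing patterns listed above, exactly-once processing is commonly achieved by pairing writes with a durable offset checkpoint, so that replayed records are skipped on restart. The pure-Python example below simulates that idea with hypothetical names (an in-memory `CheckpointStore` standing in for Kafka consumer offsets or a Structured Streaming checkpoint location).

```python
# Minimal sketch of exactly-once processing via offset checkpointing.
# Names are hypothetical; real systems use Kafka offsets or Spark
# Structured Streaming checkpoints plus a transactional/idempotent sink.

class CheckpointStore:
    """Durable record of the last committed offset (in-memory here)."""
    def __init__(self):
        self._offset = -1

    def last_committed(self) -> int:
        return self._offset

    def commit(self, offset: int) -> None:
        self._offset = offset


def consume(events, store, sink):
    """Apply each (offset, payload) at most once past the checkpoint.

    Events at or below the committed offset are treated as replays and
    skipped, so a crash-and-restart that redelivers old records remains
    exactly-once from the sink's point of view.
    """
    for offset, payload in events:
        if offset <= store.last_committed():
            continue  # duplicate from replay; already applied
        sink.append(payload)  # write first...
        store.commit(offset)  # ...then checkpoint the offset


store, sink = CheckpointStore(), []
consume([(0, "a"), (1, "b")], store, sink)
# Simulated restart redelivers offset 1 alongside new data:
consume([(1, "b"), (2, "c")], store, sink)
print(sink)  # -> ['a', 'b', 'c']
```

Committing the offset only after the write succeeds is the key ordering choice: a crash between the two steps causes a redelivery, which the offset check then deduplicates.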
Benefits
100% Company-funded health, dental, and vision insurance for employee and immediate family
Company-funded employee life and disability insurance
3% employer 401k contribution
Company holidays; 20 vacation days; flexible sick leave
Headphones, home-office equipment, and wellness perks.