Data Engineer responsible for driving technical implementation of data products on Azure Databricks. Collaborates with teams in Midland, MI, or Houston, TX, on critical data initiatives.
Responsibilities
Collaborate with senior data engineers to translate complex business requirements and ambiguous problem statements into clear, robust, and scalable technical designs and data models (e.g., dimensional modeling, star schemas), and independently drive the implementation of these designs.
Design, build, and deploy high-volume data transformation logic using highly optimized PySpark.
Contribute significantly to the design and improvement of CI/CD pipelines in Azure DevOps/Git, ensuring reliable, automated, and secure deployment of data solutions across environments.
Deeply understand and connect to various source systems, demonstrating proficiency in managing data persistence and query performance across diverse technologies like SQL Server, Neo4j, and CosmosDB.
Proactively implement and maintain advanced data quality frameworks (e.g., Delta Live Tables, Great Expectations) and monitoring solutions to ensure data reliability for mission-critical applications.
Serve as a go-to technical resource for peers, conducting technical code reviews and informally mentoring Associate Data Engineers on PySpark and Databricks best practices.
Requirements
A bachelor’s degree; relevant military experience at or above a U.S. E-5 rank, Canadian Petty Officer 2nd Class, or Sergeant; or 5 years of relevant experience in lieu of a bachelor’s degree.
Minimum of 2 years of professional experience in Data Engineering, Software Engineering, or a closely related field.
A minimum requirement for this U.S.-based position is the ability to work legally in the United States.
No visa sponsorship/support is available for this position, including for any type of U.S. permanent residency (green card) process.
Proven ability to write highly optimized, production-grade PySpark/Spark code.
Experience identifying and resolving performance bottlenecks in a distributed computing environment.
Practical experience designing and implementing analytical data models (e.g., dimensional modeling, star/snowflake schemas) and handling Slowly Changing Dimensions (SCDs).
Expertise in using Azure Data Factory (ADF), Databricks Workflows, or equivalent tools (e.g., Airflow) for complex dependency management, error handling, and end-to-end pipeline orchestration.
Demonstrated experience with advanced SQL and hands-on experience querying and integrating data from at least one non-relational or Graph database (e.g., CosmosDB, Neo4j).
Benefits
Equitable and market-competitive base pay and bonus opportunity across our global markets, along with locally relevant incentives.
Benefits and programs to support your physical, mental, financial, and social well-being, to help you get the care you need, when you need it.
Competitive retirement program that may include company-provided benefits, savings opportunities, financial planning, and educational resources to help you achieve your long-term financial goals.
Employee stock purchase programs (availability varies depending on location).
Student Debt Retirement Savings Match Program (U.S. only).
Robust medical and life insurance packages that offer a variety of coverage options to meet your individual needs.
Opportunities to learn and grow through training and mentoring, work experiences, community involvement and team building.
Competitive yearly vacation allowance.
Paid time off for new parents (birthing and non-birthing, including adoptive and foster parents).
Paid time off to care for family members who are sick or injured.
Paid time off to support volunteering and Employee Resource Group (ERG) participation.
Wellbeing Portal for all Dow employees, our one-stop shop to promote wellbeing, empowering employees to take ownership of their entire wellbeing journey.
On-site fitness facilities to help stay healthy and active (availability varies depending on location).
Senior Data Engineer supporting an AI-enabled financial compliance initiative with data pipelines and ingestion processes. Collaborating with diverse teams in a mission-critical regulated environment.
Data Architect leading the definition and construction of cloud data architecture for Kyndryl. Participating in significant technological modernization initiatives, focusing on Google Cloud Platform.
Senior Data Engineer driving data intelligence requirements and scalable data solutions for a global consulting firm. Collaborating across functions to enhance Microsoft architecture and analytics capabilities.
Experienced AI Engineer designing and building production-grade agentic AI systems using generative AI and large language models. Collaborating with data engineers and data scientists in a tech-driven company.
Intermediate Data Engineer designing and building data pipelines for travel industry data management. Collaborating across teams to ensure reliable data for analytics and reporting.
Data Engineer managing and organizing datasets for AI models at Walaris, developing AI-driven autonomous systems for defense and security applications.
Data Engineer designing and maintaining data pipelines at Black Semiconductor. Collaborating with process, equipment, and IT teams to support manufacturing analytics and decision-making.
Junior Data Engineer role focusing on Business Intelligence and Big Data at Avanade. Collaborating on data analysis and SQL queries in a supportive learning environment.
GCP Data Engineer designing and developing data processing modules for Ki, an algorithmic insurance carrier. Working closely with multiple teams to optimize data pipelines and reporting.
Data Engineer at Securian Financial optimizing scalable data pipelines for AI and advanced analytics. Collaborating with teams to deliver secure and accessible data solutions.