Data Engineer with AI/ML at Pulte Mortgage, focused on enhancing a data-driven culture and infrastructure. Responsibilities include designing data pipelines, collaborating with data scientists, and ensuring data quality.
Responsibilities
Design new and improve existing data infrastructures, including the Lakehouse, data warehouses, dataflows, data pipelines, semantic models, and reports
Migrate large-scale data stores from the existing on-premises SQL Server infrastructure to the new Microsoft Fabric-based infrastructure
Classify and organize data based on identified taxonomy structures
Work with our enterprise and data architects to ensure that the data is of high quality and meets the organization’s requirements
Optimize data processing by using modern data engineering tools such as notebooks, dataflows, data pipelines, semantic models, and reports
Provide technical expertise during the design, planning, development, implementation, and testing of digital solutions, often custom developed and integrating new technologies
Understand our technology systems and strategic vision, and help facilitate the technical work needed to produce integrated, end-to-end digital solution options
Experiment and find ways to use AI and ML to improve our processes or deliver business impact
Collaborate with data scientists to productionize ML models and integrate them into data pipelines
Participate in cross-project planning and release planning activities
Write and maintain concise documentation about our development process and major systems
Build scalable, maintainable, easy-to-use software following our development best practices and requirements laid out by the architect and the development team
Collaborate with product owners and end-users to understand any desired business functionality
Regularly review application logs and dashboards to proactively monitor for defects, gauge performance, and troubleshoot production problems
Contribute to Pulte Financial Services’ positive, trusting, inclusive culture and team-first environment
Requirements
Minimum high school diploma or equivalent (GED)
Bachelor's Degree in Computer Science or related field highly preferred
4+ years’ software engineering experience with Python, PySpark, Spark, or equivalent notebook programming
3+ years’ experience with XML, SQL, relational databases, and large data repositories
Experience building solutions within Microsoft's Azure cloud environment, specifically Microsoft Fabric, preferred; alternatively, a willingness to learn and adopt new cloud-native data platforms
Hands-on experience with data platform technologies such as Kafka, Hadoop, or Spark, preferably within the Azure platform, such as HDInsight, Synapse, Data Lake, and Data Factory
Excellent relational database skills, including writing SQL, building ETL processes, analyzing and optimizing query plans, and writing DDL scripts
Passion for data and data quality
Passion for building clean and testable code, creating unit tests, and focusing on code quality
Extensive knowledge of and experience with Power BI or other widely used data solutions
Highly self-motivated and directed with a strong sense of curiosity and drive to accomplish goals and support the data product team
Experience with AI, ML, agents, and other automation tools is a huge plus
Experience with ML frameworks (e.g., scikit-learn, TensorFlow, Azure ML) is a plus
Experience with API and integration concepts
Knowledge of data pipelines, CI/CD concepts, DataOps/MLOps, and general software deployment lifecycles for continuous integration, delivery, and monitoring
Exceptional verbal and written communication and collaboration skills, with the ability to interact effectively with a wide range of technical and non-technical stakeholders
Experience participating in Agile methodologies, particularly Scrum, with a track record of successful product delivery
Benefits
Up to 9 paid company holidays per year
Up to 6 days of sick pay
Up to 17 PTO days per year (and up to 22 PTO days per year upon 10 or more years of service)
Eligible to participate in the Company’s 401(k) Plan
Medical, dental, and vision insurance coverage
Company-paid disability, basic life insurance, and parental leave
Voluntary insurance coverage options, including critical illness, accident, and hospital indemnity