Data Engineer II designing and building data pipelines for analytics at Honeywell. Collaborating with data scientists and product owners in a hybrid work setting in Charlotte, NC.
Responsibilities
Design & build pipelines to ingest, transform, and publish structured/unstructured data from SFDC, EDW, ADLS, Event Hub, and APIs into Databricks/Snowflake, following Delta Lake and Unity Catalog standards
Model data (star/snowflake, CDC, SCD, dimensional views) to support analytics (e.g., commercial pipeline metrics, quote/discount modeling)
Operationalize ML/analytics pipelines including bronze→silver→gold processing, joins with model/market indicators, and serving outputs to applications/APIs
Harden platforms: CI/CD with Azure DevOps; monitor jobs/clusters; optimize PySpark/SQL performance; enforce data governance (quality, privacy, lineage, access)
Partner & document: collaborate with product owners and data science; write runbooks and technical specs; contribute to weekly updates and stewardship forums
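The data-modeling duties above call out SCD (slowly changing dimension) handling. As a rough illustration only, a Type 2 merge — expire the old row, append a new current version — can be sketched in plain Python; the field names (`account_id`, `tier`) and the `scd2_merge` helper are hypothetical, not part of this role's codebase:

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key, tracked, today=None):
    """Hypothetical SCD Type 2 merge: expire changed current rows
    and append new current versions for new or changed records."""
    today = today or date.today()
    result = [dict(r) for r in dim_rows]          # work on copies
    current = {r[key]: r for r in result if r["is_current"]}

    for rec in incoming:
        old = current.get(rec[key])
        changed = old is not None and any(old[c] != rec[c] for c in tracked)
        if changed:
            # close out the previous version of this business key
            for r in result:
                if r[key] == rec[key] and r["is_current"]:
                    r["is_current"] = False
                    r["valid_to"] = today
        if old is None or changed:
            # append the new current version with an open-ended validity window
            result.append({**rec, "is_current": True,
                           "valid_from": today, "valid_to": None})
    return result

# Example: an account's tier changes from Gold to Platinum
dim = [{"account_id": "A1", "tier": "Gold", "is_current": True,
        "valid_from": date(2024, 1, 1), "valid_to": None}]
inc = [{"account_id": "A1", "tier": "Platinum"}]
out = scd2_merge(dim, inc, key="account_id", tracked=["tier"],
                 today=date(2025, 1, 1))
```

In a Databricks/Delta Lake setting this logic would typically be expressed as a `MERGE INTO` against a Delta table rather than in-memory Python; the sketch only shows the expire-and-append shape of the technique.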
Requirements
Minimum of 4 years of experience in data engineering, ETL, or database development/administration
Hands‑on Azure Databricks, CI/CD & DevOps, and Snowflake experience
Strong Python, SQL, PySpark; comfort with both structured and unstructured data
Experience with Agile delivery
Bachelor’s degree in a technical discipline such as science, technology, engineering, or mathematics
Experience with at least one NoSQL store (e.g., HBase/Cassandra/MongoDB)
Familiarity with Hadoop ecosystem (HDFS, Spark), and data integration/ETL tools
Exposure to ML ops tooling (MLflow), AKS‑backed API services, and integration patterns between Databricks, Snowflake, and application layers
Demonstrated contributions to data quality/stewardship initiatives (lineage, metadata, GDM frameworks)
Clear communication and ability to present technical trade‑offs to stakeholders
Working knowledge of SFDC data model and commercial processes (opportunities, quotes, quote line items)
Benefits
Comprehensive benefits package including employer-subsidized Medical, Dental, Vision, and Life Insurance
Short-Term and Long-Term Disability
401(k) match
Flexible Spending Accounts
Health Savings Accounts
EAP
Educational Assistance
Parental Leave
Paid Time Off (for vacation, personal business, sick time, and parental leave)
Intermediate Data Engineer designing and building data pipelines for travel industry data management. Collaborating across teams to ensure reliable data for analytics and reporting.
Data Engineer managing and organizing datasets for AI models at Walaris, developing AI-driven autonomous systems for defense and security applications.
Data Engineer designing and maintaining data pipelines at Black Semiconductor. Collaborating with process, equipment, and IT teams to support manufacturing analytics and decision-making.
Junior Data Engineer role focusing on Business Intelligence and Big Data at Avanade. Collaborating on data analysis and SQL queries in a supportive learning environment.
GCP Data Engineer designing and developing data processing modules for Ki, an algorithmic insurance carrier. Working closely with multiple teams to optimize data pipelines and reporting.
Data Engineer at Securian Financial optimizing scalable data pipelines for AI and advanced analytics. Collaborating with teams to deliver secure and accessible data solutions.
IT Data Engineering Co‑Op at BlueRock Therapeutics supporting development of scientific data systems. Collaborating on data workflows and foundational AWS data engineering tasks.
Data Engineer I building and operationalizing complex data solutions for Travelers' analytics using Databricks. Collaborating within teams to educate end users and support data governance.
Data Engineer shaping modern data architecture to drive golf’s digital transformation. Collaborating with teams to enhance data pipelines and insights for customer engagement and revenue growth.
Staff Data Engineer overseeing complex data systems for CITY Furniture. Responsible for architecting and optimizing data ecosystems in a hybrid work environment.