Data Engineer optimizing data pipeline architecture for NISC, enhancing data flow and collaboration across teams. Focused on leveraging Databricks technologies for efficient data handling.
Responsibilities
Assemble large, complex data sets that meet functional / non-functional business requirements.
Apply a strong understanding of Data Warehouse and Data Lakehouse paradigms.
Design and build optimal data pipelines from a wide variety of data sources using AWS and Databricks technologies.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and optimizing a unified data stream.
Work with other data engineering experts to strive for greater functionality while making data more discoverable, addressable, trustworthy, and secure.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create and maintain a culture of engagement that is conducive to NISC’s Statement of Shared Values.
Commitment to NISC’s Statement of Shared Values.
Requirements
Experience building and optimizing data pipelines, architectures, and data sets.
Hands-on experience developing and optimizing data pipelines and workflows using Databricks.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building ETL processes that support data transformation, data structures, metadata, dependency management, and workload management.
Working knowledge of message queuing, stream processing, and highly scalable data stores.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience in a Data Engineer role, with a BS or MS degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and experience using the following software/tools:
Experience with AWS: Lambda, S3, SQS, SNS, CloudWatch, etc.
Experience with Databricks and Delta Lake.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Oracle, Postgres, Cassandra, and DynamoDB.
Experience with data pipeline and workflow management tools: Hevo Data, Airflow, etc.
Experience with AWS cloud services such as EC2 and EMR, and with Databricks on AWS.
Experience with stream-processing systems: Spark Structured Streaming, Kafka Streams, Spring Cloud Stream, etc.
Experience with object-oriented languages: Java, Scala.
Nice-to-have: Experience with scripting languages: Python, JavaScript, Bash, etc.
Strong verbal and written communication skills.
Ability to demonstrate composure and think analytically in high-pressure situations.
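To make the pipeline requirements above concrete, here is a minimal, illustrative sketch of the extract-transform-load shape this role centers on. It is plain Python with hypothetical field names (meter_id, reading_kwh), not NISC's actual schemas; a production pipeline here would use Spark on Databricks with Delta Lake rather than in-memory dicts.

```python
# Minimal ETL sketch in plain Python, illustrating the
# extract -> transform -> load flow described in the requirements.
# Field names (meter_id, reading_kwh) are hypothetical examples.
import csv
import io


def extract(raw_csv: str) -> list:
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list) -> list:
    """Drop rows with missing IDs and cast readings to float."""
    out = []
    for row in rows:
        if not row.get("meter_id"):
            continue  # skip unidentifiable records
        out.append({"meter_id": row["meter_id"],
                    "reading_kwh": float(row["reading_kwh"])})
    return out


def load(rows: list, store: dict) -> None:
    """Upsert rows into a keyed store (a stand-in for a Delta table merge)."""
    for row in rows:
        store[row["meter_id"]] = row["reading_kwh"]


raw = "meter_id,reading_kwh\nA1,12.5\n,3.0\nA2,7.25\n"
store = {}
load(transform(extract(raw)), store)
```

The row with a missing `meter_id` is filtered during the transform step, so only the two valid readings land in the store, keyed for idempotent upserts.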
Benefits
Medical, Dental and Vision Insurance.
Health Savings Account (HSA) with $100 monthly contributions from NISC.
Like to walk? Want to improve your overall wellness knowledge? Earn up to an additional $800 into your HSA each year through our Wellness Rewards program.
Dependent Care Flexible Spending Account (FSA) through Paylocity.
Fully covered life insurance up to 3x annual base salary.
Fully covered short- and long-term disability.
401(k), traditional or Roth, with employer match up to 6% plus a 4% base-salary employer contribution.
PTO accrual levels dependent on years of service, 120 Life Leave Event hours, 9 paid holidays and an annual holiday week.
$2,500 interest-free technology loan program.
$25,000 employee educational assistance program.
Volunteer, Wellness, Family Events and other employee fun supplied by our committees.
Employee Assistance Program; assisting employees and dependents with virtually any life event.
Benevolence Committee to support employees facing financial hardships such as unexpected medical bills and funeral expenses.