About the role

  • Data Engineer to build and support data pipelines and models for analytics and machine learning, working with ADI's Product Data Science team to leverage its extensive data and automate processes.

Responsibilities

  • Designing and implementing data pipelines to extract, transform, and load (ETL) data efficiently from various sources into a central repository or data warehouse
  • Deploying and maintaining scalable production pipelines for AI models, ensuring seamless integration with existing systems, continuous performance monitoring, and iterative improvements based on real-world feedback
  • Working on transformations and contributing to building the dimensional model of ADI product data
  • Building and maintaining scalable and robust data infrastructure, including data lakes and distributed computing systems equipped to handle large volumes of data
  • Implementing and building data solutions using Spark, Python, Databricks, and the AWS ecosystem (S3, Redshift, EMR, Athena, Glue)
  • Implementing data validation and cleaning processes to ensure data quality and monitoring data pipelines for errors or anomalies
  • Optimizing data storage and retrieval processes to enhance performance and reduce latency, particularly through techniques such as indexing, partitioning, and caching
  • Implementing data security and privacy measures, including data encryption, access controls, and compliance with data governance policies and regulations
  • Collaborating with data scientists and data analysts to understand metrics and data requirements and to provide them with access to the necessary data sets and data systems
  • Adapting, learning, growing, and teaching to deliver world-class products and platforms
  • Bringing a curious mind to current business problems and opportunities, and expanding your understanding of all products in the ADI ecosystem

Requirements

  • Strong programming skills in Python
  • Strong expertise in databases and query languages such as SQL and NoSQL
  • Knowledge of data engineering skills such as data management and data visualization, and familiarity with data architecture
  • Knowledge and hands-on experience in building scalable data platforms and reliable data pipelines using technologies such as Spark, Databricks, AWS Kinesis, and/or Kafka
  • Solid understanding of MLOps principles and model versioning
  • Experience working with large volumes of metadata and schemas
  • Hands-on experience in ETL/ELT and data integration
  • Data warehousing knowledge, including data modeling, data security, and data governance understanding
  • Understanding of how the role complements others working in machine learning, data science, algorithms, and business intelligence

Benefits

  • Health package
  • Insurance covering serious illness, surgical intervention, professional illness, and the consequences of an accident
  • Flexible working hours
  • English classes during working hours
  • Employee referral bonus program
  • Corporate social events and team buildings
  • Food and drinks: Free use of coffee machines, free fruit and snacks
  • Well-equipped office

Job title

Data Engineer

Job type

Not specified

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Location requirements

Not specified