Data Engineer designing and building core data systems that power research at EIT. Collaborating with teams across healthcare, robotics, agriculture, and AI.
Responsibilities
**The Role:**
Our Data Engineering Team builds the core data systems that power frontier research across EIT. As an early member of our Data Engineering team, you’ll design and build the platforms used by scientists and engineers in fields such as healthcare, robotics, agriculture, and AI. You’ll work alongside our MLOps and Infrastructure teams to create reliable, scalable systems capable of handling large-scale (from TB to PB+), multimodal datasets.
EIT is unique in combining foundational data from diverse disciplines into a single research ecosystem. You’ll help develop the technical foundation that makes this possible: platforms, services, APIs and distributed systems that are robust, observable and easy to work with. This is a role for engineers who think long-term and want to build a platform that will underpin the next generation of scientific and technological discovery.
**Day-to-Day, You Might:**
Design and build distributed data systems that support research across EIT’s scientific domains.
Architect APIs and services for high-throughput, low-latency access to multimodal datasets.
Work with the MLOps and Infrastructure teams, and with data engineers embedded within research teams, to integrate systems into active research workflows.
Develop pipelines for large-scale text, audio, video, imaging, sensor, and structured data on OCI.
Add observability, monitoring, and automated quality checks to ensure the trustworthiness of every dataset.
Contribute to an engineering culture that values maintainability, testing, clear system design, and deep collaboration with our researchers and scientists.
Requirements
**What Makes You a Great Fit:**
You have strong programming experience in Python and SQL, and value code quality, reliability (including testing, CI/CD) and observability as much as performance.
You have experience designing, deploying, and optimising distributed data systems or data-intensive backend services.
You think in terms of systems and longevity, not just one-off ETL scripts, and embrace end-to-end ownership from low-level performance to user interfaces.
You’re a collaborative partner to Infrastructure/Ops teams and researchers, and a clear, respectful communicator.
You have a low-ego, team-first mindset and help grow our engineering culture by mentoring, sharing, and elevating the work of those around you.
**Great to Also Have:**
**Nobody checks every box - if you’re not sure whether you’re qualified, we still encourage you to apply.**
You’re used to working with modern tech stacks and developing for distributed systems, for example Spark/Flink/Kafka, Polars/Arrow, Airflow/Prefect.
You’ve contributed to shared Python libraries used across multiple teams and maintained dependency and packaging standards (e.g. Poetry, pip-tools).
You have experience integrating multimodal datasets (text, video, imaging, sensor data) into unified platforms.
You’ve designed and optimised robust, high-performance APIs for data ingestion/consumption using tools such as FastAPI, gRPC, and GraphQL, and use tools such as Prometheus and OpenTelemetry to maintain SLAs.
You’re curious about database internals, storage engines, and low-latency query processing.
You’ve built web apps and dashboards using tools such as Dash or frameworks like React.
You’ve managed schema evolution, data versioning, and governance in production with tools such as Open Policy Agent and Apache Hive Metastore.
Benefits
**We offer the following salary and benefits:**
Enhanced holiday pay
Pension
Life Assurance
Income Protection
Private Medical Insurance
Hospital Cash Plan
Therapy Services
Perkbox
Electric Car Scheme
**Why work for EIT:**
At the Ellison Institute, we believe a collaborative, inclusive team is key to our success. We are building a supportive environment where creative risks are encouraged, and everyone feels heard. Valuing emotional intelligence, empathy, respect, and resilience, we encourage people to be curious and to have a shared commitment to excellence. Join us and make an impact!