Design and implement LLM‑powered services and agentic workflows (context engineering, memory, tool use/orchestration) for real customer and operations use cases.
Build and maintain evaluation pipelines (datasets, leaderboards, automated and human evals) to ensure quality, safety, and regression control.
Productionize models and model APIs on cloud infrastructure (containerization, CI/CD, feature flags, canary releases, observability, incident response).
Optimize inference for latency, throughput, and cost (batching, caching, prompt/program optimization; use of suitable inference servers).
Implement retrieval and memory systems (vector search, session/long‑term memory) that improve agent reliability over time.
Create data flywheels: instrument applications, collect feedback, curate datasets, run targeted fine‑tunes/adaptations, and close the loop with evals.
Harden systems for security, privacy, and compliance (PII handling, guardrails, content filtering, auditability, documentation).
Improve developer ergonomics (internal tooling, SDKs, prompt libraries, testing harnesses) to speed up safe experimentation.
Collaborate cross‑functionally with product, design, and operations; participate in backlog refinement and share learnings across the stack.
Requirements
Bachelor’s or Master’s in Computer Science/Engineering or equivalent practical experience.
6+ years in software/ML engineering delivering production systems, including at least 2 years of experience working with complex tech infrastructure.
Strong Python and OOP; you write clean, testable, maintainable code.
Experience building backend services/APIs, containers (e.g., Docker), and CI/CD.
Hands-on experience integrating hosted and/or open-source LLMs into applications via SDKs or APIs.
Practical understanding of ML fundamentals (model lifecycle, metrics, failure modes) and experience adapting models or prompts to meet product goals.
Experience with cloud infrastructure on at least one major provider (AWS/Azure/GCP). AWS experience is a plus but not required.
Familiarity with retrieval/vector search and evaluation design (offline tests, task design, human-in-the-loop).
Comfort measuring reliability and performance using any modern observability stack (metrics, logging, tracing).
Benefits
The chance to build production generative AI at global scale across diverse businesses.
A collaborative, supportive environment where innovation and initiative are encouraged.
Modern tooling, mentorship, and clear growth paths across the Prosus portfolio.
A diverse, inclusive workplace where all voices are heard and valued.