Working student in AI Engineering focused on LLMs and automation in a tech-driven team. You will work on innovative AI projects and contribute to the development of automated systems.
Responsibilities
You work on internal AI projects focusing on LLMs and RAG — from prototype to production-ready solutions
You support the development and enhancement of automated content and knowledge systems
You develop and optimize data pipelines and integrate external APIs
You design and implement AI workflows to orchestrate data and model processes — ideally using tools like n8n
You structure, process, and prepare internal and external data sources
You implement and test prompting strategies, retrieval logic, and RAG workflows — experience with frameworks such as LangChain, LangGraph, or LlamaIndex is a plus
You integrate AI solutions into existing systems and, ideally, use container technologies such as Docker
You analyze results and continuously improve quality, efficiency, and the level of automation
Requirements
You are enrolled in a degree program in Computer Science, Data Science, Business Informatics, or a comparable technical field
You have strong Python skills and understand data structures, APIs, and software architecture
You have practical experience with LLMs (e.g., personal projects, thesis work, or prior working-student roles)
You are familiar with prompt engineering, API-based LLM integrations, or initial RAG implementations
You have a basic understanding of embeddings, retrieval concepts, and vector databases
Ideally, you have knowledge of databases (e.g., SQL/PostgreSQL) — experience with Docker is a plus
You work in a structured, analytical way and have an interest in production-ready AI systems within a startup environment
You have very good German skills (at least C1) and good English skills
Benefits
A role tailored to you that fits well with your university schedule
Attractive and fair compensation
Work on innovative AI solutions with direct applications in industry and automation
A dynamic, technology-driven team with plenty of room for initiative
Access to modern tools and high-performance computing infrastructure
Flexible working hours
Hybrid work
30 days of vacation (pro-rated)
State-of-the-art office on the technology campus in Munich
Job title
Working Student – AI Engineering, LLMs, RAG, Automation