Hybrid Senior Machine Learning Engineer, Multimodal Models – LLM/VLM

About the role

  • Own dataset curation activities: acquiring, cleaning, labeling, and tailoring multimodal data to meet model training and validation requirements.
  • Train and fine-tune LLMs, VLMs, and VLA models to interpret visual scenes and produce actionable navigation insights supporting autonomous vehicle decision-making.
  • Lead robust validation of advanced multimodal models, ensuring reliable vision-language-action alignment and consistent performance across diverse real-world driving scenarios.
  • Collaborate closely with AV planners, perception teams, and infrastructure engineers to ensure seamless deployment in a real-time ecosystem.
  • Influence the strategic direction of language-driven autonomy by proposing new ideas, shaping model capabilities, and driving innovation from research to real-world deployment.

Requirements

  • M.Sc. in Deep Learning, Computer Vision, NLP, or a related field (Ph.D. an advantage).
  • At least 5 years of hands-on experience in developing deep learning models.
  • Strong programming skills in Python (C++ is an advantage).
  • Experience with modern DL frameworks (e.g., PyTorch, TensorFlow).
  • Experience with large multimodal or language models (LLMs/VLMs/VLA models) and their real-world integration is an advantage.

Job title

Senior Machine Learning Engineer, Multimodal Models – LLM/VLM

Job type

Experience level

Senior

Salary

Not specified

Degree requirement

Postgraduate Degree

Location requirements

Hybrid
