Machine Learning Tech Lead Engineer, Multimodal Models – LLM/VLM

Posted 2 weeks ago

About the role

  • Lead dataset curation strategy - designing scalable pipelines for multimodal data to drive high-quality training and validation.
  • Architect and optimize LLMs, VLMs, and VLA models - transforming scene understanding into reliable driving guidance.
  • Own the validation strategy - defining methodologies, metrics, and failure-analysis workflows for robust vision-language-action alignment.
  • Apply deep expertise to guide model design, methodology, and development priorities from day one.
  • Collaborate closely with AV planners, perception teams, and infrastructure engineers to ensure seamless deployment in a real-time ecosystem.
  • Influence the technical direction of language-driven autonomy - contributing expertise, shaping capabilities, and driving innovation into production.

Requirements

  • M.Sc. in Deep Learning, Computer Vision, NLP, or a related field (Ph.D. an advantage).
  • Proven experience with large multimodal or language models (LLMs/VLMs/VLA models) and their real-world integration.
  • At least 7 years of hands-on experience in developing deep learning models.
  • Strong programming skills in Python (C++ is an advantage).
  • Experience with modern DL frameworks (e.g., PyTorch, TensorFlow).

Job title

Machine Learning Tech Lead Engineer, Multimodal Models – LLM/VLM

Job type

Not specified

Experience level

Senior

Salary

Not specified

Degree requirement

Postgraduate Degree

Location requirements

Hybrid