Senior AI Researcher, Multimodal Perception Models (Hybrid)

About the role

  • Lead research on Foundational Multimodal Models for Conversational Avatars — systems that can perceive, reason, and generate across video, audio, and language.
  • Build and train models using Autoregressive, Predictive (e.g., V-JEPA), and Diffusion-based architectures with a deep focus on temporal and sequential data (not static frames).
  • Design and execute experiments to predict and control the visual, auditory, and linguistic responses of avatars.
  • Partner with the Applied ML team to bring research into real-world use cases.
  • Mentor other researchers and drive excellence across the team.

Requirements

  • A PhD and 2–3+ years of hands-on experience with LLMs, VLMs, or multimodal systems.
  • Previous experience leading research efforts or mentoring teams.
  • Expertise in sequence modeling across video, audio, and text — with strong understanding of autoregressive, predictive, and diffusion frameworks.
  • Experience with large-scale model training and optimization for performance and real-time generation.
  • Proven ability to translate research ideas into production-grade systems.
  • Publications in top-tier venues (CVPR, ICCV, NeurIPS, ECCV, ACMMM).
  • Strong PyTorch skills and comfort moving fluidly between research and engineering.

Benefits

  • Flexible work schedule
  • Unlimited PTO
  • Competitive healthcare
  • Gear stipends

Job title

Senior AI Researcher, Multimodal Perception Models

Job type

Not specified

Experience level

Senior

Salary

Not specified

Degree requirement

Postgraduate Degree

Tech skills

PyTorch

Location requirements

Hybrid
