We are looking for a Senior Deep Learning Engineer to help bring Cosmos World Foundation Models from research into efficient, production-grade systems. You'll focus on optimizing and deploying models for high-performance inference across diverse GPU platforms. This role sits at the intersection of deep learning, systems, and GPU optimization, working closely with research scientists, software engineers, and hardware experts.
NVIDIA Cosmos is a platform purpose-built for physical AI, featuring powerful generative models. Developers use Cosmos to accelerate physical AI development for autonomous vehicles (AVs), robots, and video analytics AI agents by simulating and reasoning about the physical world.
What you'll be doing:
Improve inference speed for Cosmos WFMs on GPU platforms.
Drive the production deployment of Cosmos WFMs.
Profile and analyze deep learning workloads to identify and remove bottlenecks.
What we need to see:
5+ years of relevant engineering experience.
MSc or PhD in CS, EE, or CSEE, or equivalent experience.
Strong background in Deep Learning.
Strong programming skills in Python and PyTorch.
Experience with inference optimization techniques (such as quantization) and with at least one inference optimization framework: TensorRT, TensorRT-LLM, vLLM, or SGLang.
Ways to stand out from the crowd:
Familiarity with deploying Deep Learning models in production settings (e.g., Docker, Triton Inference Server).
CUDA programming experience.
Familiarity with diffusion models.
Proven experience in analyzing, modeling, and tuning the performance of GPU workloads, both inference and training.