Distributed Training & Inference Optimization Engineer (LLM)

Crimson House Singapore

Rakuten Asia Pte Ltd

Rakuten Group, Inc. is a leading global company that contributes to society by creating value through innovation and entrepreneurship.


Job Description:

Situated in the heart of Singapore's Central Business District, Rakuten Asia Pte. Ltd. is Rakuten's Asia Regional headquarters. Established in August 2012 as part of Rakuten's global expansion strategy, Rakuten Asia comprises various businesses that provide essential value-added services to Rakuten's global ecosystem. Through advertisement product development, product strategy, and data management, among others, Rakuten Asia is strengthening Rakuten Group's core competencies to take the lead in an increasingly digitalized world.

AI & Data Division (AIDD) spearheads data science and AI initiatives by leveraging data from across the Rakuten Group. We build a platform for large-scale field experimentation using cutting-edge technologies to provide critical insights that enable faster and better contributions to our business. Our division boasts an international culture created by talented employees from around the world. Following the strategic vision of "Rakuten as a data-driven membership company", AIDD is expanding its data- and AI-related activities across multiple Rakuten Group companies.

As a GPU Training & Inference Optimization Engineer, you will focus on maximizing the performance, efficiency, and scalability of LLM training and inference workloads on Rakuten’s GPU clusters. You will deeply optimize training frameworks (e.g., PyTorch, DeepSpeed, FSDP) and inference engines (e.g., vLLM, TensorRT-LLM, Triton, SGLang), ensuring Rakuten’s AI models run at peak efficiency. This role requires strong expertise in GPU-accelerated ML frameworks, distributed training, and inference optimization, with a focus on reducing training time, improving GPU utilization, and minimizing inference latency.

Key Responsibilities

  • Optimize LLM training frameworks (e.g., PyTorch, DeepSpeed, Megatron-LM, FSDP) to maximize GPU utilization and reduce training time.
  • Profile and optimize distributed training bottlenecks (e.g., NCCL issues, CUDA kernel efficiency, communication overhead).
  • Implement and tune inference optimizations (e.g., quantization, dynamic batching, KV caching) for low-latency, high-throughput LLM serving (vLLM, TensorRT-LLM, Triton, SGLang).
  • Collaborate with infrastructure teams to improve GPU cluster scheduling, resource allocation, and fault tolerance for large-scale training jobs.
  • Develop benchmarking tools to measure and improve training throughput, memory efficiency, and inference latency.
  • Research and apply cutting-edge techniques (e.g., mixture-of-experts, speculative decoding) to optimize LLM performance.
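
As a minimal illustration of one of the optimizations named above, here is a hedged sketch of symmetric per-tensor int8 quantization in plain Python. The helper names are hypothetical; production workloads would use library kernels (e.g., in PyTorch or TensorRT-LLM) rather than Python loops.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]
    using a single scale derived from the largest magnitude."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Reconstruction error is bounded by half a quantization step (scale / 2).
assert max_err <= scale / 2 + 1e-9
```

The same idea, applied per-channel or per-group with calibrated scales, underlies the int8/int4 schemes used to cut LLM memory traffic and serving cost.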

Mandatory Qualifications

  • 3+ years of hands-on experience in GPU-accelerated ML training & inference optimization, preferably for LLMs or large-scale deep learning models.
  • Deep expertise in PyTorch, DeepSpeed, FSDP, or Megatron-LM, with experience in distributed training optimizations.
  • Strong knowledge of LLM inference optimizations (e.g., quantization, pruning, KV caching, continuous batching).
  • Bachelor’s or higher degree in Computer Science, Engineering, or related field.
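
To make the KV-caching qualification concrete, here is a toy per-sequence cache in plain Python (illustrative only; class and function names are hypothetical). It shows the core idea: each decode step appends one key/value pair so earlier positions are reused, never recomputed. Real engines such as vLLM manage this in fixed-size paged GPU memory blocks.

```python
class KVCache:
    """Toy per-sequence key/value cache: one (k, v) pair appended per
    decode step; earlier positions are cached, not recomputed."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)

def decode_step(cache, new_k, new_v):
    """One autoregressive step: add the new position, then 'attend' over
    all cached positions (a uniform average stands in for attention)."""
    cache.append(new_k, new_v)
    return sum(cache.values) / len(cache)

cache = KVCache()
outputs = [decode_step(cache, float(t), float(t)) for t in range(4)]
# outputs == [0.0, 0.5, 1.0, 1.5]; the cache holds 4 entries, and no
# earlier position was ever recomputed.
```

Without the cache, step *t* would recompute keys and values for all *t* prior positions, turning linear decode cost into quadratic cost.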

Nice-to-Have Skills

  • Proficiency in CUDA, Triton kernel development, NVIDIA tooling (Nsight, NCCL), and performance profiling (e.g., PyTorch Profiler, TensorBoard).
  • Experience with LLM-specific optimizations (e.g., FlashAttention, PagedAttention, LoRA, speculative decoding).
  • Familiarity with Kubernetes (K8s) for GPU workloads (e.g., KubeFlow, Volcano).
  • Contributions to open-source ML frameworks (e.g., PyTorch, DeepSpeed, vLLM).
  • Experience with inference serving frameworks (e.g., vLLM, TensorRT-LLM, Triton, Hugging Face TGI).
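
Speculative decoding, listed above, can be sketched in a few lines of plain Python. This is a toy under simplifying assumptions (deterministic greedy "models" over integer tokens, no probability-based acceptance); `draft_fn` and `target_fn` are hypothetical next-token functions, not a real API.

```python
def speculative_step(draft_fn, target_fn, context, k=4):
    """Toy speculative decoding: a cheap draft model proposes k tokens;
    the target model verifies them position by position, keeping the
    matching prefix and substituting its own token at the first mismatch."""
    ctx = list(context)
    proposals = []
    for _ in range(k):
        t = draft_fn(ctx)
        proposals.append(t)
        ctx.append(t)

    accepted, vctx = [], list(context)
    for t in proposals:
        verified = target_fn(vctx)
        if verified == t:
            accepted.append(t)         # draft token confirmed
            vctx.append(t)
        else:
            accepted.append(verified)  # target's correction; stop here
            break
    return accepted

# Target always emits last+1; the draft agrees until the token value 3.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if ctx[-1] < 3 else 0
result = speculative_step(draft, target, context=[0], k=4)
# result == [1, 2, 3, 4]: three draft tokens accepted, one corrected.
```

The payoff in real systems is that the expensive target model verifies k draft tokens in a single forward pass, so several tokens can be emitted per target-model invocation.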

Why Join Us?

  • Work on cutting-edge LLM training & inference optimization at scale.
  • Directly impact Rakuten’s AI infrastructure by improving efficiency and reducing costs.
  • Collaborate with global AI/ML teams on high-impact challenges.
  • Opportunity to research and implement state-of-the-art GPU optimizations.

Rakuten is an equal opportunities employer and welcomes applications regardless of sex, marital status, ethnic origin, sexual orientation, religious belief, or age.





Perks/benefits: Career development

Region: Asia/Pacific
Country: Singapore
