Distributed Training & Inference Optimization Engineer (LLM) - GPU Optimization Department (GPUOD)
Rakuten Crimson House, Japan
Rakuten
Job Description:
Business Overview
AI & Data Division (AIDD) spearheads data science & AI initiatives by leveraging data from Rakuten Group. We build a platform for large-scale field experimentation using cutting-edge technologies, providing critical insights that enable faster and better contributions to our business. Our division boasts an international culture created by talented employees from around the world. Following the strategic vision “Rakuten as a data-driven membership company”, AIDD is expanding its data- and AI-related activities across multiple Rakuten Group companies.
Department Overview
GPU Optimization Department is responsible for the strategic management, optimization, and governance of Rakuten's company-wide AI infrastructure, ensuring high-performance, cost-efficient utilization of compute resources for machine learning workloads. We oversee a large-scale hybrid infrastructure spanning thousands of accelerators, including the latest Hopper and upcoming Blackwell architectures.
As a central enabler for AI innovation, we:
- Optimize compute resource allocation across on-premises and multi-cloud environments, maximizing efficiency for training and inference workloads
- Manage hybrid orchestration of diverse accelerator resources, ensuring seamless scalability and cost-effective deployment
- Develop and enhance frameworks for large-scale distributed training, with special focus on LLMs and generative AI
- Optimize inference performance through model optimization techniques and system-level acceleration
- Collaborate with internal teams to deliver scalable, high-availability inference services tailored to business needs
- Continuously evaluate next-generation hardware solutions, including specialized AI chips optimized for LLM workloads
By effectively managing both conventional and specialized compute resources across on-premises and cloud environments, our team ensures Rakuten's AI ecosystem remains at the forefront of performance, reliability, and cost-efficiency.
Position:
Why We Hire
- Work on cutting-edge LLM training & inference optimization at scale.
- Directly impact Rakuten’s AI infrastructure by improving efficiency and reducing costs.
- Collaborate with global AI/ML teams on high-impact challenges.
- Opportunity to research and implement state-of-the-art GPU optimizations.
Position Details
As a GPU Training & Inference Optimization Engineer, you will focus on maximizing the performance, efficiency, and scalability of LLM training and inference workloads on Rakuten’s GPU clusters. You will deeply optimize training frameworks (e.g., PyTorch, DeepSpeed, FSDP) and inference engines (e.g., vLLM, TensorRT-LLM, Triton, SGLang), ensuring Rakuten’s AI models run at peak efficiency.
This role requires strong expertise in GPU-accelerated ML frameworks, distributed training, and inference optimization, with a focus on reducing training time, improving GPU utilization, and minimizing inference latency.
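For a concrete flavor of the profiling side of this work, the sketch below is illustrative rather than Rakuten production code (the model, batch, and step count are placeholders; it assumes PyTorch and a CUDA-capable GPU). It uses torch.profiler to surface the CUDA kernels that dominate a training step:

import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

model = nn.Linear(4096, 4096).cuda()             # placeholder model
optimizer = torch.optim.AdamW(model.parameters())
batch = torch.randn(32, 4096, device="cuda")     # placeholder batch

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    for _ in range(10):                           # profile a few training steps
        loss = model(batch).float().pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)

# Sort by GPU time to see which kernels dominate the step.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

The same workflow scales to distributed jobs, where NCCL collectives show up in the trace alongside compute kernels.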
Key Responsibilities
- Optimize LLM training frameworks (e.g., PyTorch, DeepSpeed, Megatron-LM, FSDP) to maximize GPU utilization and reduce training time.
- Profile and optimize distributed training bottlenecks (e.g., NCCL issues, CUDA kernel efficiency, communication overhead).
- Implement and tune inference optimizations (e.g., quantization, dynamic batching, KV caching) for low-latency, high-throughput LLM serving (vLLM, TensorRT-LLM, Triton, SGLang); a minimal serving sketch follows this list.
- Collaborate with infrastructure teams to improve GPU cluster scheduling, resource allocation, and fault tolerance for large-scale training jobs.
- Develop benchmarking tools to measure and improve training throughput, memory efficiency, and inference latency.
- Research and apply cutting-edge techniques (e.g., mixture-of-experts, speculative decoding) to optimize LLM performance.
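As a minimal illustration of the serving side referenced above (a hedged sketch, assuming vLLM is installed; the model name is a placeholder), vLLM applies PagedAttention-based KV caching and continuous batching automatically:

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct",   # placeholder model name
          gpu_memory_utilization=0.90)                # VRAM fraction for weights + KV cache
params = SamplingParams(temperature=0.7, max_tokens=128)

# Requests are batched continuously rather than padded to a fixed batch size.
outputs = llm.generate(["What is KV caching?", "Explain dynamic batching."], params)
for out in outputs:
    print(out.outputs[0].text)

Knobs like gpu_memory_utilization, tensor parallelism, and quantized weights are exactly the levers this role is expected to profile and tune.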
Mandatory Qualifications:
- 3+ years of hands-on experience in GPU-accelerated ML training & inference optimization, preferably for LLMs or large-scale deep learning models.
- Deep expertise in PyTorch, DeepSpeed, FSDP, or Megatron-LM, with experience in distributed training optimizations.
- Strong knowledge of LLM inference optimizations (e.g., quantization, pruning, KV caching, continuous batching).
- Bachelor’s or higher degree in Computer Science, Engineering, or related field.
Desired Qualifications:
- Proficiency in CUDA, Triton kernel programming, NVIDIA tooling (Nsight, NCCL), and performance profiling (e.g., PyTorch Profiler, TensorBoard).
- Experience with LLM-specific optimizations (e.g., FlashAttention, PagedAttention, LoRA, speculative decoding); a minimal LoRA sketch follows this list.
- Familiarity with Kubernetes (K8s) for GPU workloads (e.g., KubeFlow, Volcano).
- Contributions to open-source ML frameworks (e.g., PyTorch, DeepSpeed, vLLM).
- Experience with inference serving frameworks (e.g., vLLM, TensorRT-LLM, Triton, Hugging Face TGI).
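For reference, attaching LoRA adapters (one of the techniques named above) takes only a few lines. This is a hedged sketch assuming the Hugging Face transformers and peft libraries, with a placeholder model name:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # placeholder
config = LoraConfig(r=16, lora_alpha=32,
                    target_modules=["q_proj", "v_proj"],   # attention projections
                    lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only the low-rank adapters are trainable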
Languages:
English (Overall - 3 - Advanced)