ML Engineer, Large Language Models (LLM Training & Inference Optimization)
Amsterdam, Netherlands; London, United Kingdom; Remote - Europe
Nebius
Discover the most efficient way to build, tune and run your AI models and applications on top-notch NVIDIA® GPUs.
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
The role
We are an AI R&D team focused on applied research and the development of AI-heavy products. Examples of applied research that we recently published include:
- Investigating how test-time guided search can be used to build more powerful agents [learn more]
- Dramatically scaling task data collection to power reinforcement learning for SWE agents [learn more]
- Maximizing the efficiency of LLM training on agentic trajectories [learn more]
One example of an AI product that we are deeply involved in is Nebius AI Studio, an inference and fine-tuning platform for AI models.
We are currently looking for senior- and staff-level ML engineers to work on optimizing training and inference performance in large-scale, multi-GPU, multi-node setups.
This role will require expertise in distributed systems and high-performance computing to build, optimize, and maintain robust pipelines for training and inference.
Your responsibilities will include:
- Architecting and implementing distributed training and inference pipelines, leveraging techniques such as data, tensor, context, expert (MoE) and pipeline parallelism.
- Implementing various inference optimization techniques: speculative decoding and its extensions (Medusa, EAGLE, etc.), CUDA graphs, and compile-based optimizations.
- Implementing custom CUDA/Triton kernels for performance-critical layers.
We expect you to have:
- A profound understanding of the theoretical foundations of machine learning
- A deep understanding of the performance aspects of training and inference for large neural networks (data/tensor/context/expert parallelism, offloading, custom kernels, hardware features, attention optimizations, dynamic batching, etc.)
- Expertise in at least one of the following fields:
- Implementing custom efficient GPU kernels in CUDA and/or Triton
- Training large models on multiple nodes and implementing various parallelism techniques
- Inference optimization techniques: disaggregated prefill/decode, paged attention, continuous batching, speculative decoding, etc.
- Strong software engineering skills (we mostly use Python)
- Deep experience with modern deep learning frameworks (we use JAX & PyTorch)
- Proficiency in contemporary software engineering approaches, including CI/CD, version control and unit testing
- Strong communication skills and the ability to work independently
Nice to have:
- Familiarity with modern LLM inference frameworks (vLLM, SGLang, TensorRT-LLM, Dynamo)
- Familiarity with important ideas in the LLM space, such as MHA, RoPE, ZeRO/FSDP, Flash Attention and quantization
- Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field; Master’s or PhD preferred
- Track record of building and delivering products (not necessarily ML-related) in a dynamic startup-like environment
- Experience in engineering complex systems, such as large distributed data processing systems or high-load web services
- Open-source projects that showcase your engineering prowess
- Excellent command of the English language, alongside superior writing, articulation, and communication skills
What we offer
- Competitive salary and comprehensive benefits package.
- Opportunities for professional growth within Nebius.
- Hybrid working arrangements.
- A dynamic and collaborative work environment that values initiative and innovation.
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!