MLOps Engineer - LLMs

Netherlands - Amsterdam

Prosus

Prosus is the power behind the world's leading lifestyle e-commerce brands.

Join our AI team at Prosus, the largest consumer internet company in Europe and one of the biggest tech investors in the world. You'll be working on the team that drives growth and innovation across the company, with your work directly impacting how millions of people shop online.

Who we’re looking for

A pragmatic MLOps engineer who owns the infrastructure, automated pipelines, and high-performance API services that power our LLM and wider ML work. You think in infrastructure-as-code, automate the boring stuff, and help applied ML teams iterate fast, run lots of experiments, and ship low-latency services with confidence.

What you’ll do

  • Build and maintain end-to-end ML pipelines—data ingestion, processing, distributed training, inference and evaluation.
  • Operate and optimise GPU clusters on Kubernetes (or schedulers such as Slurm) for large-scale fine-tuning and alignment.
  • Deploy and scale open-source and custom LLMs with vLLM, TGI, etc.
  • Design production-grade async API services (e.g., using FastAPI) that add pre/post-processing, business logic and meet tight latency SLAs.
  • Apply inference optimisations—quantisation, continuous batching, PagedAttention—to squeeze out every millisecond of performance.
  • Own CI/CD, experiment tracking, model versioning and observability for all ML systems.
  • Share best practices and lightweight templates so portfolio teams can spin up new pipelines in minutes.
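
As one illustration, the async API work described above can be sketched in plain asyncio (all names here are hypothetical stand-ins; a real service would wrap this in FastAPI and call an actual model backend such as vLLM):

```python
import asyncio

async def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM backend call
    # (in production this would hit, say, a vLLM endpoint).
    await asyncio.sleep(0)  # simulate non-blocking I/O
    return prompt.upper()

async def handle_request(raw: str) -> dict:
    # Pre-processing: validate and normalise input before the model sees it.
    prompt = raw.strip()
    if not prompt:
        return {"error": "empty prompt"}
    # Awaiting the backend lets the event loop serve other requests
    # in the meantime, which is what keeps tail latency low under load.
    completion = await call_model(prompt)
    # Post-processing: apply business logic to the raw completion.
    return {"completion": completion, "chars": len(completion)}

if __name__ == "__main__":
    print(asyncio.run(handle_request("hello")))  # {'completion': 'HELLO', 'chars': 5}
```

The key design point is that the model call is awaited rather than blocking, so one worker process can keep many in-flight requests moving while each waits on the backend.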

Minimum qualifications

  • 3–5+ years in MLOps / DevOps, with a strong focus on supporting demanding ML workloads.
  • Strong Python proficiency, including solid experience building scalable APIs with frameworks like FastAPI.
  • Proven track record of running and managing production ML systems on AWS and/or GCP.
  • Hands-on expertise with modern LLM serving stacks (e.g., vLLM, KServe, TGI).
  • Solid command of Docker and Kubernetes; proficient with IaC tools (e.g., Terraform, Ansible, CloudFormation).
  • Familiarity with the MLOps toolkit for experiment tracking and HPO (e.g., MLflow, Weights & Biases, Optuna, Ray Tune).
  • Practical experience with monitoring and observability (e.g., Prometheus, Grafana, ELK) applied to ML systems, with an eye for stability, performance, and cost.
  • A proactive advocate for MLOps best practices, with experience guiding technical teams in their adoption and effective use of modern tooling.
  • Clear, empathetic communicator skilled at bridging the gap between ML research and production engineering.
  • You thrive in a fast-paced, impact-driven environment that prioritises shipping functional systems.

Preferred qualifications

  • Experience orchestrating hundreds of concurrent training runs with Ray, Argo Workflows or Kubeflow, keeping lineage and reproducibility intact.
  • Experience defining and monitoring SLIs/SLOs and implementing safe A/B or canary roll-outs for new model versions.
  • Knowledge of GPU architectures and their implications for ML inference performance.
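
The canary roll-out idea above can be illustrated with a minimal weighted router (a sketch under assumed names, not how any particular team implements it; real deployments would typically do this at the load balancer or service mesh):

```python
import random

def pick_variant(weights: dict, roll: float) -> str:
    # Route a single request given a dice roll in [0.0, 1.0).
    # weights maps variant name -> fraction of traffic, summing to 1.0.
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if roll < cumulative:
            return variant
    return variant  # fall through if weights round slightly below 1.0

# Start a roll-out at a 95/5 stable/canary split; the canary share is
# raised gradually as its SLIs hold up against the agreed SLOs.
split = {"stable": 0.95, "canary": 0.05}
print(pick_variant(split, random.random()))
```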

What we offer

  • The opportunity to build and manage critical infrastructure and APIs for high-impact AI projects that are strategically vital to the company, with direct visibility and engagement from senior leadership, including the CEO.
  • Access to a state-of-the-art GPU fleet (H200s), massive proprietary datasets, and a wide range of open-source and commercial LLMs (e.g., OpenAI, Anthropic, Google, Together.ai) integrated and served through your platforms.
  • A team of highly experienced colleagues in AI research and engineering who will rely on your expertise to bring their models to life efficiently.
  • Significant autonomy in designing and implementing MLOps solutions and shaping our infrastructure strategy, especially for LLM serving and API development.
  • Modern tooling and infrastructure, including access to leading coding assistants (Copilot, Cursor, Devin).
  • A hybrid work model with a vibrant office in Amsterdam South (with great barista coffee!).
  • Competitive compensation, top-spec MacBook Pro and an environment genuinely built for professional growth and learning.
If you’re passionate about building scalable, resilient infrastructure that empowers the deployment of cutting-edge AI—and want to make a tangible difference on a global scale—let’s talk.

Our Diversity & Inclusion Commitment

We respect the dignity and human rights of individuals and communities wherever we operate in the world. Building an inclusive workplace where everyone feels welcome and can thrive is critical for us. We provide access to education, which helps everyone understand the important role they play and the positive impact they can have.

For a deeper look at our journey and future plans, explore our latest Annual Report. Stay up to date with our latest news to see what makes Prosus stand out. Learn more at www.prosus.com.