Senior/Staff Software Engineer (CUDA Expert)

USA, Durham

Nubank

You, finally in control of your money. Full control of your credit card and a 100% digital account.


About Nu

Nu is the world’s largest digital banking platform outside of Asia, serving over 105 million customers across Brazil, Mexico, and Colombia. The company has been leading an industry transformation by leveraging data and proprietary technology to develop innovative products and services. Guided by its mission to fight complexity and empower people, Nu caters to customers’ complete financial journey, promoting financial access and advancement with responsible lending and transparency. The company is powered by an efficient and scalable business model that combines low cost to serve with growing returns. Nu’s impact has been recognized in multiple awards, including Time 100 Companies, Fast Company’s Most Innovative Companies, and Forbes World’s Best Banks. Learn more: https://international.nubank.com.br/careers/

 

About the role

At Nubank, one of our engineering principles is "Leverage Through Platforms". We believe platforms are an efficient way to solve complex, shared concerns across different products and teams.
The AI Infrastructure Squad within the AI Core BU builds and scales the foundational cloud, data, and AI infrastructure that powers machine learning workloads across the organization. We focus on performance, reliability, and scalability in AI systems, working on everything from training infrastructure to low-latency inference.


As a Software Engineer in the AI Core BU, you will be expected to demonstrate:

  • Deep experience with GPU programming (CUDA, Triton, or OpenCL), with a focus on performance optimization for deep learning workloads.
  • Strong understanding of large language model architectures (e.g., Transformer variants) and experience profiling and tuning their performance.
  • Familiarity with memory management, kernel fusion, quantization, tensor parallelism, and GPU-accelerated inference.
  • Experience with PyTorch internals or custom kernel development for AI workloads.
  • Hands-on knowledge of low-level optimizations in training and inference pipelines, such as FlashAttention, fused ops, and mixed-precision computation (a minimal fused-op sketch follows this list).
  • Proficiency in Python and C++.
  • Familiarity with inference acceleration frameworks like TensorRT, DeepSpeed, vLLM, or ONNX Runtime.
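
For a concrete flavor of the low-level work this list describes, below is a minimal sketch of kernel fusion with mixed precision in CUDA: scale, bias, and ReLU combined into a single kernel that reads fp16 inputs and accumulates in fp32. The kernel and all names in it are illustrative assumptions for this posting, not Nubank code.

// Minimal sketch: fused scale + bias + ReLU in one kernel pass.
// Inputs/outputs are fp16; arithmetic accumulates in fp32.
// Hypothetical example, not Nubank's implementation.
#include <cuda_fp16.h>
#include <cstdio>

__global__ void fused_scale_bias_relu(const __half* __restrict__ x,
                                      const __half* __restrict__ bias,
                                      __half* __restrict__ y,
                                      float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Accumulate in fp32 to limit precision loss, store back as fp16.
        float v = __half2float(x[i]) * scale + __half2float(bias[i]);
        y[i] = __float2half(v > 0.0f ? v : 0.0f);
    }
}

int main() {
    const int n = 1 << 20;
    __half *x, *b, *y;
    cudaMallocManaged(&x, n * sizeof(__half));
    cudaMallocManaged(&b, n * sizeof(__half));
    cudaMallocManaged(&y, n * sizeof(__half));
    for (int i = 0; i < n; ++i) { x[i] = __float2half(1.0f); b[i] = __float2half(-0.5f); }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    fused_scale_bias_relu<<<blocks, threads>>>(x, b, y, 2.0f, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", __half2float(y[0]));  // 1.0 * 2.0 - 0.5 = 1.5
    cudaFree(x); cudaFree(b); cudaFree(y);
    return 0;
}

Fusing the three elementwise steps into one kernel saves two round trips to global memory per element; the same principle, at much larger scale, motivates fused ops such as FlashAttention.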
Project Experience:
  • Demonstrated experience profiling and debugging GPU performance bottlenecks in LLM training or inference pipelines (see the timing sketch after this list).
  • Experience optimizing large-scale ML workloads for throughput, latency, or cost, especially in production or research environments.
  • Experience contributing to or implementing custom GPU kernels for high-impact components (e.g., attention, normalization, or activation layers).
  • Proven ability to work across research and engineering teams to bridge model design and system performance.
  • Experience designing infrastructure that scales across hundreds or thousands of GPUs in cloud or on-prem clusters.
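
As a small illustration of the profiling work mentioned above, here is a hedged sketch of first-pass kernel timing with CUDA events, the kind of measurement that typically precedes a deeper dive with Nsight Systems or Nsight Compute. The kernel and sizes are hypothetical.

// Hypothetical sketch: time a memory-bound kernel with CUDA events
// and estimate effective bandwidth from bytes moved.
#include <cstdio>

__global__ void scale_add(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;  // one read, one write per element
}

int main() {
    const int n = 1 << 24;
    float* d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;

    cudaEventRecord(start);
    scale_add<<<blocks, threads>>>(d, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // Effective bandwidth: one read + one write of n floats.
    double gb = 2.0 * n * sizeof(float) / 1e9;
    printf("kernel: %.3f ms, ~%.1f GB/s\n", ms, gb / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return 0;
}

Comparing the measured bandwidth against the GPU's peak is a quick way to tell whether a kernel like this is memory-bound before investing in deeper optimization.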

 

We’re looking for individuals who are passionate about pushing the boundaries of LLM inference and training performance. In this role, you’ll work in a fast-paced environment, helping to design and scale cutting-edge AI infrastructure. You’ll think like an owner, balancing engineering rigor with practical constraints to deliver impactful systems that support our most ambitious AI workloads.

You’ll collaborate closely with other engineers, share performance learnings across the team, and mentor others as we continuously evolve our platform. We value curiosity and a self-driven mindset — you’ll be encouraged to stay up to date with the latest in AI performance research, GPU architecture advancements, and open-source tooling.

 

What we have to offer

  • High-Impact, Cross-Functional Work – Collaborate with researchers, ML engineers, and infrastructure teams to design systems that support training and inference for the company’s most critical AI models.
  • Cutting-Edge GPU & LLM Optimization – Tackle core performance challenges in LLM serving and training. Dive deep into GPU internals, custom kernels, and distributed execution.
  • Greenfield & Production-Scale Systems – Build both new foundational components (e.g., custom ops, inference runtimes) and improve large-scale infrastructure already powering production AI workloads.
  • Ownership & Growth – Influence architecture, mentor others, and lead technical initiatives with autonomy and visibility.
  • Engineering-Driven Culture – Work in a team that values deep technical work, collaboration, and pragmatic innovation at the edge of AI systems performance.


Our Benefits

  • Remote work, with quarterly trips to São Paulo to build relationships with coworkers.
  • Top Tier Medical Insurance
  • Top Tier Dental and Vision Insurance
  • 20 days of time off, 14 company holidays, and a culture that emphasizes work-life balance.
  • Life Insurance and AD&D
  • Extended maternity and paternity leaves 
  • Nucleo - Our course-based learning platform
  • NuLanguage - Our language learning program
  • NuCare - Our mental health and wellness assistance program
  • 401(k)
  • Savings Plans - Health Savings Account and Flexible Spending Account