Senior Machine Learning Engineer - Hardware Abstractions & Performance Optimization

Palo Alto

Luma AI

Ideate, visualize, create videos, and share your dreams with the world, using our most powerful image and video AI models.

Responsibilities

  • Ensure efficient implementation of models & systems with a focus on designing, maintaining, and writing abstractions that scale beyond NVIDIA/CUDA hardware.

  • Identify and remedy efficiency bottlenecks (memory, speed, utilization, communication) by profiling and implementing high-performance PyTorch code, dropping down to Triton or similar kernel-level languages as necessary (see the profiling sketch after this list).

  • Benchmark our products across a variety of hardware & software to help the product team understand the optimal tradeoffs between latency, throughput, and cost at various degrees of parallelism.

  • Work with our partners to help them identify bottlenecks and push forward new iterations of hardware and software.

  • Work closely with the rest of the research team to ensure systems are designed to be as efficient as possible from start to finish, and raise potential hardware-integration issues early.
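
A minimal sketch of what the profiling loop above looks like in practice, using torch.profiler. The model, shapes, and iteration counts below are illustrative placeholders, not Luma's actual workloads:

```python
import torch
from torch.profiler import profile, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
# Hypothetical workload: a single transformer layer stands in for a real model.
model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True).to(device)
x = torch.randn(8, 512, 1024, device=device)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, profile_memory=True, record_shapes=True) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(x)

# Sort by device time to surface the hottest kernels, and export a trace
# for inspection in a trace viewer (e.g. Perfetto or chrome://tracing).
sort_key = "self_cuda_time_total" if device == "cuda" else "self_cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=10))
prof.export_chrome_trace("trace.json")
```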

Must have experience

  • Experience optimizing for memory, latency, and throughput in PyTorch.

    • Bonus: experience with non-NVIDIA systems

  • Experience using torch.compile / torch_xla (PyTorch/XLA).

  • Experience benchmarking and profiling GPU & CPU code in PyTorch for optimal device utilization (examples: the PyTorch profiler, memory profilers, trace viewers, custom tooling); see the benchmarking sketch after this list.

  • Experience building tools & abstractions to ensure models run optimally on different hardware and software stacks.

  • Experience working with transformer models and attention implementations.

  • Experience with parallel inference, particularly tensor parallelism and pipeline parallelism.
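
A hedged sketch of the benchmarking implied by the torch.compile and profiling bullets above: timing eager vs. compiled execution with CUDA events, with warmup so one-time compilation and cold caches are not measured. The MLP and shapes are illustrative, and a CUDA device is assumed:

```python
import torch

device = "cuda"  # assumption: the CUDA events below require a GPU
mlp = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 4096)
).to(device)
x = torch.randn(64, 4096, device=device)

@torch.no_grad()
def bench_ms(fn, iters=100, warmup=10):
    # Warmup absorbs torch.compile's one-time compilation cost.
    for _ in range(warmup):
        fn(x)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(x)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # mean milliseconds per call

compiled = torch.compile(mlp)
print(f"eager:    {bench_ms(mlp):.3f} ms")
print(f"compiled: {bench_ms(compiled):.3f} ms")
```

The same harness extends naturally to throughput (items per second) and cost comparisons across degrees of parallelism.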

Good to have experience

  • Experience with high-performance Triton/CUDA and writing custom PyTorch kernels and ops. Top candidates will be able to write fused kernels for common hot paths, understand when to use lower-level features like tensor cores or warp intrinsics, and understand where these tools are most impactful (a short Triton sketch follows this list).

  • Experience writing high-performance parallel C++. Bonus if done within an ML context with PyTorch, e.g. for data loading, data processing, or inference code.

  • Experience building inference / demo prototype code (incl. Gradio, Docker, etc.)
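
A short Triton sketch of the kind of fused kernel the first bullet describes: a bias-add fused with a ReLU so the intermediate result never round-trips through global memory. Kernel and function names are illustrative, and Triton plus a CUDA device are assumed:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_bias_relu_kernel(x_ptr, bias_ptr, out_ptr, n_elements, n_cols,
                           BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    # Bias is broadcast along rows: index it by each element's column.
    b = tl.load(bias_ptr + (offsets % n_cols), mask=mask)
    # Fusing the add and the ReLU avoids an extra pass over global memory.
    tl.store(out_ptr + offsets, tl.maximum(x + b, 0.0), mask=mask)

def fused_bias_relu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    assert x.is_contiguous() and bias.shape[0] == x.shape[-1]
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    fused_bias_relu_kernel[grid](x, bias, out, n, x.shape[-1], BLOCK_SIZE=1024)
    return out

x = torch.randn(512, 1024, device="cuda")
bias = torch.randn(1024, device="cuda")
assert torch.allclose(fused_bias_relu(x, bias), torch.relu(x + bias))
```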

At Luma AI, we believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

We will deploy these systems to make a new kind of intelligent creative partner that can imagine with us, free from the pressure of being creative. It's for all of us whose imaginations have been constrained, who've had to channel vivid dreams through broken words, hoping others will see what we see in our mind's eye. A partner that can help us show, not just tell.

Dream Machine is an early step toward building that. Try it here.

Why you should join us:

  • Luma is bringing together the best team in the world to achieve our goal, from researchers and engineers to designers and growth operators

  • Luma is not just a lab - we are deeply product-focused, and our vision of merging AI models and delightful products is unique in the industry

  • We build. We ship. Our early products have been wildly successful
