ML Compute Acceleration Engineer

Waltham, Massachusetts, United States

Apple

We’re a diverse collective of thinkers and doers, continually reimagining what’s possible to help us all do what we love in new ways.



Summary

Posted: Oct 2, 2024

Role Number: 200571226

Apple’s Compute Frameworks team, part of the GPU, Graphics and Displays organization, provides a suite of high-performance data-parallel algorithms for developers inside and outside of Apple on iOS, macOS, and Apple TV. Our efforts currently focus on linear algebra, image processing, and machine learning, along with other projects of key interest to Apple. We are always looking for exceptionally dedicated individuals to grow our outstanding team and lay the foundation for technologies like Apple Intelligence.

Description


Our team is seeking extraordinary machine learning and GPU programming engineers who are passionate about providing robust compute solutions for accelerating machine learning networks on Apple Silicon using the GPU and Neural Engine. This role has the opportunity to influence the design of compute and programming models in next-generation GPU and Neural Engine architectures.

Responsibilities:

  • Adding optimizations to the machine learning computation graph.
  • Defining and implementing APIs in Metal Performance Shaders Graph and investigating new algorithms (a brief illustrative sketch appears below).
  • Developing and maintaining an MLIR dialect inside Apple and in open source, with upgrades to the latest LLVM.
  • Performing in-depth analysis and compiler- and kernel-level optimizations to ensure the best possible performance across hardware families.
  • Tuning GPU- and Neural Engine-accelerated compute across products.
  • Tuning the cost model and optimizing runtime dispatch to multiple IPs to get the best performance on Apple Silicon.

Intended deliverables:

  • GPU compute acceleration technology.
  • Apple Intelligence implementation and acceleration.
  • Optimized compute graphs across products.

If this sounds of interest, we would love to hear from you!
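As a rough, illustrative sketch of the Metal Performance Shaders Graph APIs referenced above (not a description of this team's internal code), the Swift snippet below builds a tiny compute graph, a matrix multiplication followed by a ReLU, and runs it on the default Metal device; the shapes and values are arbitrary.

    import Foundation
    import Metal
    import MetalPerformanceShadersGraph

    // Build a tiny graph: y = relu(x * w). Shapes and values are illustrative only.
    let graph = MPSGraph()
    let x = graph.placeholder(shape: [1, 4], dataType: .float32, name: "x")
    let w = graph.placeholder(shape: [4, 2], dataType: .float32, name: "w")
    let product = graph.matrixMultiplication(primary: x, secondary: w, name: "matmul")
    let output = graph.reLU(with: product, name: "relu")

    // Bind host data to the placeholders on the default Metal device.
    let mtlDevice = MTLCreateSystemDefaultDevice()!
    let device = MPSGraphDevice(mtlDevice: mtlDevice)

    func tensorData(_ values: [Float], shape: [NSNumber]) -> MPSGraphTensorData {
        let bytes = values.withUnsafeBufferPointer { Data(buffer: $0) }
        return MPSGraphTensorData(device: device, data: bytes, shape: shape, dataType: .float32)
    }

    let feeds: [MPSGraphTensor: MPSGraphTensorData] = [
        x: tensorData([1, -2, 3, -4], shape: [1, 4]),
        w: tensorData([1, 0, 0, 1, 1, 0, 0, 1], shape: [4, 2])
    ]

    // run(feeds:targetTensors:targetOperations:) compiles and executes the graph,
    // returning one MPSGraphTensorData per requested tensor.
    let results = graph.run(feeds: feeds, targetTensors: [output], targetOperations: nil)
    print(results[output]!.shape)  // [1, 2]

The role described here sits underneath code like this: defining such APIs, optimizing the compiled graph, and dispatching it to the GPU or Neural Engine, rather than simply calling the framework.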

Minimum Qualifications


  • Proven programming and problem-solving skills.
  • Good understanding of machine learning fundamentals.
  • Knowledge of GPU compute programming models and optimization techniques.
  • Experience with GPU compute framework development, maintenance, and optimization.
  • Experience with system level programming and computer architecture.
  • Experience with high performance parallel programming, GPU programming or LLVM/MLIR compiler infrastructure is a plus.


Preferred Qualifications


  • Background in mathematics, including linear algebra and numerical methods.
  • Strong communication and collaboration skills.
  • Strong background in building high-performance, production-quality software on schedule.
  • Experience with compiler technologies.
  • Experience adding computational graph support, runtimes, or device backends to machine learning libraries (TensorFlow, PyTorch, or JAX) is a plus.



Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.






