Compiler Software Engineer Intern

Bangalore, Karnataka, India

d-Matrix

d-Matrix delivers efficient AI computing solutions for large language models, optimizing AI inference with enhanced memory bandwidth.

d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall to minimize data movement. We’ve achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B offering, d-Matrix is poised to scale Generative AI inference acceleration for Large Language Models with our chiplet and in-memory compute approach, and we are on track to deliver our first commercial product in 2024, meeting the energy and performance demands of these models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.

Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba and Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.

Location:

Hybrid, working onsite at our Bengaluru, Karnataka office 3-5 days per week.

The Role: Compiler Software Engineer Intern

What you will do:

The Compiler Team at d-Matrix is responsible for developing the software that performs the logical-to-physical mapping of a graph expressed in an IR dialect (like Tensor Operator Set Architecture (TOSA), MHLO or Linalg) to the physical architecture of the distributed parallel memory accelerator used to execute it. It performs multiple passes over the IR to apply operations like tiling, compute resource allocation, memory buffer allocation, scheduling and code generation. You will be joining a team of exceptional people enthusiastic about developing state-of-the-art ML compiler technology. This internship position is for 3 months.
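For illustration, the kind of graph such a compiler consumes might look like the following minimal, hypothetical MLIR function in the TOSA dialect (this is a generic example, not d-Matrix's actual input format):

```mlir
// A trivial compute graph: elementwise add followed by an activation.
// A backend compiler would tile these tensors, allocate compute and
// memory resources, schedule the ops, and emit target code.
func.func @example(%a: tensor<128x128xf32>,
                   %b: tensor<128x128xf32>) -> tensor<128x128xf32> {
  %sum = tosa.add %a, %b : (tensor<128x128xf32>, tensor<128x128xf32>)
                           -> tensor<128x128xf32>
  %out = tosa.sigmoid %sum : (tensor<128x128xf32>) -> tensor<128x128xf32>
  return %out : tensor<128x128xf32>
}
```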


In this role you will design, implement and evaluate a method for managing floating-point data types in the compiler. You will work under the guidance of two members of the compiler backend team, one of whom is an experienced compiler developer based on the West Coast of the US.

You will engage and collaborate with the engineering team in the US to understand the mechanisms made available by the hardware design to perform efficient floating-point operations using reduced-precision floating-point data types.

Successful completion of the project will be demonstrated by a simple model, output by the compiler with your code incorporated, that executes correctly on the hardware instruction set architecture (ISA) simulator. This model will incorporate various number-format representations for reduced-precision floating point.

What you will bring:

•    Bachelor’s degree in Computer Science, or equivalently at least 3 years toward an Engineering degree with emphasis on computing and mathematics coursework.
•    Proficiency with C++ object-oriented programming is essential.
•    Understanding of fixed-point and floating-point number representations, floating-point arithmetic, reduced-precision floating-point representations, and the methods used to convert between them.
•    Some experience in applied computer programming (e.g. prior internship).
•    Understanding of basic compiler concepts and methods used in creating compilers (ideally via a compiler course).
•    Data structures and algorithms for manipulating directed acyclic graphs.

Desired:
•    Familiarity with sparse matrix storage representations.
•    Hands-on experience with CNN, RNN and Transformer neural network architectures.
•    Experience programming GPUs and specialized hardware accelerator systems for deep neural networks.
•    Passion for learning new compiler development methodologies such as MLIR.
•    Enthusiasm for learning new concepts from compiler experts in the US, and a willingness to overcome time-zone barriers to facilitate collaboration.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
