Machine Learning Software Engineering Intern

Santa Clara, CA

d-Matrix

d-Matrix delivers efficient AI computing solutions for large language models, optimizing AI inference with enhanced memory bandwidth.

d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement; we have achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B round, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru.

Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba, and Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.

Location:

Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

The role: Machine Learning Software Engineering Intern

What you will do:

The Software Team at d-Matrix is looking for an ML Software Engineering Intern to join the team. Location can be Santa Clara or remote. You will be joining a team of exceptional professionals enthusiastic about tackling some of the biggest challenges of AI compute. In this role, you will work in one or more of the following domains:

  • Develop performant implementations of SOTA ML models such as LLaMA, GPT, BERT, and DLRM.

  • Develop and maintain tools for performance simulation, analysis, debugging, and profiling.

  • Develop AI infrastructure software such as the kernel compiler, inference engine, and model factory.

  • Develop QA systems and automation software.

You will engage and collaborate with the rest of the SW team to meet development milestones, and contribute to publications and intellectual property as applicable.

What you will bring:

Minimum:

  • Enrolled in a Bachelor's degree program in Computer Science, Electrical and Computer Engineering, or a related scientific discipline.

  • A problem solver, able to break down and simplify complex problems and arrive at elegant, efficient solutions.

  • Proficient in Python, C, or C++.

Desired:

  • Enrolled in an MS or PhD program in Computer Science, Electrical and Computer Engineering, or a related scientific discipline.

  • Understanding of CPU / GPU architectures and their memory systems.

  • Experience with specialized HW accelerators for deep neural networks.

  • Experience developing high-performance kernels, simulators, debuggers, etc. targeting GPUs or other accelerators.

  • Experience using Machine Learning frameworks, like PyTorch (preferred), TensorFlow, etc.

  • Experience with Machine Learning compilers, like MLIR (preferred), TVM, etc.

  • Experience deploying inference pipelines and using or developing inference engines such as vLLM or TensorRT-LLM.

  • Passionate about AI and thriving in a fast-paced and dynamic startup culture.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

