Research Scientist Intern, Systems ML - SW/HW Co-Design - Inference
Menlo Park, CA
Meta
Giving people the power to build community and bring the world closer together
The AI System SW/HW Co-design team’s mission is to explore, develop, and help productize high-performance software and hardware technologies for AI at datacenter scale. We achieve this via concurrent design and optimization of many aspects of the system, such as models, algorithms, numerics, performance, and AI hardware including compute, networking, and storage. In essence, we drive the AI HW roadmap at Meta and ensure our existing and future AI workloads and software are well optimized and suited for the hardware infrastructure.
Meta is seeking Research Scientist Interns to join our AI & Systems Co-Design HPC & Inference team to drive the definition of our next-generation AI Systems Inference and Training architectures. The team works across hardware types (GPUs, ASICs), model types (Recommendation Models, LLMs, LDMs), and workloads (Training & Inference). The team drives innovation on:
- Low precision Numerics for Training & Inference
- ML operator / Kernel optimizations for Training & inference
- Inference E2E performance - Model, SW, System, Accelerator
- Performance modeling and simulations
- HPC Software Optimizations
- GPU / ASIC optimizations
- Software libraries, models, and frameworks
In this role, you will work cross-functionally with internal software and platforms engineering teams to understand the workloads and infrastructure requirements. You will drive technology path-finding, roadmap definition and co-design activities to deliver new capabilities and efficient systems for our fleet. You will also work with external industry partners to influence their roadmaps and build the best products for Meta’s Infrastructure.
Join our team and help shape one of the largest infrastructure footprints in the world, which powers Meta’s applications used by billions of people across the globe.
Our team at Meta AI offers internships of twelve (12) to sixteen (16) weeks, with various start dates throughout the year. To learn more about our research, visit https://ai.facebook.com.
Responsibilities
- Develop tools and methodologies for large scale workload analysis and extract representative benchmarks (in C++/Python/Hack) to drive early evaluation of upcoming platforms.
- Analyze evolving Meta workload trends and business needs to derive requirements for future offerings. Apply in-depth knowledge of how AI/ML systems interact with the surrounding compute and storage systems.
- Utilize extensive understanding of CPUs (x86/ARM), GPUs (Nvidia/AMD/Intel), collectives, and systems to identify bottlenecks and enhance product/service efficiency. Collaborate closely with software developers to re-architect services, improve the codebase through algorithm redesign, reduce resource consumption, and identify hardware/software co-design opportunities.
- Identify industry trends, analyze emerging technologies and disruptive paradigms. Conduct prototyping exercises to quantify the value proposition for Meta and develop adoption plans. Influence vendor hardware roadmap and broader ecosystem to align with Meta's roadmap requirements.
- Work with Software Services, Product Engineering, and Infrastructure Engineering teams to find the optimal way to deliver the hardware roadmap into production and drive adoption.
Minimum Qualifications
- Currently has, or is in the process of obtaining, a PhD in Computer Science or a related STEM field.
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
- Experience with hardware architecture, compute technologies and/or storage systems
- Intent to return to the degree program after the completion of the internship/co-op.
- Track record of achieving results as demonstrated by grants, fellowships, patents, as well as first-authored publications at leading workshops or conferences such as MICRO, ISCA, HPCA, ASPLOS, ATC, SOSP, OSDI, MLSys or similar.
Preferred Qualifications
- Architectural understanding of CPUs, GPUs, accelerators, networking, and systems.
- Some experience with large-scale infrastructure, distributed systems, and full-stack analysis of server applications.
- Experience or knowledge in developing and debugging in C/C++, Python and/or PyTorch.
- Experience driving original scholarship in collaboration with a team.
- Experience leading a team in solving analytical problems using quantitative approaches.
- Interpersonal experience: cross-group and cross-cultural collaboration.
- Experience in theoretical and empirical research and in answering questions with research.
- Experience communicating research for public audiences of peers.
Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.