Research Scientist Graduate (High-Performance Computing (Inference Optimization) - Vision AI Platform-Seattle) - 2025 Start (PhD)

Seattle

ByteDance

ByteDance is a technology company operating a range of content platforms that inform, educate, entertain and inspire people across languages, cultures and geographies.



Responsibilities

About Doubao (Seed)
Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead cutting-edge research and drive technological and societal advancement.
With a strong commitment to AI, our research spans deep learning, reinforcement learning, language, vision, audio, AI infrastructure, and AI safety. Our team has labs and research positions across China, Singapore, and the US.
Leveraging substantial data and computing resources, and through continued investment in these domains, we have developed a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients via Volcano Engine. Today, the Doubao app stands as the most widely used AIGC application in China.

Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible. Together, we inspire creativity and enrich life, a mission we work toward every day. To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always. At ByteDance, we create together and grow together. That's how we drive impact for ourselves, our company, and the users we serve. Join us.

Team Introduction
The Doubao (Seed) Vision AI Platform team focuses on end-to-end infrastructure development and efficiency improvement for Seed's vision-based large-model development, including data pipeline construction, training and evaluation data delivery, and full-lifecycle efficiency enhancement for visual large models such as VLM, VGFM, and T2I. This also encompasses large-scale training stability and acceleration, as well as large-model inference and multi-node, multi-GPU deployment.

We are looking for talented individuals to join our team in 2025. As a graduate, you will get unparalleled opportunities to kickstart your career, pursue bold ideas, and explore limitless growth. Co-create a future driven by your inspiration with ByteDance.
Successful candidates must be able to commit to an onboarding date by the end of 2025.
We will prioritize candidates who can commit to the company start dates. Please state your availability and graduation date clearly in your resume.
Applications will be reviewed on a rolling basis. We encourage you to apply early.
Candidates can apply for a maximum of TWO positions and will be considered in the order in which they applied. The application limit applies to ByteDance and its affiliates' jobs globally.

Responsibilities:
1. Design and develop next-generation large model inference engines, optimizing GPU cluster performance for image/video generation and multimodal models to achieve industrial-grade low-latency & high-throughput deployment.
2. Lead inference optimization including CUDA/Triton kernel development, TensorRT/TRT-LLM graph optimization, distributed inference strategies, quantization techniques, and PyTorch-based compilation (torch.compile).
3. Build GPU inference acceleration stack with multi-GPU collaboration, PCIe optimization, and high-concurrency service architecture design.
4. Collaborate with algorithm teams on performance bottleneck analysis, software-hardware co-design for vision model deployment, and AI infrastructure ecosystem development.
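To give a flavor of the quantization techniques named in responsibility 2, here is a minimal, illustrative sketch of symmetric per-tensor int8 post-training quantization (PTQ) in plain Python. It is a toy for intuition only, not ByteDance's inference stack:

```python
# Illustrative sketch of symmetric per-tensor int8 post-training
# quantization (PTQ). Toy code, not production inference-engine code.

def quantize_int8(weights):
    """Map floats to int8 values q = round(w / scale), clamped to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats: w_hat = q * scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error per weight is bounded by scale / 2.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

In real engines (e.g. TensorRT or TRT-LLM), scales are typically per-channel and calibrated on activation statistics rather than per-tensor; the scale/2 error bound is the basic reason 8-bit precision usually suffices for inference.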

Qualifications

Minimum Qualifications:
1. Bachelor's/Master's or above in Computer Science/EE/related fields.
2. Proficient in C++/Python and high-performance coding.
3. Expertise in ≥1 domains: GPU programming (CUDA/Triton/TensorRT), model quantization (PTQ/QAT), parallel computing (multi-GPU/multi-node inference), or compiler optimization (TVM/MLIR/XLA/torch.compile).
4. Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
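As one illustration of the parallel-computing domain listed above, multi-GPU inference commonly shards a linear layer column-wise across devices and merges the partial outputs (tensor parallelism). The toy sketch below mimics that pattern in plain Python lists; no GPUs or communication libraries are involved, and all names are illustrative:

```python
# Toy column-parallel linear layer: shard the weight matrix's columns
# across "devices", compute partial outputs independently, then merge
# (the stand-in for an all-gather). Illustrative only.

def matmul(x, w):
    """x: (m, k) nested lists; w: (k, n). Returns (m, n)."""
    m, k, n = len(x), len(w), len(w[0])
    return [[sum(x[i][t] * w[t][j] for t in range(k)) for j in range(n)]
            for i in range(m)]

def split_columns(w, parts):
    """Shard w's columns into `parts` equal slices (n must divide evenly)."""
    n = len(w[0])
    step = n // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

x = [[1.0, 2.0]]
w = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]

shards = split_columns(w, 2)          # each "device" holds half the columns
partial = [matmul(x, ws) for ws in shards]
# Concatenating per-device outputs row-wise reproduces the full matmul.
y = [sum((p[i] for p in partial), []) for i in range(len(x))]
assert y == matmul(x, w)
```

Frameworks such as vLLM apply the same decomposition to attention and MLP weights, with NCCL collectives in place of the list concatenation here.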

Preferred Qualifications:
1. Experience with large-scale inference systems, vLLM/TGI customization, or advanced quantization/sparsity techniques.

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

ByteDance Inc. is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://shorturl.at/cdpT2

Job Information

【For Pay Transparency】Compensation Description (Annually)

The base salary range for this position in the selected city is $177,688 - $266,000 annually.

Compensation may vary outside of this range depending on a number of factors, including a candidate's qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.

Benefits may vary depending on the nature of employment and the country work location. Employees have day-one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, and wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year, and 17 days of Paid Personal Time (prorated upon hire with increasing accruals by tenure).

The Company reserves the right to modify or change these benefits programs at any time, with or without notice.

For Los Angeles County (unincorporated) Candidates:

Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:

1. Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;

2. Appropriately handling and managing confidential information, including proprietary and trade secret information, and access to information technology systems; and

3. Exercising sound judgment.




Region: North America
Country: United States
