Research Scientist, Large-Scale Learning
San Francisco, Amsterdam, London
Full Time · Senior-level / Expert · USD 225K - 300K
Together AI
Run and fine-tune generative AI models with easy-to-use APIs and highly scalable infrastructure. Train and deploy models at scale on our AI Acceleration Cloud and scalable GPU clusters. Optimize performance and cost.
About Model Shaping
The Model Shaping team at Together AI works on products and research for tailoring open foundation models to downstream applications. We build services that allow machine learning developers to choose the best models for their tasks and further improve these models using domain-specific data. In addition to that, we develop new methods for more efficient model training and evaluation, drawing inspiration from a broad spectrum of ideas across machine learning, natural language processing, and ML systems.
About the Role
As a Research Scientist in Large-Scale Learning, you will work on methods for increasing the efficiency of foundation model training, in terms of both speed and resource usage. You will analyze the limitations of state-of-the-art techniques for neural network training, as well as the unique performance challenges of Together’s training setups. Based on this analysis, you will propose and implement new approaches targeting both algorithmic improvements and systems optimizations.
After evaluating your ideas through experimentation, you will present your findings to the global scientific community at leading ML/ML Systems conferences and collaborate with your teammates to integrate those improvements into Together’s platform.
Responsibilities
- Define and drive the research agenda around efficiency and performance of foundation model training
- Study recent results from the broader AI research community, analyzing their relevance to the team’s research directions and ongoing projects
- Conduct experiments to empirically validate your hypotheses and compare the outcomes with relevant related work
- Share your findings both internally and externally (e.g., at top-tier conferences on ML and ML Systems)
- Partner with Machine Learning Engineers to integrate advanced methods into Together’s Model Shaping platform
Requirements
- Can autonomously design, implement, and validate your research ideas
- Skilled at writing high-quality and efficient code in Python and PyTorch
- Have first-author publications at leading conferences on ML or ML Systems (ICLR, ICML, NeurIPS, MLSys)
- Are a strong communicator, ready both to discuss your research plans with other scientists and to explain them to a broader audience
- Follow the latest advances in relevant subfields of AI
- Passionate about seeing your research create real-world impact through Together AI's services and willing to work hands-on with production systems to achieve it
Stand-out experience:
- Algorithmic modifications of large neural network training (e.g., novel optimization algorithms or model adaptation techniques)
- Distributed optimization (including federated learning, communication-efficient optimization, and decentralized training)
- ML systems optimizations for distributed training, memory efficiency, or compute efficiency
- Writing optimized NVIDIA GPU kernels or communication collectives using NVIDIA’s networking stack (e.g., NCCL or NVSHMEM)
- Running large-scale experiments on GPU clusters
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, RedPajama, SWARM Parallelism, and SpecExec. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits, as well as flexibility in terms of remote work. The US base salary range for this full-time position is $225,000 - $300,000. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy