LLM Training Resilience Engineer

San Francisco

Together AI

Run and fine-tune generative AI models with easy-to-use APIs and highly scalable infrastructure. Train and deploy models at scale on our AI Acceleration Cloud and GPU clusters, optimizing for performance and cost.



About Us

Together AI is at the forefront of AI infrastructure development, creating robust platforms and frameworks to support state-of-the-art large-scale machine learning training. We specialize in delivering resilient, high-performance systems that power breakthroughs in AI research and deployment.

We are seeking an LLM Training Resilience Engineer to ensure the reliability, fault tolerance, and scalability of our large-scale training infrastructure. If you are passionate about solving complex distributed systems problems and building highly available AI training pipelines, this role is for you.

 

Responsibilities

  • Resilience and Fault Tolerance Design:
    • Develop systems to identify, isolate, and recover from failures in large-scale distributed training workloads.
    • Implement proactive error-detection mechanisms, including straggler detection and fault prediction algorithms.
  • Distributed System Optimization:
    • Ensure stability and consistency across distributed training clusters (e.g., GPU/TPU clusters).
    • Optimize recovery time and throughput in the face of hardware or software failures.
  • Monitoring and Observability:
    • Design and maintain observability systems for monitoring cluster health, training performance, and failure patterns.
    • Leverage telemetry data to improve incident response and automate mitigation strategies.
  • Automation and Tooling:
    • Build resilience-focused tooling, such as job health monitors, distributed checkpoint systems, and automated recovery workflows (a minimal sketch of one such checkpoint-and-resume loop follows this list).
    • Enhance debugging and diagnosis frameworks for distributed training jobs.
  • Collaboration and Documentation:
    • Collaborate with platform engineers, researchers, and ML practitioners to identify pain points and resilience requirements.
    • Document and communicate best practices for fault-tolerant AI training.
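
To give a concrete flavor of the checkpoint-and-recovery tooling described above, here is a minimal, hypothetical sketch of a resumable training loop in PyTorch. The toy model, checkpoint path, and save interval are illustrative assumptions, not a description of Together AI's actual systems.

    # Minimal, hypothetical sketch: periodic checkpointing plus resume-on-restart.
    # The toy model, path, and interval are placeholders, not Together AI's stack.
    import os

    import torch
    import torch.nn as nn

    CKPT_PATH = "checkpoint.pt"   # would point at durable, shared storage in practice
    CKPT_INTERVAL = 100           # steps between checkpoints

    model = nn.Linear(512, 512)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    start_step = 0

    # Resume from the last checkpoint if one exists, so a restarted job
    # continues from saved state instead of losing all prior progress.
    if os.path.exists(CKPT_PATH):
        state = torch.load(CKPT_PATH, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_step = state["step"] + 1

    for step in range(start_step, 1_000):
        optimizer.zero_grad()
        loss = model(torch.randn(32, 512)).pow(2).mean()   # stand-in for a real loss
        loss.backward()
        optimizer.step()

        if step % CKPT_INTERVAL == 0:
            # Write atomically: save to a temp file, then rename, so a crash
            # mid-write never corrupts the last good checkpoint.
            tmp = CKPT_PATH + ".tmp"
            torch.save({"model": model.state_dict(),
                        "optimizer": optimizer.state_dict(),
                        "step": step}, tmp)
            os.replace(tmp, CKPT_PATH)

In a real multi-node job, the same pattern would typically be combined with sharded or asynchronous checkpointing and an orchestrator that restarts failed workers automatically.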

 

Qualifications

Must-Have:

  • Experience:
    • 5+ years of experience in distributed systems, cloud infrastructure, or large-scale machine learning training.
  • Technical Skills:
    • Proficiency in distributed computing frameworks (e.g., PyTorch DDP, TensorFlow, Horovod).
    • Strong knowledge of resilience strategies in distributed systems (e.g., leader election, consensus, retry mechanisms); a minimal sketch of one such retry pattern, instrumented with a Prometheus counter, follows this list.
    • Hands-on experience with observability tools (e.g., Prometheus, Grafana, ELK stack).
  • Programming:
    • Proficient in Python, Go, or a similar programming language.
  • Infrastructure:
    • Experience working with cloud platforms (e.g., AWS, GCP, Azure) and Kubernetes for workload orchestration.
  • Soft Skills:
    • Strong analytical, problem-solving, and debugging skills.
    • Excellent collaboration and communication skills.
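
As a rough illustration of the retry and observability patterns named above, here is a minimal, hypothetical sketch that wraps a flaky operation in exponential backoff and counts retries with the prometheus_client library. The metric name, port, and flaky_fetch stand-in are assumptions for illustration only.

    # Minimal, hypothetical sketch: retry with exponential backoff plus jitter,
    # instrumented with a Prometheus counter. Names and policy are illustrative.
    import random
    import time

    from prometheus_client import Counter, start_http_server

    RETRIES = Counter("training_op_retries_total",
                      "Transient failures that were retried")

    def with_retries(fn, max_attempts=5, base_delay=1.0):
        """Call fn(), retrying transient failures with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except Exception:
                if attempt == max_attempts:
                    raise                     # retries exhausted: surface the failure
                RETRIES.inc()                 # record the transient failure
                delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5)
                time.sleep(delay)

    if __name__ == "__main__":
        start_http_server(8000)               # expose /metrics for Prometheus to scrape

        def flaky_fetch():
            # Stand-in for, e.g., pulling a checkpoint shard from remote storage.
            if random.random() < 0.3:
                raise ConnectionError("transient network error")
            return b"shard-bytes"

        data = with_retries(flaky_fetch)
        print(len(data), "bytes fetched after retries")

In practice, the same counter could feed a Grafana dashboard or alerting rule so that rising retry rates are caught before they become job-level failures.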

Nice-to-Have:

  • Familiarity with GPU/TPU cluster management and scheduling.
  • Experience with high-availability database systems or message queues.
  • Experience with open-source contributions or community engagement.

 

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

 

Please see our privacy policy at https://www.together.ai/privacy  

 


