Senior Staff ML Engineer - AI Safety & Evaluation

San Jose, California, United States

A10 Networks

Security solutions, threat intelligence, infrastructure, and application delivery for enterprises and service providers across on-premises, multi-cloud, and edge-cloud environments.

About the Team
We’re building a future where AI systems are not only powerful but also safe, aligned, and robust against misuse. Our team focuses on advancing practical safety techniques for large language models (LLMs) and multimodal systems, ensuring these models remain aligned with human intent and resist attempts to produce harmful, toxic, or policy-violating content.

We operate at the intersection of model development and real-world deployment, with a mission to build systems that can proactively detect and prevent jailbreaks, toxic behaviors, and other forms of misuse. Our work blends applied research, systems engineering, and evaluation design to ensure safety is built into our models at every layer.

 

About the Role
We’re looking for a Senior Staff ML Engineer to help lead our efforts in designing, building, and evaluating next-generation safety mechanisms for foundation models. You’ll guide a team of research engineers focused on scaling safety interventions, building tooling for red teaming and model inspection, and designing robust evaluations that stress-test models in realistic threat scenarios.

 

What You’ll Do

  • Lead the development of model-level safety defenses to mitigate jailbreaks, prompt injection, and other forms of unsafe or non-compliant outputs
  • Design and develop evaluation pipelines to detect edge cases, regressions, and emerging vulnerabilities in LLM behavior
  • Contribute to the design and execution of adversarial testing and red teaming workflows to identify model safety gaps
  • Support fine-tuning workflows, pre/post-processing logic, and filtering techniques to enforce safety across deployed models
  • Work with red teamers and researchers to turn emerging threats into testable evaluation cases and measurable risk indicators
  • Stay current on LLM safety research, jailbreak tactics, and adversarial prompting trends, and help translate those into practical defenses for real-world products 

 

What We’re Looking For

  • 5+ years of experience in machine learning or AI systems, with 2+ years in a technical leadership capacity
  • Experience integrating safety interventions into ML deployment workflows (e.g., inference servers, filtering layers)
  • Strong understanding of transformer-based models and hands-on experience with LLM safety, robustness, or interpretability
  • Strong background in evaluating model behavior, especially in adversarial or edge-case scenarios
  • Strong communication skills and ability to drive alignment across diverse teams
  • Bachelor’s, Master’s, or PhD in Computer Science, Machine Learning, or a related field
     

Compensation: up to $192K*

* Salary range is an estimate based on our AI, ML, Data Science Salary Index 💰

Tags: Computer Science Engineering LLMs Machine Learning ML models PhD Pipelines Prompt engineering Research Testing

Region: North America
Country: United States
