Security Engineer, AI Agent Security
New York, NY, USA
⚠️ We'll shut down after Aug 1st - try foo🦍 for all jobs in tech ⚠️
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with security assessments or security design reviews or threat modeling.
- 2 years of experience with security engineering, computer and network security and security protocols.
- 2 years of coding experience in one or more general purpose languages.
Preferred qualifications:
- Master's or PhD degree in Computer Science or a related technical field with a specialization in Security, AI/ML, or a related area.
- Experience in AI/ML security research, including areas like adversarial Machine Learning (ML), prompt injection, model extraction, or privacy-preserving ML.
- Experience developing or evaluating security controls for large-scale systems.
- Experience in secure coding practices, vulnerability analysis, security architecture, and web security.
- Ability to contribute in security research (e.g., publications in relevant security/ML venues, CVEs, conference talks, open-source tools).
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Google's Secure AI Framework (SAIF) team is at the forefront of AI Agent Security. You will pioneer defenses for systems like Gemini and Workspace AI, addressing novel threats unique to autonomous agents and Large Language Models (LLM), such as advanced prompt injection and adversarial manipulation. Your responsibilities include researching vulnerabilities, designing innovative security architectures, prototyping mitigations, and collaborating to implement scalable solutions. You will need to have strong security research/engineering skills, an attacker mindset, and systems security proficiency. You will help define secure development practices for AI agents within Google and influence the broader industry in this rapidly evolving field.
The US base salary range for this full-time position is $141,000-$202,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
- Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
- Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
- Collaborate with engineering and research teams to translate research findings into practical, scalable security solutions deployable across Google's agent ecosystem.
- Stay current with the state-of-the-art in AI security, adversarial ML, and related security fields through literature review, conference attendance, and community engagement.
Tags: Architecture Computer Science Engineering Gemini LLMs Machine Learning Open Source PhD Privacy Prototyping Research Security Testing
Perks/benefits: Career development Equity / stock options Salary bonus
More jobs like this
Explore more career opportunities
Find even more open roles below ordered by popularity of job title or skills/products/technologies used.