AI Security Testing Lead, X-Force Red
Multiple Cities
IBM
For more than a century, IBM has been a global technology innovator, leading advances in AI, automation and hybrid cloud solutions that help businesses grow.At IBM, work is more than a job – it’s a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you’ve never thought possible. Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, lets talk.
Your Role and Responsibilities
As Head of AI Security Testing, you’ll be responsible for conducting applied research to develop novel ways to manipulate and breach AI systems, focused on meaningful impacts to safety and security. Joining a team of hackers, you’ll lead the AI security testing practice, helping to develop cutting-edge testing methodology and tooling for performing testing of GenAI applications, integrations, and API endpoints for security issues.
Taking a wider view of AI Red Teaming, you’ll lead or contribute to existing research for attacking the end-to-end AI ecosystem, targeting MLSecops platforms, assessing ML models for safety and security issues, attacking AI-as-a-Service platforms, and perform testing of GenAI applications, integrations, and API endpoints for security issues before production. You’ll help expand our existing internal AI training initiatives and provide select training to our wider team of hackers to ensure they keep ahead of technology advancements to effectively assess AI systems.
As the face of AI Red Teaming and AI security thought leadership in IBM X-Force, you will discuss strategies for securing and defending AI systems with key customers, while enabling other technical team members to have customer conversations on your behalf. Working with product offering management, you’ll help to refine our AI security offerings to balance manual and automated testing within MLSecOps. You will collaborate closely with our X-Force Offensive Research (XOR), X-Force Adversary Services, and X-Force Red teams to conduct practical research focused on real-world customer impact, leading research on attacking GenAI and identifying novel ways to achieve malicious code execution, unauthorized actions, and data theft. You’ll also collaborate with other IBM AI-focused teams including watsonx and IBM Research.
Simulating sophisticated threat actors takes industry leading offensive research, advanced capabilities, and mature methodology. We believe offensive AI research is essential for both simulating various sophistication levels of threat actors and enabling defenders to better understand, defend, and respond to attacks. IBM’s X-Force Adversary Services team is considered one of the top teams in the industry because we leverage Continuous Capability Development and Delivery (C2D2) to drive research, new tools, and develop mature Standard Operation Procedures (SOPs) and to ensure all operators are delivering red team exercises to the highest technical standards. We leverage automation and AI in targeting, tasking, and analysis to free up our human operators to solve the more interesting challenges for hacking the world’s largest banks, defense contractors, and critical industries.
We are looking for individuals that are driven, proactive, thorough, and forward looking, and most of all, know what’s needed to be part of an effective team.
Responsibilities of the Role:
- Solving problems that do not have known solutions
- Help develop methodologies for offensive AI design, implementation, and testing
- Help develop offensive AI tooling and frameworks
- Researching threats, vulnerabilities, and exploit techniques within AI technologies
- Incorporate feedback loops with peers on AI research and tooling
- Provide guidance and offense-related insights throughout IBM on AI technologies
Required Technical and Professional Expertise
Competencies required:
- Ability to collaborate effectively with team members
- Strong written and verbal communication skills in English
- Strong creative problem-solving skills
- Experience with offensive use of generative AI and large language models
- Experience developing software used within enterprise environments
- Experience developing offensive tooling or frameworks
Required Technical and Professional Expertise:
- Experience attacking AI systems:
- Experience with Model Evasion, Extraction, Inversion, Poisoning attacks as well as LLM Prompt Injection
- Attacking RAG interfaces, deployment orchestrators, and integrations with associated XaaS platform infrastructure
- Strong application security testing experience
- Assessing the potential impact of backdoored or compromised model or AI application environment and validate detections for attacks against datasets.
- History of published AI security testing tools, blogs, CVEs, or conference talks
- 3+ years coding in two or more programming languages (Python, C#, C/C++, Assembly, Rust)
Preferred Technical and Professional Expertise
- Testing of within the MLSecOps pipeline and production environments for attack paths from an adversary’s perspective.
- Focused security testing on SaaS and PaaS platforms leveraged by GenAI applications to insecure security configurations and integrations with AI platforms such as Amazon SageMaker, Azure ML, BigML, Watsonx.ai, etc.
- Attacking MLSecOps training and production environments including targets such as MLflow, Kubeflow, Apache Airflow, H2O.ai and TensorFlow.
- Offensive use of AI agents and workflows- experience evaluating AI models and creating test harnesses for offensive use
- 5+ years of adversary tradecraft industry experience
- History of developing open-source software for the security community
- History of presenting at security conferences
- Experience with Adversarial Robustness Toolbox, TextAttack, Augly, Garak, Pyrit, etc.
- Track record in vulnerability research and CVE assignments related adversarial ML
- Experience with network protocols and packet capture
- Knowledge of Linux internals, Active Directory, Mac, Windows workstations and servers
- Relevant certifications from organizations like Offensive Security’s OSCE, SANS’ GXPN, or CREST’s CSAT/CSAM or demonstrable equivalent skills
- Knowledgeable of the phases of software development, from gathering requirements to deployment (SDLC)
- Experience with enterprise data lakes, relational/vector databases, complex data structures and data analysis tools, offensive data schema development and format conversations
- Experience using and validating AI-as-a-Service platforms such as with AI platforms such as Amazon SageMaker, Azure ML, BigML, Watsonx.ai
- Prior security consulting experience
Key Job Details
Role:AI Security Testing Lead, X-Force Red Location: Multiple Locations See All Boston San Francisco Denver New York Houston Category:Consulting Employment Type:Full-Time Travel Required:No Travel Contract Type:Regular Company:(0147) International Business Machines Corporation Req ID:736264BR
Projected Minimum Salary:$153,000 per year Projected Maximum Salary:$153,000-$285,000/year per year Date Posted:November 6, 2024
Tags: Airflow APIs Azure BigML Consulting Data analysis Generative AI Kubeflow Linux LLMs Machine Learning MLFlow ML models Open Source Python RAG Research Rust SageMaker SDLC Security TensorFlow Testing
Perks/benefits: Conferences
More jobs like this
Explore more career opportunities
Find even more open roles below ordered by popularity of job title or skills/products/technologies used.