AI Innovation Security Researcher

Tel Aviv-Yafo, Tel Aviv District, IL

āš ļø We'll shut down after Aug 1st - try foošŸ¦ for all jobs in tech āš ļø

Cycode

Cycode’s AI-native Application Security Platform unites security and development teams with actionable, code-to-runtime context to identify, prioritize, and fix the software risk that matters.


Description

About the Role

We’re seeking an AI Innovation Security Researcher to serve as the critical link between our AI development team and our security experts. In this role, you will:

  • Translate real-world security challenges into AI-driven solutions
  • Shape prompt strategies and model workflows for security use-cases
  • Contribute to AI system development—help architect, prototype, and iterate on models and pipelines
  • Design and execute rigorous benchmarks to evaluate the performance of security-focused AI tools

Your work will power capabilities such as automated exploitability checks for SAST/SCA findings, AI-guided remediation of container vulnerabilities (e.g. Dockerfile misconfigurations, unsafe downloads), and detection/analysis of data leaks. You’ll also help amplify our thought leadership by authoring blogs and delivering conference talks on cutting-edge AI-security topics.

Key Responsibilities

Research & Benchmarking

  • Define evaluation frameworks for AI models tackling security tasks
  • Build test suites for exploitability analysis (e.g. proof-of-concept generation, severity scoring)
  • Measure and report on model accuracy, false-positive/negative rates, and robustness
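As an illustration of the benchmarking work described above (a minimal sketch, not Cycode's actual evaluation code; all names are hypothetical), scoring a triage model that labels findings as exploitable or not might start from simple confusion counts:

```python
# Minimal sketch: scoring a security-triage model's binary predictions
# (True = exploitable) against ground-truth labels. Illustrative only.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def rates(y_true, y_pred):
    """Accuracy plus the false-positive/negative rates the role reports on."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }

# Example: 4 SAST findings, model flags 3 as exploitable
truth = [True, False, True, False]
preds = [True, True,  True, False]
print(rates(truth, preds))  # accuracy 0.75, FPR 0.5, FNR 0.0
```

A real benchmark would add per-severity breakdowns and robustness checks (e.g. prompt or input perturbations), but the reporting ultimately reduces to metrics like these.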

AI Collaboration & Development

  • Work with ML engineers to craft and refine prompt templates for security scenarios
  • Contribute to model architecture design, fine-tuning, and deployment workflows
  • Investigate model behaviors, iterate on training data, and integrate new AI architectures as needed
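To make the prompt-template work above concrete (the fields and wording below are assumptions for illustration, not Cycode's production prompts), a security-scenario template might be parameterized directly from scanner output:

```python
# Illustrative only: one way a prompt template for exploitability triage
# could be parameterized from a SAST finding. Field names are hypothetical.
from string import Template

TRIAGE_PROMPT = Template(
    "You are an application-security analyst.\n"
    "Finding: $rule_id in $file_path\n"
    "Snippet:\n$snippet\n"
    "Question: Is this finding exploitable in practice? "
    "Answer 'exploitable' or 'not exploitable' and justify briefly."
)

def render_prompt(finding: dict) -> str:
    """Fill the template from a single SAST finding record."""
    return TRIAGE_PROMPT.substitute(
        rule_id=finding["rule_id"],
        file_path=finding["file_path"],
        snippet=finding["snippet"],
    )

prompt = render_prompt({
    "rule_id": "python.sql-injection",
    "file_path": "app/db.py",
    "snippet": 'cursor.execute("SELECT * FROM users WHERE id=" + uid)',
})
print(prompt)
```

Keeping templates declarative like this makes it easy to A/B different phrasings against the same benchmark suite when refining prompts with the ML engineers.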

Security Expertise & Tooling

  • Apply deep knowledge of static and software composition analysis (SAST/SCA)
  • Analyze container build pipelines to identify vulnerability origins and remediation paths
  • Leverage vulnerability databases (CVE, NVD), threat modeling, and risk assessment techniques

Content Creation & Evangelism

  • Write technical blog posts, whitepapers, and documentation on AI-driven security solutions
  • Present findings at internal brown-bags and external conferences
  • Mentor teammates on AI security best practices

Requirements


  • Bachelor’s or Master’s degree in Computer Science, Cybersecurity, AI/ML, or related field
  • 3+ years of experience in security research or application security engineering
  • Hands-on experience with LLMs (e.g. GPT, PaLM), prompt engineering, or fine-tuning workflows
  • Proficient in Python
  • Deep understanding of SAST/SCA tools (e.g. SonarQube, Snyk) and their outputs
  • Familiarity with container security tooling (Docker, Kubernetes, Trivy)
  • Strong data analysis skills for evaluating model outputs and security telemetry
  • Excellent written and verbal communication; ability to distill complex topics for diverse audiences
  • Collaborative mindset; experience working across research, engineering, and security teams

Preferred Qualifications

  • Experience in AI/ML system development—model training, fine-tuning, and production deployment
  • Publications or presentations in AI, security, or DevSecOps venues
  • Prior work developing open-source security tools or frameworks
  • Experience with cloud security services (AWS/Azure/GCP) and infrastructure-as-code scanning
  • Familiarity with CI/CD pipelines and MLOps tooling






Perks/benefits: Conferences

Region: Middle East
Country: Israel
