Associate Director

Bangalore, Karnataka, India

KPMG India


As the AI Chief Information Security Officer (AI CISO), you will lead AI Risk, Data Security, and Legal & Compliance initiatives for KDN AI Labs, allowing you to shape the future of AI-driven solutions within a global professional services network.

Key responsibilities include:

AI Security & Risk Management

  • Establish AI-specific security policies, risk frameworks, and threat and governance models to protect AI-driven solutions using public, private, and proprietary data.
  • Develop and implement a comprehensive AI security strategy aligned with business objectives and regulatory requirements.
  • Implement secure-by-design principles in AI/ML models, ensuring robustness against adversarial attacks, bias mitigation, and data poisoning.
  • Identify AI-specific threats (e.g., model inversion, prompt injection, model leakage) and establish mitigation strategies.
  • Conduct AI threat modeling and security assessments for prototypes/MVPs.
  • Guide zero-trust architecture and secure MLOps best practices.
AI Risk & Compliance Governance
  • Develop and implement a global AI risk management framework for KPMG’s AI initiatives.
  • Define approval workflows, compliance protocols, and legal review processes for AI projects.
  • Implement secure AI lifecycle management, addressing risks like model poisoning, adversarial attacks, and data breaches.
  • Ensure AI models and data processing comply with GDPR, HIPAA, the NIST AI Risk Management Framework, CPRA, APRA CPS 234, and other international regulations.
  • Oversee the submission of the 16.6.4 compliance form, ensuring all AI projects undergo risk assessment before deployment.
  • Act as a liaison between AI teams and compliance, risk, and legal departments to ensure all AI-driven solutions meet regulatory standards.
  • Establish AI model validation and testing protocols to mitigate risks before full-scale deployment.
AI Data Security & Privacy
  • Define data governance standards for AI initiatives.
  • Enforce data governance and privacy-by-design principles for AI models handling sensitive or PII data.
  • Oversee AI security controls to prevent data leakage, unauthorized access, and model inversion attacks.
  • Implement secure data handling and anonymization techniques to protect sensitive AI training data.
  • Ensure AI models and pipelines adhere to data privacy laws and cross-border data transfer regulations.
  • Collaborate with AI engineers and security teams to establish secure AI training, deployment, and inference environments.
  • Conduct AI security audits and penetration tests to assess vulnerabilities in AI solutions.
AI Legal & Regulatory Advisory
  • Provide legal risk assessments for AI initiatives across Tax, Audit, and Advisory services.
  • Guide AI teams on intellectual property (IP) protection, licensing, and fair AI use policies.
  • Ensure AI models adhere to ethics and bias mitigation standards as per global AI regulations.
  • Monitor emerging AI laws and regulations and advise leadership on necessary compliance updates.
  • Engage with regulatory bodies, industry groups, and cybersecurity alliances to shape AI security standards.
  • Lead AI security audits, governance reviews, & compliance assessments.
AI Risk Strategy & Secure Adoption
  • Work closely with the AI Technology Architect to ensure secure AI deployment as agentic AI is adopted.
  • Advise business leaders on AI governance and compliance strategies to maximize AI innovation while mitigating risk.
  • Identify best-in-class AI risk management tools (both open-source and proprietary) to enhance KPMG’s AI security posture.
  • Define AI security guardrails for development teams working on LLMs, autonomous AI agents, and generative AI solutions.
  • Secure AI workloads, APIs, and AI-as-a-Service deployments on cloud platforms (AWS, Azure, GCP).
Data Security & Privacy Compliance
  • Ensure AI data governance, including data residency, encryption, anonymization, and access controls for sensitive AI datasets.
  • Align AI solutions with GDPR, CCPA, HIPAA, ISO 27001, NIST AI RMF, and industry-specific AI security frameworks.
  • Define AI data lineage, ownership, and lifecycle security measures.
  • Collaborate with data privacy teams to implement privacy-preserving AI techniques (e.g., differential privacy, federated learning).
Legal and Regulatory Compliance for AI
  • Interpret AI regulatory frameworks (EU AI Act, US AI EO, UK AI Safety Standards, etc.) and translate them into implementation strategies.
  • Establish legal guardrails for AI model explainability, auditability, and fairness.
  • Work with legal teams to ensure intellectual property protection for AI models and third-party AI risk management.
  • Review AI contracts, licensing agreements, and third-party AI APIs for security and compliance risks.
Hands-On AI Security Guidance for Tech Teams
  • Act as a trusted advisor for AI engineers, guiding them on secure coding, AI security tools, and best practices.
  • Lead AI security architecture reviews & enforce secure MLOps pipelines.
  • Implement AI Red Teaming exercises to test model resilience and adversarial robustness.
  • Support secure deployment strategies (e.g., cloud security, containerized AI environments, and model access controls).
AI Security Incident Response & Monitoring
  • Establish an AI-specific incident response framework for detecting and responding to AI-related security threats.
  • Implement continuous monitoring of AI systems for drift, anomalies, and adversarial exploitation.
  • Leverage AI-powered security tools (e.g., AI-driven SIEM, anomaly detection, and ML security scanners).

Technical & Security Expertise

  • 10+ years of experience in cybersecurity, AI risk, data security, or related fields, including 5+ years of experience in AI/ML security, model governance, or AI compliance.
  • Strong understanding of MLOps security, AI adversarial threats, model poisoning, data exfiltration, and AI risk frameworks.
  • Hands-on experience with AI security tools (e.g., ModelScan, RobustML, Microsoft Purview, IBM Watson OpenScale).
  • Experience securing ML pipelines, LLMs, and AI APIs.
  • Deep knowledge of cryptographic techniques for AI security (homomorphic encryption, secure multi-party computation, differential privacy, etc.).
  • Familiarity with secure AI coding practices (e.g., Python, TensorFlow, PyTorch, LangChain security best practices).

Legal & Compliance Knowledge

  • In-depth understanding of global AI regulations and standards (EU AI Act, NIST AI RMF, ISO 42001, GDPR, CCPA, etc.).
  • Experience in legal assessments of AI bias, fairness, and explainability.
  • Knowledge of intellectual property rights, AI contracts, and AI risk audits.

Leadership & Advisory Skills

  • Experience in advising AI development teams, guiding security reviews, and implementing compliance-driven AI solutions.
  • Ability to translate complex security and legal concepts into actionable AI governance strategies and clear business impact for business units.
  • Strong cross-functional collaboration with technology, legal, compliance, and risk management teams.
  • Ability to drive a culture of security-first AI development across the organization.

Preferred Certifications

  • CISSP, CCSP, CISM, CISA (Security & Risk Certifications)
  • Certified AI Governance Professional (CAIGP), ISO 42001 Lead Auditor (AI Compliance & Governance)
  • Azure AI Security, Google ML Security Specialist (Cloud AI Security)

Perks/benefits: Career development
