AI Governance Associate Director

Waltham, MA, United States

Wolters Kluwer

Wolters Kluwer is a global provider of professional information, software solutions, and services.



NOTE: This is a hybrid position requiring 8 days per month onsite at an approved Wolters Kluwer location.


The AI Governance Associate Director will serve as a key leader in the evolution and execution of the enterprise AI governance framework. This role is designed for a highly experienced governance professional who brings both strong strategic intuition and operational rigor. The ideal candidate is capable of navigating complex use cases, leading cross-functional discussions with conviction, and translating regulatory and ethical requirements into structured, scalable governance solutions.

This individual will take ownership of the communication, education, and socialization of the AI governance framework across the enterprise, ensuring that stakeholders at all levels understand their roles, responsibilities, and the value of compliant and responsible AI development. This includes both the initial rollout and ongoing updates as the framework matures.

Key Responsibilities

  • Lead the design, implementation, and continual refinement of AI governance workflows, policies, and controls to support evolving business needs and regulatory developments.
  • Serve as a senior governance authority and educator, responsible for communicating and socializing the AI governance framework across functions, including the delivery of onboarding materials, roadshows, and stakeholder briefings.
  • Act as a thought partner to AI use case owners, translating high-level principles into actionable governance requirements while enabling innovation.
  • Facilitate and lead complex, high-stakes governance discussions with Legal, Audit, IT Security, and Compliance, often involving novel risks or ethical challenges.
  • Drive alignment with model governance standards, leveraging deep familiarity with frameworks like SR 11-7 and adapting them to modern AI risks.
  • Partner with internal teams (e.g., Legal, Security) to ensure governance control effectiveness and implement enhancements in response to audit findings.
  • Author and maintain governance artifacts (e.g., model card templates, risk assessment templates, exceptions, escalation memos) aligned to regulatory and organizational standards.
  • Oversee creation and institutionalization of SOPs, governance workflows, and decision-making pathways, ensuring consistent implementation.
  • Monitor and interpret regulatory changes (e.g., EU AI Act, NIST AI RMF, GDPR), and lead timely adjustments to internal governance frameworks.
  • Maintain structured logs of governance questions, interpretations, and action items to ensure transparency and continuity.
  • Lead integrations between the AI governance framework and internal tooling, including:
    • Working directly with the AI Enabling Team to align the AI Tracker with system workflows.
    • Partnering with UX, Legal, IT Security, and other functions to embed governance into the development pipeline.
  • Champion AI-enabled enhancements to governance operations (e.g., risk tagging, self-service guidance, automation).
  • Contribute to the long-term roadmap for agentic AI integration, ensuring governance integrity remains intact even as autonomy increases.

Qualifications & Skills

Education

  • Bachelor’s degree required; Master’s in a quantitative field, Law, Public Policy, Risk Management, or Business Administration strongly preferred.

Experience

  • 7+ years of experience in AI governance, model governance, compliance, risk, audit, or a similar function.
  • Demonstrated success in building and rolling out governance frameworks, driving adoption across diverse stakeholders.
  • Extensive experience with cross-functional integration projects, including IT, Legal, Security, and business units.
  • Strong track record of educating and influencing teams, especially around new or evolving governance processes.
  • Deep familiarity with regulated industries and well-established model risk governance (e.g., SR 11-7 frameworks) is a plus.

Core Competencies

  • Deep understanding of AI and ML risks, such as bias, transparency, monitoring, and explainability.
  • Strong analytical and documentation skills, with the ability to write clear, regulatory-grade governance artifacts.
  • Familiarity with PowerApps, Tableau, workflow automation tools, and governance technology platforms.
  • Excellent communication and change management skills, especially when guiding teams through ambiguity or evolving regulatory landscapes.
  • Exceptional multitasking and prioritization skills, with the ability to manage multiple governance initiatives, stakeholder requests, and regulatory updates simultaneously without losing focus or quality.
  • Clear, persuasive communicator across technical, legal, and executive audiences, capable of translating abstract governance concepts into actionable language and fostering alignment across diverse stakeholders.

Mindset & Approach

  • Comfort with ambiguity, novelty, and incomplete guidance—able to construct governance structures from the ground up when none exist.
  • Ability to manage and lead through “known unknowns” and “unknown unknowns”, with a pragmatic mindset grounded in risk prioritization.
  • A structured, diplomatic thinker who balances compliance needs with business goals and user experience.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.

Compensation:

Target salary range (CA, CT, CO, DC, HI, IL, MD, MN, NY, RI, WA): $183,700 - $260,050
