Director, AI Security & Technology

Chicago, IL

IMO Health

From clinical terminology to streamlined workflows to data standardization, we enable insights that help improve patient care across the healthcare ecosystem.



At IMO Health, we’re seeking a visionary Director, AI Security & Technology to lead efforts in securing our AI/ML platforms. This high-impact role blends AI/ML security expertise with DevSecOps practices to protect our models, data, and infrastructure across cloud deployments. You’ll design and implement security measures that ensure compliance with healthcare regulations and promote Responsible AI principles. We use large and small language models to power features that improve clinical documentation within our SaaS products. As part of our team, you’ll help ensure these systems are secure, aligned with governance standards, and built to withstand emerging threats — playing a key role in how clinicians and patients benefit from responsible AI. 

WHAT YOU'LL DO:

  • Lead AI Security Strategy: Define the company’s AI security vision and build the roadmap for protecting AI/ML models, data pipelines, and inference systems. 
  • Model Risk Management: Identify and mitigate AI-specific threats like prompt injection, model leakage, data poisoning, and adversarial input risks. 
  • Secure Model Development & Deployment: Embed security into the LLM lifecycle — from data sourcing and training through fine-tuning, deployment, and updates. 
  • Governance & Compliance: Develop and enforce policies aligned with HIPAA, HITRUST, NIST AI RMF, and emerging AI regulations. Guide teams on responsible AI use. 
  • Cross-Functional Governance Collaboration: Work closely with internal governance stakeholders and teams (e.g., product, IT, and platform teams) to align on shared ownership of AI risk mitigation. 
  • AI Maturity & Readiness Evaluation: Lead an assessment of organizational maturity through a security lens — including skills, workflows, tools, and infrastructure — and drive a clear, actionable plan to close security gaps. 
  • Enable Customer Deployment Security: Create security guidelines, patterns, and support for customers deploying our models in their own environments. 
  • Monitor Emerging AI Threats: Stay current on AI vulnerabilities, red-teaming techniques, and global AI regulatory trends — and translate those into practical controls and guidance. 
  • Collaborate Across Teams: Partner closely with ML, DevOps, product, and compliance teams to make security scalable and usable — without slowing innovation. 

WHAT YOU'LL NEED:

  • 8+ years in information security with at least 2 years focused on AI/ML systems 
  • Deep understanding of LLMs and AI-specific risks 
  • Familiarity with securing MLOps workflows and model-serving infrastructure (e.g., GCP, Azure, AWS) 
  • Experience with threat modeling and mitigating attacks on AI models (e.g., prompt injection, inversion, poisoning) 
  • Knowledge of healthcare security requirements (HIPAA, HITRUST) and how they apply to AI/ML 
  • Strong communication skills — able to educate, influence, and guide both technical and executive audiences 
  • Experience working with or securing customer-hosted software solutions 

NICE TO HAVE:

  • Hands-on experience red-teaming LLMs or building secure RAG (retrieval-augmented generation) systems 
  • Understanding of AI-specific policy and regulation (e.g., EU AI Act, Executive Orders, NIST AI RMF) 
  • Passion for shaping safe and ethical AI in a high-impact domain like healthcare 



