Senior AI Systems Engineer
Toronto, Ontario, Canada
Benevity
Benevity's corporate purpose software offers the only integrated suite of community investment, employee, customer and nonprofit engagement solutions.
Meet Benevity
Benevity is the way the world does good, providing companies (and their employees) with technology to take social action on the issues they care about. Through giving, volunteering, grantmaking, employee resource groups and micro-actions, we help most of the Fortune 100 brands build better cultures and use their power for good. We’re also one of the first B Corporations in Canada, meaning we’re as committed to purpose as we are to profits. We have people working all over the world, including Canada, Spain, Switzerland, the United Kingdom, the United States and more!
We’re looking for a Senior AI Engineer to lead the design and deployment of intelligent, scalable AI systems. In this role, you'll apply deep technical expertise across the AI/ML stack — from foundation models to system orchestration — to build real-world, production-ready applications. You’ll shape experiences powered by LLMs, retrieval systems, and intelligent automation, while contributing to a platform that prioritizes responsible AI. You’ll work cross-functionally with data scientists, product managers, and platform engineers to help steer the long-term direction of Benevity’s AI capabilities.
This role offers growth potential into a Lead AI Architect position as we scale our AI capabilities across the Benevity Impact Platform.
What you’ll do:
AI System Design & Development
- Architect and implement intelligent AI workflows for complex task execution using LLMs and other AI techniques
- Design retrieval-augmented generation (RAG) systems and integrate them with broader platform capabilities
- Build automation frameworks that orchestrate tools, APIs, and structured data using AI-driven logic
- Develop Text-to-SQL and semantic query interfaces for business and analytics users
- Implement traceable, auditable AI pipelines that prioritize explainability and reliability
- Evaluate model/system performance and iterate using systematic benchmarking approaches
Platform Integration & Infrastructure
- Lead the development of scalable, cloud-native AI services on AWS, GCP, or Azure
- Build and maintain CI/CD pipelines for continuously improving AI applications
- Optimize vector search and embedding workflows, leveraging leading vector databases
- Apply best practices in LLMOps including model versioning, telemetry, and automated evaluations
- Contribute to the evolution of AI infrastructure, including observability, compliance, and security
Collaboration & Mentorship
- Collaborate with Product, Design, and Operations teams to shape AI-enabled features across the platform
- Serve as a mentor and technical guide for junior and mid-level engineers
- Promote responsible AI practices and ensure systems meet privacy, compliance, and ethical standards
- Research, evaluate, and implement state-of-the-art techniques in LLMs and AI agents
What you’ll bring:
- A Bachelor's or Master’s in Computer Science, Engineering, or a related field
- 5+ years of software engineering experience, with 3+ years focused on AI/ML systems design
- Proven ability to deliver end-to-end AI solutions in production environments
- Deep proficiency in Python and modern frameworks (e.g., FastAPI, Flask)
- Experience with retrieval systems, embedding models, and foundation model integration
- Familiarity with LLM platforms (e.g., OpenAI, Cohere, Bedrock) and fine-tuning workflows
- Understanding of agent-based systems and external tool orchestration
- Strong foundation in NLP, including structured data interaction (e.g., Text-to-SQL)
- Hands-on experience with LLMOps tools like LangSmith, BentoML, and Weights & Biases
- Fluency in cloud-native deployment (Docker, Kubernetes, serverless)
Technical Skills & Expertise:
- Programming: Expert-level proficiency in Python, including building scalable APIs and services; experience with TypeScript, Go, or Java is a plus
- LLM & AI Frameworks: Advanced experience with Hugging Face Transformers, LangChain, OpenAI, and fine-tuning large language models; deep familiarity with frameworks like PyTorch and TensorFlow
- RAG & Embeddings: Proficient in building and optimizing RAG pipelines using vector databases (e.g., Pinecone, Weaviate, FAISS, or Qdrant) and embedding models
- MLOps & LLMOps: Hands-on experience with MLflow, Airflow, and advanced tools for LLMOps such as BentoML, LangSmith, and Weights & Biases; strong understanding of evaluation, model/version management, and prompt tuning workflows
- Cloud & Infrastructure: Proven experience deploying AI systems in production on AWS, GCP, or Azure (AWS preferred); deep understanding of Kubernetes, Docker, Terraform, and serverless deployment patterns
- System Integration: Skilled in connecting AI systems with real-world data pipelines and services, including structured databases (SQL/NoSQL), event-based systems (Kafka, Pub/Sub), and service interfaces (REST, gRPC)
- Monitoring & Observability: Skilled in using Prometheus, Grafana, Datadog, or similar for monitoring LLM performance, usage metrics, and operational health
- Security & Compliance: Familiar with implementing access control, data privacy, and ethical AI guidelines in cloud-based AI systems
Discover your purpose at work
We’re not employees, we’re Benevity-ites: people from all locations, backgrounds and walks of life who deserve more …
Innovative work. Growth opportunities. Caring co-workers. And a chance to do work that fills us with a sense of purpose.
If the idea of working on tech that helps people do good in the world lights you up ... If you want a career where you’re valued for who you are and challenged to see who you can become …
It’s time to join Benevity. We’re so excited to meet you.
Where we work
At Benevity, we embrace a flexible hybrid approach to where we work that empowers our people in a way that supports great work, strong relationships, and personal well-being. For those located near one of our offices, while there’s no set requirement for in-office time, we do value the moments when coming together in person helps us build connection and collaboration. Whether it’s for onboarding, project work, or a chance to align and bond as a team, we trust our people to make thoughtful decisions about when showing up in person matters most.
Join a company where DEIB isn’t a buzzword
Diversity, equity, inclusion and belonging are part of Benevity’s DNA. You’ll see the impact of our massive investment in DEIB daily — from our well-supported employee resource groups to the exceptional diversity on our leadership and tech teams.
We know that diverse backgrounds, experiences, skills and passions are what move our business and our people forward, so we're committed to creating a culture of belonging with equal opportunities for everyone to shine.
That starts with a fair and accessible hiring process. If you want to feel seen, heard and celebrated, you belong at Benevity.
Candidates with disabilities who may require accommodations throughout the hiring or assessment process are encouraged to reach out to accommodations@benevity.com.