Machine Learning Ops Engineer - AI
London, England, United Kingdom
Opus 2
Leading legal software and services for litigation & arbitration teams, arbitral institutions, barristers' chambers, and more. Discover Opus 2.
As Opus 2 continues to embed AI into our platform, we need robust, scalable data systems that power intelligent workflows and support advanced model behaviours. We're looking for an MLOps Engineer to build and maintain the infrastructure that powers our AI systems. You will be the bridge between our data science and engineering teams, ensuring that our machine learning models are deployed, monitored, and scaled efficiently and reliably. You'll be responsible for the entire lifecycle of our ML models in production, from building automated deployment pipelines to ensuring their performance and stability. This role is ideal for a hands-on engineer who is passionate about building robust, scalable, and automated systems for machine learning, particularly for cutting-edge LLM-powered applications.
What you'll be doing
- Design, build, and maintain our MLOps infrastructure, establishing best practices for CI/CD for machine learning, including model testing, versioning, and deployment.
- Develop and manage scalable and automated pipelines for training, evaluating, and deploying machine learning models, with a specific focus on LLM-based systems.
- Implement robust monitoring and logging for models in production to track performance, drift, and data quality, ensuring system reliability and uptime.
- Collaborate with Data Scientists to containerize and productionize models and algorithms, including those involving RAG and Graph RAG approaches.
- Manage and optimize our cloud infrastructure for ML workloads on platforms like Amazon Bedrock or similar, focusing on performance, cost-effectiveness, and scalability.
- Automate the provisioning of ML infrastructure using Infrastructure as Code (IaC) principles and tools.
- Work closely with product and engineering teams to integrate ML models into our production environment and ensure seamless operation within the broader product architecture.
- Own the operational aspects of the AI lifecycle, from model deployment and A/B testing to incident response and continuous improvement of production systems.
- Contribute to our AI strategy and roadmap by providing expertise on the operational feasibility and scalability of proposed AI features.
- Collaborate closely with Principal Data Scientists and Principal Engineers to ensure that the MLOps framework supports the full scope of AI workflows and model interaction layers.
What excites us?
We've moved past experimentation. We have live AI features and a strong pipeline of customers excited to gain access to more and better AI-powered workflows. Our focus is on delivering real, valuable AI-powered features to customers and doing it responsibly. You'll be part of a team that owns the entire lifecycle of these systems, and your role is critical to ensuring they are not just innovative but also stable, scalable, and performant in the hands of our users.
Requirements
What we're looking for in you
- You are a practical and automation-driven engineer. You think in terms of reliability, scalability, and efficiency.
- You have hands-on experience building and managing CI/CD pipelines for machine learning.
- You're comfortable writing production-quality code and reviewing PRs, and you're dedicated to delivering a reliable and observable production environment.
- You are passionate about MLOps and have a proven track record of implementing MLOps best practices in a production setting.
- You're curious about the unique operational challenges of LLMs and want to build robust systems to support them.
Qualifications
- Experience with model lifecycle management and experiment tracking.
- Ability to reason about and implement infrastructure for complex AI systems, including those leveraging vector stores and graph databases.
- Proven ability to ensure the performance and reliability of systems over time.
- 3+ years of experience in an MLOps, DevOps, or Software Engineering role with a focus on machine learning infrastructure.
- Proficiency in Python, with experience in building and maintaining infrastructure and automation, not just analyses.
- Experience working in Java or TypeScript environments is beneficial.
- Deep experience with at least one major cloud provider (AWS, GCP, Azure) and their ML services (e.g., SageMaker, Vertex AI). Experience with Amazon Bedrock is a significant plus.
- Strong familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation).
- Experience in deploying and managing LLM-powered features in production environments.
- Bonus: experience with monitoring tools (e.g., Prometheus, Grafana), agent orchestration, or legaltech domain knowledge.
Benefits
Working for Opus 2
Opus 2 is a global leader in legal software and services and a trusted partner of the world's leading legal teams. All our achievements are underpinned by our unique culture, where our people are our most valuable asset. Working at Opus 2, you'll receive:
- Contributory pension plan.
- 26 days' annual holiday, hybrid working, and length-of-service entitlement.
- Health Insurance.
- Loyalty Share Scheme.
- Enhanced Maternity and Paternity.
- Employee Assistance Programme.
- Electric Vehicle Salary Sacrifice.
- Cycle to Work Scheme.
- Calm and Mindfulness sessions.
- A day of leave to volunteer for charity or to provide dependant cover.
- Accessible and modern office space and regular company social events.