ML Model Evaluation Engineer
London, UK
About Us
Symbolica is an AI research lab pioneering the application of category theory to enable logical reasoning in machines.
We’re a well-resourced, nimble team of experts on a mission to bridge the gap between theoretical mathematics and cutting-edge technologies, creating symbolic reasoning models that think like humans – precise, logical, and interpretable. While others focus on scaling data-hungry neural networks, we’re building AI that understands the structures of thought, not just patterns in data.
Our approach combines rigorous research with fast-paced, results-driven execution. We’re reimagining the very foundations of intelligence while simultaneously developing product-focused machine learning models in a tight feedback loop, where research fuels application.
Founded in 2022, we’ve raised over $30M from leading Silicon Valley investors, including Khosla Ventures, General Catalyst, Abstract Ventures, and Day One Ventures, to push the boundaries of applying formal mathematics and logic to machine learning.
Our vision is to create AI systems that transform industries, empowering machines to solve humanity’s most complex challenges with precision and insight. Join us to redefine the future of AI by turning groundbreaking ideas into reality.
About the Role
As an ML Model Evaluation Engineer, you’ll play a critical role in helping us measure progress, design rigorous experiments, and surface meaningful signals as we build models with structured reasoning capabilities. You’ll work alongside researchers and ML engineers to design benchmarks, run large-scale evaluations, and analyse model behaviour — ensuring we’re focused on real-world performance, not just proxy metrics.
This is a role for someone who thrives on experimentation, iteration, and tight feedback loops — someone who loves discovering what works (and what doesn’t) and can design systems to test hypotheses at scale.
📍 This is an onsite role based in our London office (66 City Rd).
Your Focus
- Design and implement robust experiments to evaluate specific model capabilities
- Build and maintain high-frequency evaluation pipelines using PyTorch or JAX
- Engineer benchmark datasets — collecting, filtering, and decontaminating data for meaningful evals
- Create evaluation protocols that measure the right capabilities and avoid metric gaming
- Research and implement strong baselines from literature or current frontier models
- Scale experiments and data analysis to match the demands of large model training runs
- Analyse outputs from eval runs, identify bottlenecks, and present findings clearly to the team
- Collaborate with researchers and engineers to refine evaluation design and keep feedback loops tight
- Contribute to the development of a general-purpose evaluation suite integrated into infra and tooling
About You
- Proven hands-on experience in machine learning, ideally with a focus on experimental design or evaluation
- Strong engineering skills in Python and PyTorch (or JAX)
- Deep understanding of training and evaluating large-scale deep learning models
- A scientific mindset — you know how to design a clean experiment and what makes a result trustworthy
- Comfortable building infrastructure for benchmark automation and eval pipelines
- Excellent analytical and data-mining skills; comfortable summarising experimental insights to inform team direction
- Familiarity with recent literature and capability evaluations in the frontier AI space
- Collaborative and thoughtful communicator — excited to work closely with both researchers and engineers
- Bonus: experience building benchmark suites, red-teaming evals, or integrating eval infra into full-stack ML pipelines
What We Offer
- Competitive salary and early-stage equity package
- High trust, low bureaucracy environment focused on real impact
- Opportunity to build foundational research tools and shape model development direction
- Work closely with top-notch researchers and ML engineers pushing the edge of machine reasoning
We are able to sponsor a Skilled Worker visa for qualified candidates applying to this position. This specific role exceeds the minimum salary threshold set by the UK government for Skilled Worker visa sponsorship. Please note that English language proficiency at B2 level or higher is required for this role.
Symbolica is an equal opportunities employer. We celebrate diversity and are committed to creating an inclusive environment for all employees, regardless of race, gender, age, religion, disability, or sexual orientation.