Lead Research Scientist

London - Hybrid

Faculty

We help our clients use cutting-edge AI to improve the performance of their business.


About Faculty


At Faculty, we transform organisational performance through safe, impactful and human-centric AI.

With a decade of experience, we provide over 300 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme.

Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI.

Should you join us, you’ll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.


About the role

As a Lead Research Scientist at Faculty, you will lead scientific research, and other researchers, in AI safety, advancing scientific understanding of the field. You will contribute both to external publications and to Faculty’s commercial ambition to build safe AI systems.

This is a great opportunity to join a small, high-agency team of machine learning researchers and practitioners applying data science and machine learning to real-world business problems.


What you’ll be doing

Your role will evolve alongside business needs, but you can expect your key responsibilities to include:

Research Leadership:

  • Lead the AI safety team’s research agenda, setting priorities and ensuring alignment with Faculty’s long-term goals.

  • Conduct and oversee the development of cutting-edge AI safety research, with a focus on large language models and other safety-critical AI systems.

  • Publish high-impact research in leading conferences and journals (e.g., NeurIPS, ACL, ICML, ICLR, AAAI).

  • Support Faculty’s positioning as a leader in AI safety through thought leadership and stakeholder engagement.

Research Agenda Development:

  • Shape our research agenda by identifying impactful research opportunities and balancing scientific and practical priorities.

  • Interface with the wider business to ensure alignment between the R&D team’s research efforts and the company’s long-term goals, with a specific focus on AI safety and on commercial projects in that space.

Team Management and Mentorship:

  • Build and lead a growing team of researchers, fostering a collaborative and innovative culture across a wide range of AI safety-relevant research topics.

  • Provide mentorship and technical guidance to researchers across diverse AI safety topics.

Technical Contributions:

  • Lead hands-on contributions to technical research. 

  • Collaborate on delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with a focus on government and commercial partners.




Who we are looking for

  • A proven track record of high-impact AI research, evidenced by top-tier academic publications (ideally at top machine learning or NLP conferences such as ACL, NeurIPS, ICML, ICLR, or AAAI) or equivalent experience (e.g. within model providers’ labs).

  • Deep domain knowledge of language models and AI safety, with the ability to contribute well-informed views on the differential value and tractability of different parts of the traditional AI safety research agenda, or of other areas of machine learning (e.g. explainability).

  • Practical experience with machine learning, with a focus on areas such as robustness, explainability, or uncertainty estimation.

  • Advanced programming and mathematical skills, strong Python, and experience with the standard Python data science stack (NumPy, pandas, scikit-learn, etc.).

  • The ability to conduct and oversee complex technical research projects.

  • A passion for leading and developing technical teams, with a caring attitude towards the personal and professional development of others.

  • Excellent verbal and written communication skills.


Any of the following would be a bonus, but none are required:

  • Commercial experience applying AI safety principles in practical or high-stakes contexts.

  • Background in red-teaming, evaluations, or safety testing for government or industry applications.

  • Academic research experience in a STEM or related subject (a PhD is great, but certainly not necessary).

What we can offer you

The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day.

Faculty is the professional challenge of a lifetime. You’ll be surrounded by an impressive group of brilliant minds working to achieve our collective goals.

Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you’ll learn something new from everyone you meet.
