Solutions Architect, Retrieval Augmented Generation

UK, Remote

NVIDIA

NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.



NVIDIA’s Worldwide Field Operations (WWFO) team is looking for a Data Science-focused Solutions Architect with expertise in AI system architecture and applied Machine Learning. The ideal candidate has a deep understanding of modern AI infrastructure, is familiar with Generative AI, Large Language Models (LLMs) and Information Retrieval, and knows how to optimize these models (using model compression and distillation) and apply them jointly (using technologies like TRT-LLM, Triton and Kubernetes) in Retrieval Augmented Generation (RAG) workflows. In our Solutions Architecture team, we work with the most exciting computing hardware and software, driving the latest breakthroughs in artificial intelligence! We need individuals who can enable customer adoption of NVIDIA technology and develop lasting relationships with our technology partners, making NVIDIA an integral part of end-user solutions. We are looking for someone who is always thinking about artificial intelligence, who can thrive in a fast-paced, rapidly developing field, and who can coordinate efforts between customers, corporate marketing, industry business development and engineering.
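For readers unfamiliar with the pattern, the sketch below illustrates the basic shape of such a RAG workflow in Python. It is a minimal, hypothetical example, not NVIDIA's implementation: the embed() and generate() stubs stand in for a real embedding model and an LLM served behind something like Triton Inference Server with TensorRT-LLM, and the small in-memory document list stands in for a vector database such as Milvus or Pinecone.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder embedding: a real pipeline would call an embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        vec = rng.standard_normal(384)
        return vec / np.linalg.norm(vec)

    def generate(prompt: str) -> str:
        # Placeholder generation: a real pipeline would call a served LLM.
        return f"[LLM answer conditioned on a {len(prompt)}-character prompt]"

    # Tiny in-memory "vector store"; production systems use Milvus, Pinecone, etc.
    documents = [
        "TensorRT-LLM optimizes transformer inference on NVIDIA GPUs.",
        "Triton Inference Server serves models over HTTP and gRPC.",
        "Kubernetes orchestrates containerized inference services.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def answer(query: str, k: int = 2) -> str:
        q = embed(query)
        scores = doc_vectors @ q             # cosine similarity (unit vectors)
        top = np.argsort(scores)[::-1][:k]   # indices of the k best passages
        context = "\n".join(documents[i] for i in top)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return generate(prompt)

    print(answer("How are LLMs served in production?"))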

A successful candidate will be working with groundbreaking LLMs that are fundamentally changing the way people use technology! You will be the first line of technical expertise between NVIDIA and our customers. Your duties will vary from working on proof-of-concept demonstrations to driving relationships with key executives and managers in order to promote the adoption of RAG pipelines and streamline their deployment to production. Dynamically engaging with developers, scientific researchers, data scientists, IT managers and senior leaders is a significant part of the Solutions Architect role and will give you experience with a range of partners and technologies.

What You’ll Be Doing:

  • Work directly with key customers to understand their technology and provide the best solutions.

  • Develop and demonstrate solutions based on NVIDIA and open-source LLM technology.

  • Perform in-depth analysis and optimization of RAG pipeline components to ensure the best performance on GPU systems.

  • Partner with Engineering, Product and Sales teams to develop and plan the most suitable solutions for customers, and enable the development and growth of product features through customer feedback and proof-of-concept evaluations.

  • Build industry expertise and become a contributor to the integration of NVIDIA technology into Enterprise Computing architectures.

What We Need to See:

  • MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields

  • Excellent verbal and written communication and technical presentation skills in English

  • 6+ years' work or research experience with Python, C++, or other software development

  • Academic and/or work experience in fields related to machine learning, deep learning and/or data science.

  • Work experience deploying and maintaining AI-based systems, and knowledge of modern DevOps/MLOps tools and standards.

  • Understanding of the key libraries used for LLM and RAG development: for NLP model development (e.g. NeMo, DeepSpeed, HuggingFace), for deployment (e.g. TensorRT-LLM, Triton Inference Server), and for Information Retrieval (e.g. RAPIDS, Milvus, Pinecone, Elasticsearch).

  • You are excited to work with multiple levels and teams across organizations (Engineering, Product, Sales and Marketing) and are capable of working in a constantly evolving environment without losing focus.

  • Ability to multitask in a fast-paced environment; driven, with strong analytical and problem-solving skills.

  • Strong time-management and organization skills for coordinating multiple initiatives, priorities and implementations of new technology and products into very sophisticated projects

  • You are a self-starter with a growth mindset and a passion for continuous learning and sharing findings across the team

Ways to Stand Out from The Crowd:

  • Experience working with large transformer-based architectures for NLP, CV, ASR or other domains.

  • Experience optimizing DNN architectures using tools such as TRT/TRT-LLM or model compression techniques.

  • Understanding of AI/HPC systems: data center design, high-speed interconnects (InfiniBand), cluster storage and scheduling, with related design and/or management experience.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.



Category: Architecture Jobs

Tags: Architecture ASR Computer Science Deep Learning DevOps Engineering Generative AI GPU HPC HuggingFace InfiniBand Kubernetes LLMs Machine Learning Mathematics ML infrastructure MLOps NLP Open Source PhD Physics Pinecone Pipelines Python RAG Research TensorRT

Perks/benefits: Career development Startup environment

Regions: Remote/Anywhere Europe
Country: United Kingdom
