Principal Performance and Scale Engineer - AI Engineering Tools

Raanana, Israel

Red Hat

Red Hat is the world’s leading provider of enterprise open source solutions, including high-performing Linux, cloud, container, and Kubernetes technologies.

Job Summary

The Red Hat Performance and Scale Engineering team is looking for a Principal Performance Engineer to join PSAP (Performance and Scale for AI Platforms) and lead the “AI Engineering Tools” Performance and Scale efforts within the team.

Red Hat’s AI Engineering Tools organization is building an open‑source, end‑to‑end platform for developing generative‑AI solutions on RHEL and OpenShift. From high‑volume data processing pipelines and Retrieval‑Augmented Generation (RAG) services to agentic/MCP orchestration and a production‑ready Llama Stack, each scrum team ships a critical layer of our stack. As the Principal Performance and Scale Engineer, you’ll be the technical leader who ensures every layer performs and scales flawlessly in the hands of developers and enterprise customers.

This role needs a seasoned engineer who thinks creatively, adapts to rapid change, and is willing to learn and apply new technologies. You will be joining a vibrant open source culture and helping promote performance and innovation in this Red Hat engineering team. The broader mission of the Performance and Scale team is to establish performance and scale leadership across the Red Hat product and cloud services portfolio. The scope includes component-level, system, and solution analysis and targeted enhancements. The team collaborates with engineering, product management, product marketing, and customer support, as well as Red Hat’s hardware and software ecosystem partners.

At Red Hat, our commitment to open source innovation extends beyond our products - it’s embedded in how we work and grow. Red Hatters embrace change – especially in our fast-moving technological landscape – and have a strong growth mindset. That's why we encourage our teams to proactively, thoughtfully, and ethically use AI to simplify their workflows, cut complexity, and boost efficiency. This empowers our associates to focus on higher-impact work, creating smarter, more innovative solutions that solve our customers' most pressing challenges.

What you will do:

  • Define measurable KPIs / SLOs for throughput, latency, footprint, and cost across all AI Engineering Tools components.

  • Own and iterate on the performance roadmap—from micro‑benchmarks to multi‑cluster scale tests.

  • Champion a “performance‑first” engineering culture.

  • Formulate test plans and execute benchmarks to characterize performance, drive improvements, and detect issues through data analysis and visualization.

  • Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking tasks.

  • Work closely with cross-functional engineering teams to identify and address performance issues, for example:

    • RAG: profile vector DBs (PGVector, Milvus) and embedding models, tune ANN indexes and cache paths.

    • Agentic/MCP: stress‑test agent orchestration graphs, reduce tail latency of multi‑step chains.

    • Llama Stack: measure performance and capacity.

  • Partner with DevOps to bake performance gates into GitHub Actions/OpenShift Pipelines.

  • Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.

  • Triage field and customer escalations related to performance; distill findings into upstream issues and product backlog items.

  • Publish results, recommendations, and best practices through internal reports, presentations, external blogs, and official documentation.

  • Represent the team at internal and external conferences, presenting key findings and strategies.

What you will bring:

  • 8+ years in performance engineering or systems‑level software

  • Basic understanding of AI and LLMs

  • Fluency in Python (data & ML), strong Bash/Linux skills

  • Exceptional communication skills - able to translate raw performance numbers into customer value and executive narratives

  • Commitment to open‑source values

Nice to Haves:

  • Master’s or PhD in Computer Science, AI, or a related field

  • History of upstream contributions and community leadership

  • Hands‑on expertise with Kubernetes/OpenShift

  • Familiarity with performance observability stacks such as perf/eBPF‑tools, Nsight Systems, PyTorch Profiler, among others

  • Practical experience building agentic GenAI applications with orchestration frameworks such as LangChain, LangGraph, MCP

#LI-OA1

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.


Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.


Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
