ML Systems Engineer - Research
Jerusalem
Lightricks
Who we are
Lightricks, an AI-first company, is revolutionizing how visual content is created. With a mission to bridge the gap between imagination and creation, Lightricks is dedicated to bringing cutting-edge technology to the creative and business spaces.
Our advanced AI photo and video generation models, including our open-source LTXV model, power our apps and platforms: Facetune, Photoleap, Videoleap, and LTX Studio. These give creators and brands access to the latest research breakthroughs, with fine-grained control over their creative output. Our influencer marketing platform, Popular Pays, enables creators to monetize their work and offers brands opportunities to scale their content through tailored creator partnerships.
What you will be doing
As an ML Systems Engineer with a focus on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of Lightricks’ machine learning inference systems. You’ll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You’ll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines that are optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering Lightricks' next-generation AI products.
This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.
Responsibilities
- Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
- Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
- Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
- Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
- Contribute to ComfyUI and internal infrastructure to improve usability and performance of model execution flows
- Investigate performance bottlenecks at all levels of the stack, from Python down to kernel-level execution
- Navigate and enhance a large, complex, production-grade codebase
- Drive innovation in low-level system design to support future ML workloads
Your Skills and Experience
- 5+ years of experience in high-performance software engineering
- Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
- Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
- Proven track record of optimizing compute-intensive systems
- Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
- Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
- Collaborative and team-oriented mindset, with experience working across functional teams
Preferred Requirements
- Experience with low-level profiling and debugging tools (e.g., Nsight, perf, gdb, VTune)
- Familiarity with machine learning frameworks (e.g., PyTorch, TensorRT, ONNX Runtime)
- Contributions to performance-critical open-source or ML infrastructure projects
- Experience with cloud infrastructure and GPU scheduling at scale