AI Runtime Engineer
United States, Canada, Europe
EnCharge AI
AI Compute from the Edge-to-Cloud for Every Business. Transformative technology for AI computation, breaking records in efficiency and sustainability to enable state-of-the-art models uninhibited by power, space, and cost constraints.

EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today’s best-in-class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power-, energy-, and space-constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.
About the Role
EnCharge AI is seeking an AI Runtime Engineer to develop and optimize the execution stack for our next-generation AI accelerator. In this role, you will work on low-latency, high-performance runtime software that enables efficient execution of deep learning models on specialized hardware. You will collaborate with hardware, compiler, and AI framework teams to deliver optimized AI inference and training performance across cloud and edge environments.
Responsibilities
- Develop and optimize the AI runtime software stack for executing deep learning workloads on AI accelerators.
- Implement task scheduling, memory management, and kernel execution strategies for efficient computation.
- Optimize data movement between host and device using PCIe, DMA, and shared memory.
- Design and implement high-performance APIs for AI inference frameworks such as OpenVINO, ONNX Runtime, and vLLM.
- Work on graph execution optimizations, including kernel fusion, pipelining, tensor tiling, and caching.
- Integrate runtime components with AI compilers (LLVM, MLIR, XLA, TVM) for optimized execution.
- Ensure scalability and reliability of the AI runtime for cloud-based and edge AI deployments.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.
- 3+ years of experience developing low-level runtime software for AI accelerators, GPUs, or HPC systems.
- Strong proficiency in C/C++ and low-level systems programming.
- Deep understanding of task scheduling, concurrency, and memory hierarchy.
- Experience with hardware-aware optimizations and dataflow architectures.
- Familiarity with deep learning execution frameworks (ONNX Runtime, TensorRT, TVM, OpenVINO).
- Experience with low-latency, high-throughput workload execution for AI models.
- Strong debugging and profiling skills for optimizing AI execution performance.
- Exposure to AI model deployment pipelines (Triton, TensorFlow Serving).
EnCharge AI is an equal employment opportunity employer in the United States.