Research Engineer, ML Systems (All Industry Levels)
Menlo Park or New York City
Character.AI
Meet AIs that feel alive. Chat with anyone, anywhere, anytime. Experience the power of super-intelligent chatbots that hear you, understand you, and remember you.

As a Research Engineer on the ML Systems team, you'll work on cutting-edge ML training and inference systems, optimize the performance and efficiency of our GPU clusters, and develop new technologies that fine-tune leading consumer AI models with a data flywheel and serve LLMs at 20K+ QPS in production. Your work will directly contribute to our groundbreaking advancements in AI, helping shape an era where technology is not just a tool but a companion in our daily lives. At Character.AI, your talent, creativity, and expertise will not just be valued; they will be the catalyst for change in an AI-driven future.
About the role
The ML Systems team is responsible for the research and deployment of systems that efficiently utilize GPUs for AI-enabled products.
As a research engineer, you will work across teams and our technical stack to improve our training performance and inference runtime. You will get to shape the conversational experience of millions of users per day.
Example projects:
Write efficient Triton kernels and tune them for our specific models and hardware
Develop prefix-aware routing algorithms to improve serving cache hit rate
Train and distill LLMs to improve latency while preserving accuracy and engagement
Build an efficient and scalable distributed RLHF stack powering our model innovations
Develop systems for efficient multimodal (image and video generation) model training & inference
Job Requirements
"All Industry Levels": at least PhD (or equivalent) research experience
Write clear and clean production system code
Strong understanding of modern machine learning techniques (reinforcement learning, transformers, etc.)
Track record of exceptional research or creative ML systems projects
Comfortable writing model development code (PyTorch) for either training or inference
Nice to Have
Experience training large models in a distributed setting using PyTorch Distributed, DeepSpeed, or Megatron
Experience working with GPUs & collectives (training, serving, debugging) and writing kernels (Triton, CUDA, CUTLASS)
Experience with LLM inference systems and literature such as vLLM and FlashAttention
Familiarity with ML deployment and orchestration (Kubernetes, Docker, cloud)
Publications in relevant academic journals or conferences in the field of machine learning and systems
About Character.AI
Founded in 2021, Character is a leading AI company offering personalized experiences through customizable AI 'Characters.' As one of the most widely used AI platforms worldwide, Character enables users to interact with AI tailored to their unique needs and preferences.
In just two years, we achieved unicorn status and were named Google Play's AI App of the Year – a testament to our groundbreaking technology and vision.
Ready to shape the future of Consumer AI? 🚀
At Character, we value diversity and welcome applicants from all backgrounds. As an equal opportunity employer, we do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, or disability. Your unique perspectives are vital to our success.
Tags: CUDA Docker GPU Kubernetes LLMs Machine Learning ML models Model training PhD PyTorch Reinforcement Learning Research RLHF Transformers vLLM
Perks/benefits: Conferences