Cloud Native Engineer, ARK Large Model Platform

Singapore, Singapore

TikTok will prioritize applicants who have a current right to work in Singapore and do not require TikTok's sponsorship of a visa.

TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. TikTok has global offices including Los Angeles, New York, London, Paris, Berlin, Dubai, Singapore, Jakarta, Seoul and Tokyo.

Why Join Us
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible.
Together, we inspire creativity and bring joy - a mission we all believe in and work towards achieving every day.
To us, every challenge, no matter how difficult, is an opportunity to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve.
Join us.

About the Team
The Applied Machine Learning (AML) - Enterprise team provides machine learning platform products on VolcanoEngine. These include a cloud-native resource scheduling system that intelligently orchestrates tasks and jobs to minimise the cost of every experiment and maximise resource utilisation, rich modelling tools such as customised machine learning tasks and a web IDE, and multi-framework, high-performance model inference services.

In 2021, we released this machine learning infrastructure to the public through VolcanoEngine, giving more enterprises lower compute costs, lower barriers to machine learning engineering, and deeper development of their AI capabilities.

Responsibilities
You will be responsible for developing the Ark Large Model Platform on Volcano Engine: researching systematic solutions for implementing and applying large models across industries, striving to reduce the IT cost of large model applications, and meeting users' ever-growing demand for intelligent interaction to improve how people live and communicate in the future.

- Maintain a large-scale AI cluster and develop state-of-the-art machine learning platforms to support a diverse group of stakeholders.
- Tackle extremely challenging problems, including but not limited to: delivering highly efficient training and inference for large language models, running highly effective distributed training jobs across clusters with over 10,000 nodes and GPUs, and building highly reliable ML systems with unparalleled scalability.
- Work across various aspects of LLMOps (Large Language Model Operations), including resource scheduling, task orchestration, model training, model inference, model management, dataset management, and workflow orchestration.
- Investigate cutting-edge technologies related to large language models, AI, and machine learning at large, such as state-of-the-art distributed training systems with heterogeneous hardware, GPU utilization optimization, and the latest in hardware architecture.
- Employ a variety of technological and mathematical analyses to enhance cluster efficiency and performance.