LLM/MLOps Engineer, IT

Digital Hub, SG

ST Engineering

At ST Engineering, we harness technology and innovation to enable a more secure and sustainable world. Discover our innovations for smart cities, defence and security.



JOB RESPONSIBILITIES

As an LLM/MLOps Engineer, you will drive the operational efficiency of advanced machine learning and LLM workflows. This includes setting up and managing on-premise deployments of open-source LLMs and supporting multi-modal capabilities that extend applications beyond text to vision and speech. You will work closely with data scientists, ML engineers, and other teams to ensure smooth model deployment, robust data readiness, and consistent monitoring and optimization.

 

On-Premise Open-Source LLM Hosting and Deployment:

  • Lead the deployment and management of open-source models like Llama, Mistral, and others in on-premise environments.
  • Design and implement infrastructure that supports large-scale LLM hosting, ensuring security, scalability, and resource optimization.
  • Develop best practices for the deployment and lifecycle management of open-source LLMs, from testing to production monitoring.
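To illustrate the lifecycle management described above, here is a minimal sketch of promoting a model from testing through to production. The names (`Stage`, `ModelDeployment`, `promote`) are hypothetical and not tied to any specific MLOps stack:

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """Hypothetical lifecycle stages for an on-premise LLM deployment."""
    TESTING = 1
    STAGING = 2
    PRODUCTION = 3


@dataclass
class ModelDeployment:
    name: str                 # e.g. "llama-3-8b" or "mistral-7b"
    stage: Stage = Stage.TESTING
    history: list = field(default_factory=list)

    def promote(self) -> Stage:
        """Advance one stage at a time, recording each transition."""
        if self.stage is Stage.PRODUCTION:
            raise ValueError("already in production")
        self.history.append(self.stage)
        self.stage = Stage(self.stage.value + 1)
        return self.stage


deployment = ModelDeployment("mistral-7b")
deployment.promote()   # TESTING -> STAGING
deployment.promote()   # STAGING -> PRODUCTION
```

In practice the transitions would gate on evaluation results and monitoring checks rather than being called unconditionally.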

 

Multi-Modal LLM Operations:

  • Manage multi-modal LLMs that process various data types (text, image, audio) to support diverse AI applications.
  • Collaborate with NLP and data science teams to ensure multi-modal models are trained, optimized, and deployed for cross-functional use cases.
  • Implement strategies for efficient data handling, storage, and processing of multi-modal inputs and outputs.
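One common pattern for the multi-modal data handling mentioned above is dispatching each input type to its own preprocessing handler. This is an illustrative sketch only; the registry and handler names are hypothetical:

```python
from typing import Callable, Dict

# Hypothetical registry mapping a modality tag to its preprocessing handler.
_HANDLERS: Dict[str, Callable[[bytes], str]] = {}


def register(modality: str):
    """Decorator that registers a handler for one input modality."""
    def wrap(fn: Callable[[bytes], str]) -> Callable[[bytes], str]:
        _HANDLERS[modality] = fn
        return fn
    return wrap


@register("text")
def handle_text(payload: bytes) -> str:
    return payload.decode("utf-8")


@register("image")
def handle_image(payload: bytes) -> str:
    # A real system would decode pixels; here we just report the size.
    return f"image/{len(payload)} bytes"


def route(modality: str, payload: bytes) -> str:
    """Dispatch a raw payload to the handler for its modality."""
    if modality not in _HANDLERS:
        raise KeyError(f"unsupported modality: {modality}")
    return _HANDLERS[modality](payload)
```

Audio and other modalities would be added the same way, keeping the routing logic unchanged as new data types are supported.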

 

Data Preparation for AI Models:

  • Coordinate with data engineering teams to ensure data quality and readiness for ML and LLM use cases, specifically for open-source and multi-modal models.
  • Perform and oversee data cleaning, transformation, and augmentation processes for optimized training and inference.
  • Implement data versioning and lineage strategies to support model reproducibility and compliance.
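The data versioning point above can be sketched with a content-addressed scheme: hashing a canonical serialization of the dataset yields a version id that changes if and only if the content changes. The function name is hypothetical:

```python
import hashlib
import json


def dataset_version(records: list) -> str:
    """Derive a deterministic version id from dataset content.

    Serializing with sorted keys makes the hash independent of dict
    key ordering, so identical content always yields the same version.
    """
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]


v1 = dataset_version([{"text": "hello", "label": 1}])
v2 = dataset_version([{"label": 1, "text": "hello"}])   # same content, reordered
v3 = dataset_version([{"text": "changed", "label": 1}])
assert v1 == v2 and v1 != v3
```

Tools such as DVC apply the same idea at file level; pinning a model to the dataset version it was trained on is what makes retraining reproducible.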

 

MLOps Pipeline Development and Management:

  • Develop, deploy, and monitor machine learning and LLM pipelines for training, testing, and production.
  • Automate workflows to streamline the development and deployment of ML and LLM models, with a focus on reducing latency and computational overhead.
  • Set up CI/CD pipelines for continuous integration and delivery of ML models.
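The CI/CD behaviour described above reduces to running named stages in order and halting delivery at the first failure. A minimal sketch (the `run_pipeline` helper is hypothetical, standing in for a real CI system such as GitLab CI or Azure DevOps):

```python
from typing import Callable, List, Tuple


def run_pipeline(steps: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run named steps in order, stopping at the first failure.

    Returns the names of the steps that completed, mirroring how a
    CI/CD pipeline halts deployment when an earlier stage fails.
    """
    completed = []
    for name, step in steps:
        if not step():
            break
        completed.append(name)
    return completed


# "deploy" never runs because "test" fails.
run_pipeline([
    ("lint", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
```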

 

Model Monitoring, Optimization, and Maintenance:

  • Establish monitoring and alerting frameworks for model performance and drift, particularly for on-premise open-source models.
  • Implement automated retraining and update processes to maintain model accuracy and relevance.
  • Collaborate with ML engineers to tune models, balancing performance with resource efficiency, especially in multi-modal applications. 
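As a sketch of the drift alerting described above: compare the live mean of a metric against its baseline and flag when the shift exceeds a set number of standard errors. This simple z-test is illustrative only; production monitoring would typically use richer statistics such as PSI or Kolmogorov-Smirnov tests:

```python
import statistics


def drift_alert(baseline: list, live: list, threshold: float = 3.0) -> bool:
    """Flag drift when the live mean departs from the baseline mean
    by more than `threshold` standard errors.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        # Degenerate baseline: any change at all counts as drift.
        return statistics.mean(live) != mu
    standard_error = sigma / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / standard_error
    return z > threshold


drift_alert([1.0, 1.1, 0.9, 1.0], [1.0, 1.05, 0.95])   # live matches baseline
drift_alert([1.0, 1.1, 0.9, 1.0], [2.0, 2.1, 1.9])     # live has shifted
```

A triggered alert would feed the automated retraining process mentioned above.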

 

Collaboration and Documentation:

  • Work closely with cross-functional teams, including data scientists, engineers, and product managers, to align on project requirements and deliver optimized AI solutions.
  • Document all MLOps workflows, procedures, and guidelines for hosting and managing open-source and multi-modal models, enhancing team knowledge sharing and onboarding.

 

Collaborate Across Teams: Work closely with various departments to ensure seamless integration and effective deployment of AI solutions.

 

Leadership and Teamwork: Exhibit strong leadership qualities and work collaboratively with the team to achieve project goals.

 

Solution Selling: Effectively communicate and advocate for the adoption of AI solutions within the organization.

 

Continuous Learning: Stay abreast of the latest developments in Generative AI and related fields.

 

 

JOB REQUIREMENTS

  • 5-10 years of experience in MLOps, Data Engineering, or similar roles with a focus on ML model deployment and operationalization.
  • Strong knowledge of MLOps, LLM deployment, and open-source model hosting, with hands-on experience in models like Llama and Mistral.
  • Expertise in multi-modal LLMs and handling diverse data types (text, image, audio).
  • Proficiency in Python, SQL, and ML/LLM frameworks (e.g., Hugging Face Transformers, TensorFlow, PyTorch).
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure) as well as on-premise infrastructure.
  • Experience with CI/CD tools (Azure DevOps, GitLab, or similar).
  • Strong problem-solving skills and the ability to adapt to a fast-paced, dynamic environment.




Perks/benefits: Career development

Region: Asia/Pacific
Country: Singapore
