MLOps Engineer

Singapore

PatSnap

Patsnap empowers IP and R&D teams with advanced AI to get better answers and make faster decisions, increasing IP productivity by 75% while reducing R&D wastage by 25%.

About the Role:
We are seeking a passionate MLOps Engineer to join our team and drive the deployment, monitoring, and optimization of machine learning models in production. This role will be key in ensuring the reliability, scalability, and efficiency of our ML infrastructure while supporting the development and release of AI-driven solutions. If you have a strong background in cloud technologies, automation, and ML model deployment, this is an excellent opportunity to work on cutting-edge AI applications.

Key Responsibilities

  • Design, build, and maintain scalable ML model deployment pipelines for real-time and batch inference.
  • Manage and optimize cloud-based ML infrastructure, ensuring high availability and cost efficiency.
  • Implement monitoring, logging, and alerting systems for ML models in production to track performance, data drift, and anomalies.
  • Automate model training, evaluation, and deployment processes using CI/CD pipelines.
  • Ensure compliance with MLOps best practices, including model versioning, reproducibility, and governance.
  • Collaborate with data scientists, ML engineers, and software developers to streamline the transition of models from development to production.
  • Optimize model serving infrastructure using Kubernetes, Docker, and serverless technologies.
  • Improve data pipelines for feature engineering, data preprocessing, and real-time data streaming.
  • Research and implement tools for scalable AI development, such as Retrieval-Augmented Generation (RAG) and agent-based applications.

Qualifications

  • Hands-on experience with MLOps platforms (e.g., MLflow, Kubeflow, TFX, SageMaker).
  • Strong expertise in cloud services (AWS, GCP, Azure, or other cloud platforms).
  • Proficiency in containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
  • Experience in building CI/CD pipelines for machine learning models.
  • Solid programming skills in Python, Go, or shell scripting for automation.
  • Familiarity with data versioning and model monitoring tools (DVC, Evidently AI, Prometheus, Grafana).
  • Understanding of feature stores and efficient data management for ML workflows.
  • Strong problem-solving skills with a proactive, self-motivated attitude.
  • Excellent collaboration and communication skills to work in a cross-functional team.
  • Fluent in Mandarin for effective communication within a multilingual team environment.

Why Join Us

  • Work with cutting-edge MLOps and AI deployment technologies in a fast-growing industry.
  • Be part of a dynamic and innovative team focused on AI and cloud solutions.
  • Gain exposure to end-to-end machine learning workflows, from data processing to model deployment.
  • Opportunities for professional growth in cloud computing, automation, and AI infrastructure.

Perks/benefits: Career development

Region: Asia/Pacific
Country: Singapore