Senior ML Ops Engineer

Bangalore, India

Calix

Calix is a leading provider of cloud and software platforms, systems, and services for internet service providers.

Calix is seeking a highly skilled Senior ML Ops Engineer to join our cutting-edge AI/ML team. In this role, you will be responsible for building, scaling, and maintaining the infrastructure that powers our machine learning and generative AI applications. You will work closely with data scientists, ML engineers, and software developers to ensure our ML/AI systems are robust, efficient, and production-ready.

Key Responsibilities:

  • Design, implement, and maintain scalable infrastructure for ML and GenAI applications.
  • Deploy, operate, and troubleshoot production ML pipelines and generative AI services.
  • Build and optimize CI/CD pipelines for ML model deployment and serving.
  • Scale compute resources across CPU/GPU/TPU/NPU architectures to meet performance requirements.
  • Implement container orchestration with Kubernetes for ML workloads.
  • Architect and optimize cloud resources on GCP for ML training and inference.
  • Set up and maintain runtime frameworks and job management systems (Airflow, Kubeflow, MLflow).
  • Establish monitoring, logging, and alerting for ML system observability.
  • Collaborate with data scientists and ML engineers to translate models into production systems.
  • Optimize system performance and resource utilization for cost efficiency.
  • Develop and enforce MLOps best practices across the organization.
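
For context, here is a minimal, illustrative sketch (not Calix code) of the kind of production ML pipeline this role would build and operate, written as an Airflow DAG; the DAG id, task names, and callables are hypothetical and assume Airflow 2.x:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    # Placeholder training step; in practice this would launch a
    # containerized training job (e.g. on GKE or Vertex AI).
    print("training model...")


def validate_model(**context):
    # Placeholder validation gate before the model is promoted to serving.
    print("validating model...")


with DAG(
    dag_id="ml_training_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    train >> validate  # validation runs only after training succeeds
```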

Qualifications:

  • Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
  • 5+ years of overall software engineering experience.
  • 3+ years of focused experience in MLOps or similar ML infrastructure roles.
  • Strong experience with Docker container services and Kubernetes orchestration.
  • Demonstrated expertise in cloud infrastructure management, preferably on GCP (AWS or Azure experience also valued).
  • Proficiency with workflow management and ML runtime frameworks such as Airflow, Kubeflow, and MLflow.
  • Strong CI/CD expertise with experience implementing automated testing and deployment pipelines.
  • Experience with scaling distributed compute architectures utilizing various accelerators (CPU/GPU/TPU/NPU).
  • Solid understanding of system performance optimization techniques.
  • Experience implementing comprehensive observability solutions for complex systems.
  • Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack).
  • Proficient in at least two of the following: shell scripting, Python, Go, C/C++.
  • Familiarity with ML frameworks such as PyTorch and ML platforms like SageMaker or Vertex AI.
  • Excellent problem-solving skills and the ability to work independently.
  • Strong communication skills and ability to work effectively in cross-functional teams.
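
As an illustrative sketch of the observability work these qualifications describe (not Calix code), a model-serving process might expose Prometheus metrics with the prometheus_client library; the metric names, labels, and port below are assumptions:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics a model-serving process could expose for Prometheus to scrape.
PREDICTIONS = Counter(
    "model_predictions_total", "Total predictions served", ["model_version"]
)
LATENCY = Histogram(
    "model_inference_latency_seconds", "Inference latency in seconds"
)


def handle_request(model_version: str = "v1") -> None:
    with LATENCY.time():                        # record how long inference takes
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference
    PREDICTIONS.labels(model_version=model_version).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```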
