Senior Core AI Engineer

Bangalore, India

Calix



Calix provides the cloud, software platforms, systems and services required for communications service providers to simplify their businesses, excite their subscribers and grow their value.

Our Products Team is growing, and we're looking for an innovative and experienced Senior Core AI Engineer to develop and deploy end-to-end generative AI applications. In this role, you will build advanced AI solutions from model development through deployment in production environments. You will build scalable, high-performance AI systems for applications including natural language processing (NLP), predictive analytics, and knowledge management. Your work will bridge research and production, turning cutting-edge AI research into impactful solutions for real-world problems. You will be a key player in driving development, working alongside machine learning engineers, AI researchers, and data engineers to bring these models into production.

This position is based in Bangalore, India.

Key Responsibilities:

  • Design and develop production software components.
  • Develop efficient data ingestion, feature engineering, and data pipelines at production scale.
  • Collaborate with data engineers to preprocess and manage large datasets, ensuring that data pipelines are efficient and optimized for model training.
  • Automate collection and visualization of data, model, and operational metrics.
  • Implement and manage MLOps pipelines to automate model deployment, monitoring, and maintenance. Deploy models in scalable production environments using cloud platforms like AWS, GCP, or Azure.
  • Work with cross-functional teams, including software engineers and data scientists, to design system architectures that integrate AI models into existing or new platforms.
  • Extend, harden, and scale data processing and ML components.
  • Perform data ingestion, data processing and feature engineering tasks.
  • Integrate AI features into other applications and platforms.
  • Build and deploy microservices for AI features.
  • Operate and administer production databases: SQL, NoSQL, vector, and graph.
  • Troubleshoot and support production pipelines.
  • Work with the ops team on end-to-end deployment of data and ML pipelines.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science or a related field.
  • 5+ years of hands-on experience in AI/ML engineering, building and deploying machine learning models in production environments.
  • Proven track record in developing end-to-end AI applications across different domains, such as NLP, computer vision, or predictive modeling.
  • Solid foundation in data structures and algorithms.
  • Proficient in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
  • Proficiency in Python and at least one other language, such as Java, Go, C/C++, R, or SQL.
  • Experience with SQL and Pandas, and exposure to a variety of SQL and NoSQL databases.
  • Solid understanding of data engineering and experience working with large datasets and building ETL pipelines.
  • Experience automating unit, system, and production testing.
  • Experience with data processing: ETL, feature engineering, and data cleaning.
  • Proficiency developing in Linux environments with Git.
  • Experience with cloud platforms (AWS, GCP, Azure) and deploying models in containerized environments using Docker and Kubernetes.
  • Experience developing microservices and REST APIs.
  • Tools: Linux, Git, Jupyter, IDEs; ML frameworks: TensorFlow, PyTorch, Keras, scikit-learn, Kubeflow, MLflow.
  • Good communication skills.

Preferred Skills:

  • Experience with multimodal AI systems (text, image, video).
  • Knowledge of DevOps principles and CI/CD pipelines for automated testing and deployment.
  • Familiarity with natural language understanding (NLU), automatic speech recognition (ASR), and dialog systems.
  • Contributions to open-source AI projects or publications in AI/ML conferences and journals.
  • GenAI: RAG pipeline components; LLM pre-training, alignment, and fine-tuning; different types of LLMs and their applications.

Location:

  • India (flexible hybrid work model: work from the Bangalore office 20 days per quarter).


