AI Platform Engineer

Bengaluru, IN


About SKF

SKF started its operations in India in 1923. Today, SKF provides industry-leading automotive and industrial engineered solutions through its five technology-centric platforms: bearings and units, seals, mechatronics, lubrication solutions and services. Over the years, the company has evolved from a pioneering ball bearing manufacturer into a knowledge-driven engineering company helping customers achieve sustainable and competitive business excellence.

 

SKF's solutions provide sustainable ways for companies across the automotive and industrial sectors to achieve breakthroughs in friction reduction, energy efficiency, and equipment longevity and reliability. With a strong commitment to research-based innovation, SKF India offers customized value-added solutions that integrate all its five technology platforms.

To know more, please visit: www.skf.com/in

About Technology Development

The Technology Development (TD) team for ISEA focuses on customer product development and engineering, innovation for the region, rollout of new technologies, testing, failure investigation, scale-up from proof of concept (POC) to series production, and portfolio management.

 

TD Competencies
  • Engineering & Research Centre
  • Product Development & Engineering – This division delivers conceptual and detailed designs to support BOH/ETO activities based on customer specifications. Technology trends such as digitization of workflows, e-Aviation, sensorization, product localization, design automation, Agile, DFX, and model-based designs (MBD) have increased operational efficiency and application productivity. What our customers gain from this is efficient digital data exchange, traceability and flexibility in design changes, a reduced carbon footprint, and higher-performance products.
  • Testing: Group Testing Services is a trusted partner in design, process, and supplier validation. The testing team ensures a strong focus on customer requirements, quality, and operational efficiency, supporting SKF’s processes faster by applying global test standards adapted to local, customer-specific requirements.
  • Global Metallurgy & Chemistry Laboratory (GMC)
  • Future Factory (Manufacturing 4.0) – Working on world-class manufacturing: Lean, Green, Digital.
  • Manufacturing Process & Development – We support factories in process development (heat treatment), machine building, and advanced manufacturing (heat-treatment simulations, additive manufacturing, vision inspection, etc.). We are building innovative solutions for machines (measurement, assembly, clean manufacturing) and focusing on scaling technologies such as 3D printing and camera-based inspection systems with automation.
  • Connected Technologies – Develops and sustains new products for connectivity. We work on sensor technology and data integration, which helps customers with predictive maintenance of their assets.
SKF Purpose Statement

Together, we re-imagine rotation for a better tomorrow.

By creating intelligent and clean solutions for people and the planet


JOB DESCRIPTION

 

Job Title:        AI Platform Engineer

Reports To:    Manager AI

Role Type:      Individual Contributor

Location:       Bangalore

Role Purpose: As an AI Platform Engineer in our AI Center of Excellence, you will design and implement the technical foundation that supports our AI solutions at scale. You will be instrumental in creating robust, secure, and streamlined infrastructure and workflows that enable the rapid development, deployment, and monitoring of AI-driven applications. Working in a cross-functional environment, you will ensure our AI initiatives have the stability, scalability, and performance they need to deliver substantial business impact.

This role is a key position in our new AI Center of Excellence, providing the backbone for AI platforms across the organization. The AI CoE operates within Technology Development but serves globally, helping various units advance their AI maturity and innovation.

 

Key responsibilities and day-to-day tasks
 

  • Cloud Infrastructure & IaC
    • Design and manage cloud environments (Azure, Databricks, Microsoft Fabric, Snowflake) using infrastructure-as-code practices to ensure consistency, reliability, and scalability.
    • Set up and maintain Azure services (Azure Data Factory, Azure Functions, etc.) to support end-to-end AI workflows.
  • MLOps & Best Practices
    • Develop and implement MLOps pipelines that handle model training, deployment, and monitoring at scale (a minimal, hypothetical sketch follows this list).
    • Collaborate with data scientists to streamline experimentation, ensure reproducibility, and enable continuous improvement.
  • Data Architecture & Integration
    • Establish robust data pipelines and architectures in collaboration with data engineering teams.
    • Integrate and manage diverse data sources, including Databricks, Snowflake, and Microsoft Fabric, to ensure efficient data flow for AI applications (see the data-pipeline sketch after this list).
  • Automation & Monitoring
    • Implement CI/CD processes using tools like GitHub Actions to automate deployments and reduce operational overhead.
    • Set up comprehensive monitoring and alerting for AI services to guarantee reliability, security, and performance in production.
  • Cross-Functional Collaboration
    • Work closely with AI Engineers, software developers, DevOps, and data architects to deliver stable, scalable AI solutions.
    • Share best practices with teams across the organization to foster a culture of high-quality AI development.
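
To make the MLOps responsibility above more concrete, the following is a minimal, hypothetical sketch of one pipeline step: tracking a training run and registering the resulting model so that automated deployment and monitoring can reference it by name. It assumes an MLflow tracking server with a model registry (both bundled with Databricks) and scikit-learn; the experiment path, metric, and registered model name are illustrative placeholders, not a description of this role's actual stack.

# Minimal, illustrative MLOps sketch: track a training run and register the
# resulting model so a deployment pipeline can pick it up by name.
# Assumes MLflow (with a model registry) and scikit-learn are available;
# all names below ("churn-classifier", etc.) are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("/Shared/ai-platform/churn-classifier")  # hypothetical path

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)

    # Registering the model gives downstream CI/CD a stable reference
    # for automated deployment and monitoring.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # hypothetical name
    )

In practice, a step like this would typically run as a scheduled Databricks job, with CI/CD (for example GitHub Actions, as referenced above) promoting registered model versions from staging to production.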

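The data-integration bullet above can likewise be illustrated with a small, hypothetical Databricks pipeline: ingest raw files from cloud storage, apply light cleansing, and persist a Delta table that downstream AI workloads can consume. The storage path, column name, and table name are placeholders, and the sketch assumes a Databricks cluster (or any environment with PySpark and Delta Lake available).

# Minimal, illustrative data-pipeline sketch: ingest raw CSV files, apply
# light cleansing, and persist a Delta table for downstream AI workloads.
# Assumes PySpark with Delta Lake (e.g. a Databricks cluster); the storage
# path, column name, and table name are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks this returns the cluster's pre-configured session.
spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sensors/"  # placeholder

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

cleaned = (
    df.dropDuplicates()
      .filter(F.col("sensor_id").isNotNull())        # placeholder column name
      .withColumn("ingested_at", F.current_timestamp())
)

# Delta provides ACID writes, schema enforcement, and time travel, which keeps
# training data reproducible for the models built on top of it.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("ai_platform.sensor_readings")  # placeholder schema.table
)
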
We Expect You to Have

  • Proven experience in Databricks for large-scale data processing and machine learning tasks (must have).
  • Proven experience in Azure for cloud-based AI solutions (must have).
  • AWS experience is a plus but not mandatory.
  • Hands-on MLOps expertise, including setting up ML pipelines, automated model deployment, and monitoring.
  • Proficiency in scripting and programming, using languages like Python, Bash, or PowerShell to automate tasks (an illustrative monitoring sketch follows this list).
  • Experience with CI/CD pipelines and version control (GitHub, GitHub Actions) for automated testing and deployments.
  • Excellent communication skills to effectively collaborate with cross-functional teams and translate complex infrastructure needs into actionable solutions.
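
As an illustration of the scripting and monitoring expectations above, here is a small, hypothetical Python automation: probe a model-serving endpoint, compare its availability and latency against a budget, and emit an alert. The endpoint URL, payload, and thresholds are placeholders; in production this kind of check would more likely be delegated to Azure Monitor or a similar managed service.

# Minimal, illustrative monitoring sketch: probe a model-serving endpoint,
# check availability and latency against a budget, and raise an alert.
# The endpoint URL, payload, thresholds, and alerting path are hypothetical.
import json
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO)

ENDPOINT_URL = "https://example-scoring-endpoint.internal/score"  # placeholder
LATENCY_BUDGET_S = 1.0

def probe_endpoint(url: str, payload: dict) -> tuple[bool, float]:
    """Send one scoring request and return (healthy, latency_seconds)."""
    data = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    start = time.monotonic()
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            healthy = response.status == 200
    except Exception as exc:  # network errors, timeouts, non-2xx responses
        logging.error("Probe failed: %s", exc)
        return False, time.monotonic() - start
    return healthy, time.monotonic() - start

if __name__ == "__main__":
    healthy, latency = probe_endpoint(ENDPOINT_URL, {"inputs": [[0.1, 0.2, 0.3]]})
    if not healthy or latency > LATENCY_BUDGET_S:
        # In a real pipeline this would page on-call or post to a webhook.
        logging.warning("ALERT: healthy=%s latency=%.2fs", healthy, latency)
    else:
        logging.info("OK: latency=%.2fs", latency)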

 

Experience with Microsoft Fabric, Bedrock, or other next-generation data and AI platforms is a plus.

 

Candidate Profile:
  • Education: Bachelor’s degree in computer science, engineering, or a related field (or equivalent work experience)
  • Experience: 7+ years overall, with a minimum of 5 years in data architecture and MLOps