Data Scientist (Python)

Bangalore, Karnataka, IN, 560071

NetApp

Turn a world of disruption into opportunity with intelligent data infrastructure from NetApp. Realize seamless flexibility—any data, any workload, any environment—with the only enterprise-grade storage service embedded in the world’s biggest...

Job Summary

NetApp’s Active IQ leverages the power of predictive, prescriptive analytics and enables customers and partners to automate data center operations, achieve low TCO, and avoid issues before they occur. Today, 98% of technical issues with NetApp products are automatically identified by Active IQ, with prescriptive steps to avoid the issue – and we want to do even better.

As a member of the Active IQ Analytics practice, you will analyze product telemetry data and customer transaction data, and build analytical models to solve high-value business problems such as identifying cross-sell/upsell opportunities, predicting customer lifecycle events, enhancing storage infrastructure durability, and enabling always-on, always-available storage ecosystems. Every day, half a million active IoT endpoints feed the Active IQ multi-petabyte data lake with structured and unstructured data. That’s the data pool you will use to enhance NetApp’s world-class capabilities.

Job Requirements

  • Experience in data science, statistics, mathematics, or a related field.
  • Proficiency in AI/ML scripting and programming languages such as Python and R.
  • Strong skills in developing SQL queries/scripts and managing relational databases.
  • Strong understanding of, and experience with, algorithms, data structures, programming practices, and software development processes.
  • Knowledge of supervised and unsupervised modeling techniques such as logistic/linear regression, clustering, decision trees, and neural networks/deep networks.
  • Good understanding of big data technologies and platforms like Spark, Hadoop and distributed storage systems for handling large-scale datasets and parallel processing.
  • Experience with ML libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, LangChain, etc.
  • Experience with NoSQL document databases (MongoDB, Cassandra, Cosmos DB, DocumentDB).
  • Experience working with Linux, AWS/Azure/GCP, and Kubernetes (control plane, autoscaling, orchestration, containerization).
  • Ability to think analytically, write and edit technical material, and relate statistical concepts and applications to technical and business users.
  • Willingness to extend beyond core data science work to occasionally perform data wrangling, data preparation, and data transfers.
  • Ability to work independently and collaborate with cross-functional teams, as required.
  • Aptitude for learning new technologies and participating in all phases of the product development cycle: product definition, design, implementation, debugging, testing, and early customer support.

Education

  • Requires a minimum of 3 years of related experience.
  • Bachelor’s degree in engineering/ statistics/ data science (or related). Master’s degree is a plus.

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
