Software Engineer (Big Data)

Bangalore, Karnataka, IN


NetApp

NetApp offers the only enterprise-grade storage service embedded into the major public cloud providers, turning disruption into opportunity with intelligent data infrastructure for any data, any workload, and any environment.



Job Summary

As a Software Engineer in NetApp India’s R&D division, you will be responsible for the design, development, and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ.
The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make that happen. We call this “actionable intelligence”.
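To give a flavor of the work described above: the platform's core job is aggregating huge volumes of telemetry into summaries that AI/ML models can act on. Below is a minimal, self-contained Python sketch of that kind of windowed aggregation; the record schema and names are purely illustrative (the real Active IQ schema is not public), and in production this logic would run on Spark rather than plain Python.

```python
from collections import defaultdict
from dataclasses import dataclass


# Hypothetical telemetry record; the actual Active IQ schema is not public.
@dataclass
class DataPoint:
    system_id: str
    metric: str
    value: float
    ts: int  # epoch seconds


def aggregate_by_window(points, window_s=3600):
    """Group data points into (system, metric, window-start) buckets and
    reduce each bucket to a mean -- a toy version of the windowed
    aggregation a distributed pipeline would perform at scale."""
    buckets = defaultdict(list)
    for p in points:
        window_start = p.ts - p.ts % window_s  # align to window boundary
        buckets[(p.system_id, p.metric, window_start)].append(p.value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}
```

In a real pipeline the same group-by-key-and-window shape appears as a Spark aggregation over a Kafka stream; the sketch only shows the data-flow idea.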

Job Requirements

•    Design and build our Big Data Platform with an understanding of scale, performance, and fault tolerance.
•    Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
•    Identify the right tools to deliver product features by performing research, building POCs, and engaging with open-source communities.
•    Work on technologies related to NoSQL, SQL, and in-memory databases.
•    Conduct code reviews to ensure code quality, consistency, and adherence to best practices.
 
Technical Skills
•    Hands-on Big Data development experience is required.
•    Demonstrated up-to-date expertise in Data Engineering and complex data pipeline development.
•    Ability to design, develop, implement, and tune distributed data processing pipelines that handle large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
•    Awareness of Data Governance (Data Quality, Metadata Management, Security, etc.).
•    Experience with one or more of Python/Java/Scala.
•    Knowledge and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage.
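The fault-tolerance expectation in the list above usually means pipelines with at-least-once delivery: commit progress only after a record is handled, so a crash replays rather than drops data. Here is a minimal stdlib sketch of that checkpointing pattern; all names are illustrative and a `dict` stands in for durable offset storage such as Kafka's committed offsets.

```python
def process_with_checkpoint(records, checkpoint, handler):
    """Process records with at-least-once semantics, resuming from the
    last committed offset.

    `checkpoint` is a dict standing in for durable offset storage. After
    a crash, processing restarts at checkpoint["offset"], so a record
    may be handled twice; handlers must therefore be idempotent.
    """
    start = checkpoint.get("offset", 0)
    for offset in range(start, len(records)):
        handler(records[offset])
        checkpoint["offset"] = offset + 1  # commit only after success
```

Committing after the handler runs (rather than before) is what makes this at-least-once instead of at-most-once; the trade-off is possible duplicates on restart, which idempotent handlers absorb.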

Education

•  A minimum of 5 years of experience is required. 5-8 years of experience is preferred. 
•  A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required.





Region: Asia/Pacific
Country: India
