Software Engineer
Bangalore, Karnataka, IN
NetApp
Turn a world of disruption into opportunity with intelligent data infrastructure from NetApp. Realize seamless flexibility—any data, any workload, any environment—with the only enterprise-grade storage service embedded in the world’s biggest...

Job Summary
As an SDE in NetApp's India R&D division, you will be responsible for the development, validation, implementation, and operation of big data engineering software across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ.
The Active IQ Platform/Datahub processes 10 trillion data points per month, with around 25 PB of data across its data sources. The platform applies advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make that happen. We call this “actionable intelligence,” and it leads to higher availability, improved security, and simplified administration.
Your focus will be on data engineering projects; as a Data Engineer, you will be responsible for the development and operations of the microservices in Active IQ's big data platform.
This position requires an individual who is creative, team-oriented, technology-savvy, driven to produce results, and able to work across teams.
Your Responsibilities
- Build big data platforms and solutions, primarily based on open-source technologies, that are fault-tolerant and scalable.
- Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
- Deploy and monitor products on both cloud and on-premises platforms.
- Work on technologies related to NoSQL, SQL, and in-memory platforms.
- Develop and implement best-in-class monitoring processes to ensure data applications meet their SLAs.
Our Ideal Candidate
- You have a deep interest and passion for technology.
- You love writing and owning codes and enjoy working with people who will keep challenging you at every stage.
- You have strong problem-solving, analytical, and decision-making abilities, along with excellent communication and interpersonal skills.
- You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities.
Education & Experience
- 0-2 years of experience with Java and Python, writing data pipelines and data processing layers.
- Strong grasp of CS fundamentals, Unix shell scripting, and database concepts.
- Good understanding of data processing pipeline implementation, Kafka, Spark, NoSQL databases (especially MongoDB), and SQL.
- Familiarity with GenAI, Agile concepts, and continuous integration/continuous delivery (CI/CD).
- Working knowledge of Linux environments with containers (Docker and Kubernetes) is a plus.