Principal Engineer, Big Data Platform

Bengaluru, India


Company Description

Sandisk understands how people and businesses consume data and we relentlessly innovate to deliver solutions that enable today’s needs and tomorrow’s next big ideas. With a rich history of groundbreaking innovations in Flash and advanced memory technologies, our solutions have become the beating heart of the digital world we’re living in and that we have the power to shape.

Sandisk meets people and businesses at the intersection of their aspirations and the moment, enabling them to keep moving and pushing possibility forward. We do this through the balance of our powerhouse manufacturing capabilities and our industry-leading portfolio of products that are recognized globally for innovation, performance and quality.

Sandisk has two facilities recognized by the World Economic Forum as part of the Global Lighthouse Network for advanced 4IR innovations. These facilities were also recognized as Sustainability Lighthouses for breakthroughs in efficient operations. With our global reach, we ensure the global supply chain has access to the Flash memory it needs to keep our world moving forward.

Job Description

As a hands-on container and infrastructure engineer, you are responsible for designing, implementing, and supporting our global hybrid cloud container platform using Kubernetes, Google Cloud Platform (GCP) Anthos, and AWS.

The candidate should have expertise in building virtualized storage, network, and compute platforms for large-scale, high-availability factory manufacturing workloads. Proven experience in setting up continuous integration pipelines for source code using Bitbucket, Jenkins, Terraform, Ansible, etc., is required, as is the ability to build continuous deployment pipelines using Docker, Artifactory, Spinnaker, etc., with strong advocacy of DevOps principles. The candidate should be passionate about developing and delivering software following modern software-as-a-service (SaaS) design principles. This position requires partnering with various Western Digital manufacturing, engineering, and IT teams to understand factory-critical workloads and design solutions.

The Big Data Platform (BDP) team provides self-service data and application platforms that enable machine learning (ML) capabilities for the engineering and data science community. The ideal candidate should be passionate about working with various cloud tools to meet service level agreements (SLAs), should be versatile enough to experiment with a fail-fast approach when adopting new technologies, and should have natural troubleshooting abilities. Clear, professional communication with internal customers, external vendors, and co-workers is expected.

 

Job Responsibilities

  • Work in a global team to design, implement, and operate our global hybrid cloud container platform (Kubernetes)
  • Define, develop, and maintain customizations/integrations between various Kubernetes OSS tooling (ingress, Helm, operators, observability)
  • Perform application deployment of container applications to Kubernetes environments using CI/CD workflow tooling
  • Manage AWS cloud infrastructure setup for services such as EC2, S3, EKS, AWS Lambda, API Gateway, etc.
  • Document common work tasks to be added to a shared knowledge base
  • Work closely with other business development teams to help them design and deploy their applications

 

Qualifications


Required Qualifications:

• BS/MS in Computer Science, Information Technology, or Computer Information Systems, or equivalent working experience in the IT field
• 10+ years of experience managing enterprise-level infrastructure for storage, memory, network, compute, and virtualization using VMware vSphere
• Proven experience in setting up continuous integration pipelines for source code using Bitbucket, Jenkins, Terraform, and Ansible, and continuous deployment pipelines using Artifactory, Argo CD, and Spinnaker
• Proven experience with and deep understanding of Kubernetes architecture, including the control plane and Kubernetes networking models: CNI (Container Network Interface) plugins (such as Calico and Flannel), service mesh architectures (Istio, Linkerd), and ingress controllers. Expertise in resource allocation, scaling with Pods, fine-tuning cluster performance, and configuring and managing persistent storage in Kubernetes. Strong focus on securing Kubernetes clusters, including implementing best practices for secrets management (using tools like HashiCorp Vault)
• Proven experience with end-to-end observability in Kubernetes environments using monitoring tools such as Prometheus and Grafana, and logging solutions like Splunk
• Strong understanding of network architecture and network virtualization, including bandwidth management, latency troubleshooting, and capacity planning to ensure optimal data flow and resource allocation
• Expertise in deploying and managing AWS services like EMR, Redshift, and RDS, and in scaling AI and ML solutions on platforms like Amazon Bedrock and SageMaker
• Passion for developing and delivering software following modern software-as-a-service (SaaS) design principles using Docker/Kubernetes
• Hands-on Python and Unix shell scripting experience, along with strong advocacy of DevOps principles
• Strong troubleshooting skills and an appetite for learning new technologies


Preferred Qualifications

• Certification in Kubernetes
• Proven experience with, or certification in, a major cloud provider such as AWS or GCP
• Deep understanding of AWS or GCP offerings for cloud computing and generative AI solutions, including Bedrock or Vertex AI services
• Deep understanding of services like EMR, RDS (Aurora), Kafka, and Redshift to support large-scale data processing
• Understanding of MLOps tools for AI and machine learning, such as Dataiku
• Deep familiarity with data service solutions such as Elasticsearch, Kafka, Redis, and NiFi

Additional Information

Sandisk thrives on the power and potential of diversity. As a global company, we believe the most effective way to embrace the diversity of our customers and communities is to mirror it from within. We believe the fusion of various perspectives results in the best outcomes for our employees, our company, our customers, and the world around us. We are committed to an inclusive environment where every individual can thrive through a sense of belonging, respect and contribution.

Sandisk is committed to offering opportunities to applicants with disabilities and ensuring all candidates can successfully navigate our careers website and our hiring process. Please contact us at jobs.accommodations@sandisk.com to advise us of your accommodation request. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.




Region: Asia/Pacific
Country: India
