Sr. Service Delivery Engineer - AWS AI

Kolkata - Ensim

Ingram Micro

Ingram Micro is redefining distribution to maximize value and efficiencies, becoming one of the first in distribution to transform legacy processes.

It's fun to work in a company where people truly BELIEVE in what they're doing!

Job Description: 

Ingram Micro touches 80% of the technology you use every day with our focus on Technology Solutions, Cloud, and Commerce and Lifecycle Solutions. With $50 billion in revenue, we have become the world’s largest technology distributor with operations in 64 countries and more than 35,000 associates.

Job Description: AI & Gen AI Sr. ML/Professional Engineer  

Role: AI/Gen AI Sr. ML/Professional Engineer  

Experience:

  • Total IT Experience: 4 to 8 years
  • AI Experience: 3 to 5 years
  • Gen AI Experience: 1 to 2 years

Key Responsibilities:

  • Design and Develop AI Solutions: Develop scalable and reliable AI solutions on AWS or any cloud platform.
    • AI Services Expertise: Proficiency with cloud and AI services, including EC2, Auto Scaling, S3, RDS, DynamoDB, SageMaker, ML Studio, AI Studio, Vertex AI, and IBM Watson.
    • AI Tools Management: Implement and manage AI tools, techniques, and frameworks.
    • Data Handling and Analysis: Expertise in RAG, knowledge bases / vector DBs, embeddings, indexing, knowledge graphs, cosine similarity, and search (a minimal retrieval sketch follows this list).
    • Gen AI Solutions Development: Design, develop, train/fine-tune (including transfer learning), validate, deploy, manage/monitor, and optimize Gen AI solutions.
    • Model Architectures: Understanding of the Transformer architecture, as well as generative architectures such as GANs and VAEs.
    • LLM Fine-Tuning and Prompt Engineering: Expertise in fine-tuning large language models (LLMs) for specific tasks, using prompt engineering techniques such as few-shot learning and prompt chaining, and optimizing prompts.
    • Multimodal AI Techniques: Knowledge of multimodal AI techniques, including vision-language models (e.g., CLIP, DALL-E), speech-language models, and multimodal fusion techniques.
    • Responsible AI Practices: Understanding of responsible AI practices, techniques for detecting and mitigating bias in LLMs, and strategies for promoting fairness and transparency.
    • LLM Evaluation Metrics: Familiarity with evaluation metrics specific to LLMs, such as perplexity, BLEU score, and measures of coherence, consistency, and factuality (a perplexity sketch follows this list).
    • Production Deployment of LLMs: Experience deploying LLMs in production environments, including containerization, scaling, and monitoring techniques for LLM models and pipelines. Familiarity with NVIDIA NIM, OpenAI, Hugging Face, etc.
    • Security and Privacy: Knowledge of security and privacy considerations when working with LLMs, such as data privacy, model extraction attacks, and techniques for secure model deployment.
    • Continual Learning for LLMs: Understanding of continual learning techniques for LLMs, such as rehearsal, replay, and parameter isolation, to enable efficient adaptation to new data and tasks.
    • Model Creation and Evaluation: Creating models from training and test datasets, evaluating them with appropriate metrics, and performing hyperparameter tuning (see the scikit-learn sketch after this list).
    • Machine Learning: Proficient in scikit-learn for supervised and unsupervised ML, including NLP, recommender systems, anomaly detection, and time series.
    • Custom AI Solutions: Experience with custom object detection, speech recognition, image classification, and recognition.
    • Deep Learning: Implement deep learning workloads on NVIDIA or other GPU-based hardware; build, train, and test SLMs, LLMs, and other models.
  • LLM/Model Utilization: Understand how LLMs and other models are selected and used.
    • Select and evaluate LLMs based on use case requirements, cost, performance, and efficiency.
    • Build and apply evaluation and accuracy metrics to ensure optimal model performance.
  • Data Engineering and Management:
    • Design and manage data pipelines for AI applications using AWS data services such as S3, Redshift, and Kinesis.
    • Experience with Python, PyTorch, LangChain, Streamlit, and TensorFlow.
  • DevOps/LLMOps Skills:
    • Automate and streamline deployment pipelines for AI applications.
    • Implement configuration management and infrastructure as code (IaC).
  • Security and Compliance:
    • Ensure security and compliance of AI solutions on AWS using IAM, KMS, WAF, and other AWS security services.
    • Implement VPC security best practices and maintain security compliance and governance on AWS.
  • Development and Deployment:
    • Develop, train, and deploy machine learning models.
    • Implement AI cognitive services, embedding/vector DBs, and Gen AI enterprise integration (e.g., OpenAI, Hugging Face).
  • Documentation:
    • Lead AI projects using Agile methodology and Gen AI program management.
    • Prepare comprehensive documentation, including blueprints, explainable AI, and metrics.
  • LLMOps/DevOps:
    • CI/CD automation, configuration management, and cloud solutions deployment.
    • Experience with Lambda, API Gateway, ECS, and monitoring tools like CloudWatch.
  • Data Engineering:
    • Design data pipelines, storage solutions, and database management.
    • Utilize analytics tools like Glue, Athena, and QuickSight.
    • Use of LLMs in data engineering.
  • Security and Compliance:
    • Proficiency in IAM, KMS, and implementing security compliance.

  • AI and Gen AI:
    • Expertise in machine learning and deep learning models, including Transformers and GPT.
    • Experience with AI cognitive services and embedding/vector DB.
    • Gen AI Enterprise Integration (OpenAI, Hugging Face, SAP, DFSC, etc.)
    • Gen AI Use Cases: Conversational AI, summarization, extraction, document processing, task automation, enterprise automation, RPA, etc., across domains such as Sales, Services, Customer Operations, Manufacturing, and Logistics.
    • Organization Change Management (OCM)
    • Set up an AI CoE and AI Factory
      • Processes, Templates, Formats, Org Structure 
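
As a concrete illustration of the retrieval techniques named under Data Handling and Analysis (RAG, vector DBs, embeddings, cosine similarity), here is a minimal NumPy sketch of cosine-similarity search; the collection size, embedding dimension, and random vectors are placeholders standing in for real embeddings produced by an embedding model and stored in a vector database.

```python
# Minimal sketch: rank documents by cosine similarity between a query
# embedding and document embeddings (placeholder random vectors; real
# embeddings would come from an embedding model / vector DB).
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 384))   # 1000 docs, 384-dim vectors (illustrative)
query_embedding = rng.normal(size=384)

def cosine_similarity(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    """Cosine similarity between each row of `matrix` and `vector`."""
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

scores = cosine_similarity(doc_embeddings, query_embedding)
top_k = np.argsort(scores)[::-1][:5]            # indices of the 5 most similar documents
print(top_k, scores[top_k])
```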
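
The LLM Evaluation Metrics bullet lists perplexity among the expected metrics. Below is a minimal sketch of computing perplexity for a causal LLM with Hugging Face Transformers; the model name ("gpt2") and the sample sentence are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch: perplexity of a causal LM on a short text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Ingram Micro touches 80% of the technology you use every day."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels supplied, the model returns the mean cross-entropy loss
    # over predicted tokens; perplexity is exp(loss).
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"perplexity: {torch.exp(outputs.loss).item():.2f}")
```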
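
For the Model Creation and Evaluation bullet (train/test datasets, metric-based evaluation, hyperparameter tuning), here is a minimal scikit-learn sketch of that loop; the dataset, estimator, and parameter grid are illustrative choices only.

```python
# Minimal sketch: train/test split, cross-validated hyperparameter tuning,
# and held-out evaluation with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {"n_estimators": [100, 200], "max_depth": [None, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

y_pred = search.best_estimator_.predict(X_test)
print("best params:", search.best_params_)
print("test accuracy:", accuracy_score(y_test, y_pred))
```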

Languages:

  • Proficiency in Python, PyTorch, TensorFlow, LangChain, Streamlit/Chainlit, and SQL, with working knowledge of Java.

Certifications (any of the following, or any AI certification):

  • AWS Machine Learning Specialty
  • AWS Data Engineer Associate
  • AWS Certified AI Practitioner

Desired Skills:

  • Strong engineering, coding/development skills.
  • Up-to-date knowledge of the latest developments in the Gen AI world.

This role is crucial for driving Ingram Micro's AI and Gen AI initiatives, ensuring high-quality, scalable solutions that meet both internal and external requirements.

Ingram Micro is committed to creating a diverse environment and is proud to be an equal opportunity employer. We are dedicated to fostering an inclusive and accessible environment where all associates are valued, respected, and supported. We are highly driven by our tenets of success: Results, Integrity, Imagination, Responsibility, Courage, and Talent.

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
