Lead AI/ML Engineer (P4368)

Cincinnati, OH; Chicago, IL

84.51°

At 84.51° we use unmatched first-party retail data and analytics powered by cutting-edge science to fuel a more customer-centric journey.



84.51° Overview:

84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.

Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.

Join us at 84.51°!

__________________________________________________________

 

Lead AI/ML Engineer (AI Enablement) (P4368)

Cincinnati / Chicago

SUMMARY

The Lead AI/ML Engineer role requires a unique mix of software engineering and AI skills to create, deploy, and maintain computationally efficient proprietary SLM, LLM, and embedding model implementations, serving infrastructure, and end-to-end solutions. This role focuses specifically on model serving and operations within our foundation models team. A strong understanding of distributed systems, model serving architectures, GPU cluster management, and MLOps best practices that scale across enterprise workloads and large-scale model deployments is critical to success.

RESPONSIBILITIES

  • Lead large-scale foundation model projects that can span months, focusing on model serving, inference optimization, and production deployment
  • Foster a collaborative and innovative team environment, encouraging professional growth and development among junior team members in foundation model technologies
  • Leverage known patterns, frameworks, and tools for automating and deploying foundation model serving solutions using Triton, vLLM, and other inference engines
  • Develop new tools, processes, and operational capabilities to monitor and analyze foundation model performance, latency, throughput, and resource utilization
  • Work with researchers and ML engineers to optimize and scale foundation model serving using best practices in distributed systems, GPU orchestration, and MLOps
  • Abstract foundation model serving solutions as robust APIs, microservices, or components that can be reused across the business with high availability and low latency
  • Build, steward, and maintain production-grade foundation model serving infrastructure (robust, reliable, maintainable, observable, scalable, performant) to manage and serve LLMs, SLMs, and embedding models at scale
  • Research state-of-the-art foundation model serving technologies, inference optimization techniques, and distributed GPU architectures to identify new opportunities for implementation across the enterprise
  • Design and implement distributed GPU clusters for model training and inference workloads across GCP and Azure cloud environments
  • Understand business requirements and trade off latency, cost, throughput, and model accuracy to maximize value; translate research into production-ready serving solutions
  • Reduce time to deployment, automate foundation model CI/CD pipelines, implement continuous monitoring of model serving metrics, and establish feedback loops for model performance
  • Responsible for code reviews, infrastructure reviews, and production readiness assessments for foundation model deployments
  • Apply appropriate documentation, version control, infrastructure as code practices, and other internal communication practices across channels
  • Make time-sensitive decisions and solve urgent production issues in foundation model serving environments without escalation

QUALIFICATIONS, SKILLS, AND EXPERIENCE

Required:

  • Bachelor's degree or higher in Machine Learning, Computer Science, Computer Engineering, Applied Statistics, or related field
  • 5+ years of experience developing cloud-based software solutions with understanding of design for scalability, performance, and reliability in distributed systems
  • 2+ years of hands-on experience with foundation models (LLMs, SLMs, embedding models) in production environments; 2+ years of experience in model serving and inference optimization preferred
  • Deep knowledge of foundation model serving frameworks, particularly Triton Inference Server and vLLM
  • Working experience with PyTorch models and optimization for inference (quantization, pruning, ONNX, TensorRT)
  • Knowledge of distributed GPU computing, CUDA programming, and GPU memory optimization techniques
  • Hands-on experience with GCP and Azure cloud platforms, including GPU instances, managed services, and networking
  • Experience with Databricks for large-scale data processing and model training workflows
  • Knowledge of vector databases and embedding model serving
  • Strong experience with open-source LLM fine-tuning frameworks (LoRA, QLoRA, full fine-tuning)
  • Experience building large-scale model serving solutions that have been successfully delivered to production with enterprise SLAs
  • Excellent communication skills, particularly on technical topics related to distributed systems and model serving architectures
  • Kubernetes & Docker experience with focus on GPU workloads and model serving deployments
  • CI/CD Pipeline experience with focus on ML model deployment; GitHub Actions experience preferred
  • Terraform experience for infrastructure as code, particularly for GPU clusters and cloud ML infrastructure
  • Strong skills in Python, with experience in async programming and high-performance computing
  • API development experience with focus on high-throughput, low-latency model serving endpoints
  • Experience with monitoring and observability tools for distributed systems (Prometheus, Grafana, DataDog, etc.)
  • Knowledge of end-to-end machine learning pipelines and MLOps tools (model registry, experiment tracking, feature stores, model monitoring) in the context of foundation models

Preferred:

  • Experience with distributed training frameworks such as DeepSpeed, FSDP, FairScale
  • Knowledge of model compression techniques and hardware acceleration
  • Experience with multi-cloud deployments and hybrid cloud architectures
  • Familiarity with emerging foundation model architectures and serving optimizations


 

Pay Transparency and Benefits

  • The stated salary range represents the entire span applicable across all geographic markets, from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
  • Below is a list of some of the benefits we offer our associates:
    • Health: Medical, with competitive plan designs and support for self-care, wellness and mental health; Dental, with in-network and out-of-network benefits; Vision, with in-network and out-of-network benefits.
    • Wealth: 401(k) with Roth option and matching contribution. Health Savings Account with matching contribution (requires participation in qualifying medical plan). AD&D and supplemental insurance options to help ensure additional protection for you.
    • Happiness: Hybrid work environment. Paid time off with flexibility to meet your life needs, including 5 weeks of vacation time, 7 health and wellness days, 3 floating holidays, as well as 6 company-paid holidays per year. Paid leave for maternity, paternity and family care instances.

 

Pay Range: $91,000–$218,750 USD
