Senior Software Engineer, AI/ML

Orem, Utah, United States

Fishbowl is an industry-leading supplier of manufacturing and warehouse management software for small, medium, and enterprise-sized businesses across 40+ verticals. While our mission is to deliver amazing software, service, training, and support that help our customers grow and scale their business operations, our passion is helping people. Whether you are new to owning and operating a business or have been at it for 20+ years, Fishbowl provides simplicity and flow for business owners and makes it easier for them to focus on what they love most: running their business.

To support this mission, we recently partnered with Diversis Capital to invest in Fishbowl’s growth and market scale. We are well on our way to developing exciting new cloud-based products that will continue to surprise and delight our existing and future customers. We also have exciting plans to expand internationally and are focused on building a globally oriented team that will allow us to scale our operations and capture future market growth.

The Role

Fishbowl is seeking a Sr. Platform Software Engineer (AI/ML) to join our engineering team focused on building next-generation, AI-powered platform capabilities for our Inventory, Warehouse Management, and Manufacturing SaaS applications. This senior position offers a unique opportunity to work at the intersection of applied machine learning, intelligent planning systems, and scalable cloud architecture.

You will take the lead on technical initiatives involving LLM orchestration, machine learning, reinforcement learning, and decision systems that optimize resource allocation and inventory control across complex supply chain environments. Your work will be central to advancing our platform's intelligence, enabling customers to operate more efficiently, make predictive decisions, and adapt to rapidly changing conditions.

A core part of this role involves engineering context-efficient LLM systems that deliver AI-augmented features to improve user efficiency and accuracy. You'll also lead the implementation of Model Context Protocol (MCP) servers that enable AI agents to integrate with our platform APIs and take actions on behalf of the user. Additionally, you'll design and optimize Retrieval-Augmented Generation (RAG) systems that allow agents to pull precise, relevant information into prompts without bloating token usage.
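
A minimal sketch of what an MCP tool server in this style could look like is shown below, assuming the official MCP Python SDK (FastMCP); the endpoint, tool name, and environment variables are illustrative placeholders, not Fishbowl's actual platform API.

```python
# Minimal MCP tool server sketch using the official MCP Python SDK (FastMCP).
# The platform endpoint, auth scheme, and tool shape are illustrative only.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-tools")

# Hypothetical platform API location and token, supplied via environment.
API_BASE = os.environ.get("PLATFORM_API_BASE", "https://api.example.com")
API_TOKEN = os.environ.get("PLATFORM_API_TOKEN", "")


@mcp.tool()
def get_item_stock(sku: str) -> dict:
    """Return on-hand quantity for a SKU so an agent can reason about inventory."""
    resp = httpx.get(
        f"{API_BASE}/inventory/{sku}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10.0,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Serve the tool over stdio so an MCP-capable agent can discover and call it.
    mcp.run()
```

Exposing platform actions as typed, documented tools in this way is what lets a user's agent discover and invoke them without per-feature glue code.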

This is a high-impact role ideal for an AI engineer with SaaS platform experience who wants to own architecture, influence strategic direction, and drive innovation across a multi-product ecosystem. You will report directly to the Chief Architect and collaborate cross-functionally with product, cloud infrastructure, and development teams to build scalable, observable, and reusable AI systems in production.

Remote or hybrid work available (Orem, UT HQ). We emphasize outcomes over geography.

Responsibilities

  •  LLM-Oriented System Design: Lead architecture of intelligent agent infrastructure that integrates LLMs into real-time SaaS workflows across manufacturing and inventory control.
  •  Model Context Protocol (MCP): Design and implement MCP servers on top of our product APIs so that users' agents and application features can take action.
  •  State-Driven Prompting (MDP-Style): Emulate Markov Decision Process patterns to reduce prompt bloat and improve agent determinism, updating agent state explicitly before generating each prompt (see the state-update sketch after this list).
  •  Retrieval-Augmented Generation (RAG): Build and optimize RAG pipelines using vector search and semantic indexing to enrich prompts with highly relevant external knowledge while minimizing token overhead (a minimal retrieval sketch also follows this list).
  •  Orchestration and Prompt Control: Develop context managers, orchestrators, and prompt pipelines using tools like LangGraph, LangChain, or custom orchestration layers.
  •  Agentic Workflow Architecture: Design and implement advanced agentic workflows using LLMs and orchestration platforms like N8N, enabling intelligent multi-step automation across system boundaries.
  •  LLM + N8N Integration Patterns: Define and maintain standards for integrating LLM calls, MCP state management, and external system actions into reusable workflow nodes in N8N or equivalent orchestration tooling.
  •  Workflow Extensibility and Governance: Establish secure, scalable patterns for authoring, deploying, and monitoring LLM-based agents within orchestration platforms—supporting reusability across product lines and customers.
  •  Inventory and Resource Optimization: Design, deliver, and maintain reinforcement learning or optimization-driven microservices that support production planning, order allocation, and supply-side control logic.
  •  Cloud-Native AI Deployment: Deploy ML models and LLM services using AWS technologies such as Bedrock, EKS/ECS, Lambda, Step Functions, and SageMaker with best-in-class cost and performance practices.
  •  Evaluation & Observability: Build telemetry and monitoring layers for context size, hallucination rate, fallback behavior, and performance of AI-enhanced workflows.
  •  Platform Enablement: Lead enablement of AI features across other product teams through documentation, internal libraries, and education on MCP usage and context management.
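
As a rough illustration of the state-driven prompting item above, the sketch below keeps an explicit, bounded agent state and renders each prompt from that state alone rather than from an ever-growing transcript; the state fields and helper names are hypothetical.

```python
# Sketch of MDP-style prompting: agent state is updated explicitly after each
# observation, and the prompt is rendered only from that compact state instead
# of an ever-growing chat transcript. All field names here are hypothetical.
from dataclasses import dataclass, field, replace


@dataclass(frozen=True)
class AgentState:
    goal: str
    open_orders: int = 0
    last_action: str = "none"
    notes: tuple[str, ...] = field(default_factory=tuple)


def transition(state: AgentState, observation: dict) -> AgentState:
    """Pure state update (the 'transition'), applied before each prompt is built."""
    return replace(
        state,
        open_orders=observation.get("open_orders", state.open_orders),
        last_action=observation.get("last_action", state.last_action),
        notes=state.notes[-3:] + (observation.get("summary", ""),),  # bounded memory
    )


def render_prompt(state: AgentState) -> str:
    """Render a compact prompt from state only, keeping token usage roughly constant."""
    notes = "; ".join(n for n in state.notes if n) or "none"
    return (
        f"Goal: {state.goal}\n"
        f"Open orders: {state.open_orders}\n"
        f"Last action: {state.last_action}\n"
        f"Recent notes: {notes}\n"
        "Decide the single next action."
    )


if __name__ == "__main__":
    s = AgentState(goal="Keep SKU-42 stock above its reorder point")
    s = transition(s, {"open_orders": 3, "last_action": "checked_stock", "summary": "stock=12"})
    print(render_prompt(s))
```

Because the prompt is a pure function of a small state object, prompt size stays roughly constant and agent behavior is easier to reproduce and test.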
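
Similarly, the retrieval side of the RAG item above can be sketched as ranking indexed chunks by cosine similarity and keeping only the top matches that fit a token budget; the hashed bag-of-words embed() below is a toy stand-in for whatever embedding model a real pipeline would use.

```python
# Sketch of the retrieval step in a RAG pipeline: embed the query, rank indexed
# chunks by cosine similarity, and keep only the top matches that fit a token
# budget before splicing them into the prompt. embed() is a toy stand-in.
import hashlib

import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed bag-of-words vector standing in for a real embedding model."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[int(hashlib.sha256(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v


def top_chunks(query: str, chunks: list[str], k: int = 3, token_budget: int = 200) -> list[str]:
    """Return the most similar chunks whose combined rough token count fits the budget."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: float(embed(c) @ q), reverse=True)
    selected, used = [], 0
    for chunk in ranked[:k]:
        cost = len(chunk.split())  # crude token estimate
        if used + cost > token_budget:
            break
        selected.append(chunk)
        used += cost
    return selected


if __name__ == "__main__":
    docs = [
        "Reorder point for SKU-42 is 10 units with a 5-day lead time.",
        "Warehouse B handles oversized items only.",
        "Work orders are released automatically when stock falls below the reorder point.",
    ]
    print("\n".join(top_chunks("When should SKU-42 be reordered?", docs)))
```

In production the same shape holds, with embed() replaced by a real embedding model and the chunk list by a vector store.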

Requirements

  •  AI/ML Depth: 5+ years of experience applying machine learning to production systems, including at least 2 years building optimization, RL, or planning algorithms.
  •  LLM Experience: Minimum 1 year working with large language models in production, including prompt orchestration, RAG, fine-tuning, or LangChain/LangGraph.
  •  MDP & RL Mastery: Grounding in Markov decision processes, policy/value learning, and frameworks such as Ray RLlib, OpenAI Gym, or Stable-Baselines3.
  •  AI/LLM Tooling Familiarity: You have hands-on experience with emerging AI and LLM tooling. You explore new libraries, IDEs, orchestration frameworks, and developer tools (e.g., LangGraph, Semantic Kernel, DSPy, Hugging Face) to improve workflows, build faster prototypes, and stay at the forefront of applied AI.
  •  MCP Implementation: Experience implementing or consuming Model Context Protocol servers or equivalent architecture for decision-serving and action-enabling APIs.
  •  Cloud-Based Development: Proficient in building and deploying on AWS using Terraform, Docker, and modern CI/CD pipelines.
  •  Programming Expertise: Proficient in Python (preferred for ML) and one of Java/C#/TypeScript for backend services.
  •  SaaS Systems: Deep experience building distributed, multi-tenant SaaS systems and integrating ML into production workflows.
  •  Leadership and Communication: Outstanding leadership qualities with strong verbal and written communication skills, suitable for both technical and executive audiences.
  •  Commitment to Learning and Mentorship: A proven track record in mentoring engineers and contributing to the growth of technical teams and individuals.

Education/Experience

  •  Typically requires a minimum of 8 years of related experience with a Bachelor’s degree in Computer Science, Engineering, or equivalent; or 6 years and a Master’s degree; or a PhD with 3 years of experience; or equivalent experience.

Qualities

What other characteristics do we look for? Leadership for sure. But what does that mean? Well, some of the attributes we appreciate include:

  •  Inquisitiveness
  •  Having pride in one’s work
  •  Tenacity: trying to work it out but knowing when to ask for help
  •  Follow-through and dependability
  •  A strong belief in the team’s success 
  •  Most importantly, friendly/kind/a good teammate
  •  Demonstrable examples of leading teams and organizations, driving overall architectural direction, establishing best practices and patterns, and having an org-wide impact rather than an impact limited to a single team

Benefits

  •  Flexible PTO with no accrual needed, allowing employees the time they need away from work
  •  Multiple healthcare options to choose from, including PPO and HSA plans, with matching company contributions to an employee’s HSA account
  •  Paid parental leave
  •  401(k) matching
  •  On-site gym, company-paid lunches, fully stocked snack bins and refrigerators in the office (anyone want a Monster to drink?)
  •  A team environment where people want to work from the office, but enjoy the freedom to work from anywhere
  •  and much more

E-Verify

Fishbowl participates in the Electronic Employment Verification Program. Please visit https://www.e-verify.gov/sites/default/files/everify/posters/EVerifyParticipationPoster.pdf for more information.

EEO

Fishbowl provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.

This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

ADA

Fishbowl is committed to providing access, equal opportunity, and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities.