Senior Machine Learning Operations Engineer

Tel Aviv, Israel

Honeycomb Insurance

At Honeycomb Insurance, we’re simplifying the real estate insurance process, making it easier to find better coverage at a fraction of the cost.



At Honeycomb, we’re not just building technology; we’re reshaping the future of insurance.

In 2025, Honeycomb was ranked by Newsweek as one of “America’s Greatest Startup Workplaces,” and Calcalist named it a “Top 50 Israel startup.”

How did we earn these honors?

Honeycomb is a rapidly growing global startup, generously backed by top-tier investors and powered by an exceptional team of thinkers, builders, and problem-solvers. Dual-headquartered in Chicago and Tel Aviv (our R&D center), with five offices across the U.S., we are reinventing commercial real estate insurance, an industry long overdue for disruption. Just as importantly, we ensure every employee feels deeply connected to our mission and to one another.

With over $55B in insured assets, Honeycomb operates across 18 major states, covering 60% of the U.S. population, and is continuing to expand.

If you’re looking for a place where innovation is celebrated, culture actually means something, and smart people challenge you to be better every day, Honeycomb might be exactly what you’ve been looking for.

 

About The Role:

We’re looking for a Senior MLOps Engineer to take the lead in scaling and maintaining the infrastructure that powers our production models and machine learning workflows. You’ll work closely with data scientists and engineers to ensure our ML systems are fast, reliable, and always improving.

What You’ll Do:

  • Work closely with a high-performing ML team to bring new models to production.
  • Build, deploy, and maintain ML models (vision and LLMs) in production across multiple cloud environments (GCP, Modal.com).
  • Own the orchestration of real-time and batch ML pipelines using Dagster (a minimal sketch of this kind of pipeline follows this list).
  • Optimize resource-intensive workloads (CPU/GPU/memory), ensuring performance and cost-efficiency.
  • Partner with data scientists to take models from prototype to production.
  • Develop automated systems for training, testing, and deploying ML models.
  • Maintain clear, interactive monitoring dashboards (e.g., Streamlit) to track model performance and drift.
  • Continuously evaluate and integrate new tools and technologies to support scaling, faster development, and reliability.
  • Contribute to building future infrastructure for agentic workflows, decision models, and document analysis.
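
To give a concrete flavor of the Dagster orchestration mentioned above, here is a minimal, hypothetical sketch of an asset-based pipeline with a nightly schedule. The asset names, the placeholder model call, and the cron schedule are illustrative assumptions, not a description of Honeycomb’s actual pipelines.

```python
from dagster import AssetSelection, Definitions, ScheduleDefinition, asset, define_asset_job


@asset
def raw_policy_documents() -> list[str]:
    # Placeholder loader: in a real pipeline this would pull the latest
    # batch of documents to score from object storage or a warehouse.
    return ["doc-1", "doc-2"]


@asset
def document_predictions(raw_policy_documents: list[str]) -> dict[str, float]:
    # Placeholder scoring step: in production this would call a served
    # vision or LLM model rather than returning a constant.
    return {doc: 0.5 for doc in raw_policy_documents}


# Group both assets into a job and run it nightly at 02:00.
scoring_job = define_asset_job("nightly_scoring", selection=AssetSelection.all())

defs = Definitions(
    assets=[raw_policy_documents, document_predictions],
    jobs=[scoring_job],
    schedules=[ScheduleDefinition(job=scoring_job, cron_schedule="0 2 * * *")],
)
```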

Basic Requirements:

  • Proven experience deploying and maintaining ML models in production.
  • Strong Python development skills and comfort working with large, messy codebases.
  • Deep understanding of managing memory- and compute-intensive workloads (including GPU environments).
  • Familiarity with containerization (Docker) and orchestration tools (Kubernetes preferred).
  • Experience with ML serving frameworks (e.g., TorchServe, BentoML, FastAPI); a minimal serving sketch follows this list.
  • Hands-on experience with CI/CD for ML workflows (e.g., GitHub Actions, Terraform).
  • Knowledge of cloud environments, especially GCP, and modern ML infrastructure.
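
As a hedged illustration of the serving-framework experience listed above, the sketch below shows a minimal FastAPI endpoint that loads a model at startup and exposes a scoring route. The artifact path, request schema, and joblib-loaded estimator are assumptions for the example only, not Honeycomb’s actual serving stack.

```python
from contextlib import asynccontextmanager

import joblib  # assumption: the model artifact is a pickled scikit-learn-style estimator
from fastapi import FastAPI
from pydantic import BaseModel


class ScoringRequest(BaseModel):
    features: list[float]


class ScoringResponse(BaseModel):
    score: float


model = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once at startup so individual requests don't pay the loading cost.
    global model
    model = joblib.load("model.joblib")  # hypothetical artifact path
    yield


app = FastAPI(lifespan=lifespan)


@app.post("/score", response_model=ScoringResponse)
def score(request: ScoringRequest) -> ScoringResponse:
    # Placeholder predict call; real deployments would also add input
    # validation, batching, and monitoring hooks here.
    prediction = model.predict([request.features])[0]
    return ScoringResponse(score=float(prediction))
```

Saved as app.py, a sketch like this could be run locally with `uvicorn app:app --reload`.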

Beyond the Resume: A Culture Champion

  • You're passionate and motivated to ensure Honeycomb’s real-time decision systems work flawlessly every day and get significantly better every week.
  • You’re excited about working closely with a high-performing ML team to bring new models to production.
  • You constantly drive for innovation, exploring new tools, technologies, and ways to scale our platform.

 


 



