Senior Machine Learning Engineer

Warsaw, Masovian Voivodeship, Poland


Globaldev Group

Globaldev helps businesses maximize their potential with custom software development and staff augmentation services.


Apply now

We’re looking for a highly skilled, independent, and driven Machine Learning Engineer to lead the design and development of our next-generation real-time inference services, the core engine powering Start.io’s algorithmic decision-making at scale. This is a rare opportunity to own the system at the heart of our product, serving billions of daily requests across mobile apps under tight latency and performance constraints.

You’ll work at the intersection of machine learning, large-scale backend engineering, and business logic, building robust services that blend predictive models with dynamic business rules while meeting extreme performance and reliability requirements.

What you’ll do

  • Own and lead the design and development of low-latency Algo inference services handling billions of requests per day
  • Build and scale robust real-time decisioning engines, integrating ML models with business logic under strict SLAs
  • Collaborate closely with data scientists to deploy models seamlessly and reliably in production
  • Design systems for model versioning, shadowing, and A/B testing at runtime
  • Ensure high availability, scalability, and observability of production systems
  • Continuously optimize latency, throughput, and cost-efficiency using modern tooling and techniques
  • Work independently while interfacing with cross-functional stakeholders across Algo, Infra, Product, Engineering, BA, and Business

What are we looking for?

  • B.Sc. or M.Sc. in Computer Science, Software Engineering, or a related technical discipline
  • 5+ years of experience building high-performance backend or ML inference systems
  • Deep expertise in Python and experience with low-latency APIs and real-time serving frameworks (e.g., FastAPI, Triton Inference Server, TorchServe, BentoML)
  • Experience with scalable service architecture, message queues (Kafka, Pub/Sub), and async processing
  • Strong understanding of model deployment practices, online/offline feature parity, and real-time monitoring
  • Experience in cloud environments (AWS, GCP, or OCI) and container orchestration (Kubernetes)
  • Experience working with in-memory and NoSQL databases (e.g. Aerospike, Redis, Bigtable) to support ultra-fast data access in production-grade ML services
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and best practices for alerting and diagnostics
  • A strong sense of ownership and the ability to drive solutions end-to-end
  • Passion for performance, clean architecture, and impactful systems

Why join us?

  • Lead the mission-critical inference engine that drives our core product
  • Join a high-caliber Algo group solving real-time, large-scale, high-stakes problems
  • Work on systems where every millisecond matters and every decision drives real value
  • Enjoy a fast-paced, collaborative, and empowered culture with full ownership of your domain




