Senior MLOps Engineer
Sofia, Bulgaria
Your Responsibilities
The AI teams at neoshare design and build cutting-edge solutions that transform how our customers collaborate on financing and transaction cases. We turn vast collections of documents into structured insights, empower users to interact with their data in natural language, and enhance transparency, efficiency, and decision-making. Our goal is not just to automate but to elevate—giving our customers greater control, clarity, and even joy in their workflows.
As an MLOps Engineer at neoshare, you will be at the core of scaling AI into production, ensuring that models are efficiently deployed, monitored, and continuously improved. You will work at the intersection of AI and DevOps, designing scalable ML pipelines, automating workflows, and enabling seamless AI operations across teams.
Where your experience is needed
- Design and maintain scalable, reliable, and automated MLOps infrastructure, enabling seamless model deployment, versioning, and monitoring. Build self-service tools that empower AI teams to deploy models efficiently while ensuring high availability and operational excellence.
- Develop and optimize model serving infrastructure for real-time inference, batch processing, and API-based AI services. Ensure low-latency, high-throughput execution across cloud and on-prem environments while collaborating with DevOps to scale AI workloads effectively.
- Establish best practices for AI observability and monitoring, implementing tools to track model drift, accuracy, inference speed, and reliability. Drive continuous improvements in performance and stability, ensuring models operate securely and efficiently in production.
- Foster a culture of technical excellence and collaboration. Share knowledge, refine best practices, and guide teams in adopting cutting-edge MLOps solutions that streamline AI development and deployment.
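To make the observability responsibility above concrete: one common drift signal is the Population Stability Index (PSI), which compares the score distribution seen at training time against live traffic. This is a minimal, dependency-free sketch for illustration only; it is not neoshare's actual monitoring stack, and the binning and smoothing choices here are assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. training
    scores) and a live sample. Values near 0 mean stable; values above
    roughly 0.2 are often treated as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(values, i):
        # fraction of values falling into bin i, with zero-count smoothing
        count = sum(1 for v in values if lo + i * width <= v < lo + (i + 1) * width)
        if i == bins - 1:
            count += sum(1 for v in values if v == hi)  # right edge closes last bin
        return max(count / len(values), 1e-6)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

In production such a metric would typically be computed over sliding windows of inference logs and exported to the monitoring system, alongside latency and accuracy signals.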
Your Profile
- 5+ years of blended industry experience in MLOps, AI infrastructure, or DevOps, with a strong track record of building and scaling machine learning pipelines, deploying models in production, and optimizing AI workflows in cloud environments.
- 2+ years in a role focused primarily on MLOps.
- Deep expertise in AWS and Helm deployments, with proficiency in Kubernetes, Docker, and Terraform. Experience in serverless AI architectures and GPU/TPU-accelerated workloads is a plus.
- Extensive hands-on experience with ML model serving frameworks such as TensorFlow Serving, TorchServe, and KServe (formerly KFServing), ensuring low-latency, high-throughput AI services for real-world applications.
- Strong background in AI/ML pipeline orchestration, model management, and ETL pipelines, with expertise in automating model training, validation, and deployment using tools like Kubeflow Pipelines, MLflow, Dagster, or Prefect.
- Passion for streamlining MLOps workflows and enabling AI teams to iterate and deploy seamlessly.
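The train → validate → deploy automation described in the profile can be sketched as a plain-Python pipeline. This is a hedged illustration of the pattern that orchestrators like Kubeflow Pipelines or Prefect formalize; every name here (`ModelArtifact`, `run_pipeline`, the 0.9 accuracy gate) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModelArtifact:
    """Illustrative stand-in for a versioned model in a registry."""
    version: str
    accuracy: float

def train(version: str) -> ModelArtifact:
    # placeholder for a real training job
    return ModelArtifact(version=version, accuracy=0.93)

def validate(model: ModelArtifact, threshold: float = 0.9) -> bool:
    # gate deployment on a validation-accuracy threshold
    return model.accuracy >= threshold

def deploy(model: ModelArtifact, registry: dict) -> None:
    # promote the model version to the production slot
    registry["production"] = model.version

def run_pipeline(version: str, registry: dict) -> bool:
    model = train(version)
    if not validate(model):
        return False  # failed models never reach production
    deploy(model, registry)
    return True
```

Real orchestrators add what this sketch omits: retries, caching, lineage tracking, and scheduling, which is precisely the workflow automation the role covers.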
Why us?
- Flexible working hours: Manage your workday autonomously.
- neoshare-Health: We offer our employees additional health insurance with dental coverage and a Multisport card.
- Remote work: While our beautiful Sofia office is always open, we make it possible to work remotely with no fixed office days.
- Equipment: Our employees can choose their hardware (MacBook Pro or Lenovo).
- Vacation: We offer 26 days paid leave.
- Bonus: We offer a 13th salary in December.