MLOps - Paris
Paris, France
Veesion
Veesion is at the forefront of in-store theft detection solutions, transforming how retailers protect their products and optimize their operations. Our technology combines advanced video analysis, AI-driven insights, and intuitive user experiences to deliver real-time prevention and actionable intelligence. As we expand, we're seeking an MLOps Engineer to empower Veesion's AI.
You’ll join our growing Data team (currently six people) spanning MLOps, Data Science, and Data Engineering. While the team drives several missions, your focus will be to ensure a robust, scalable, and sane lifecycle for machine learning models at Veesion.
Your work will strike a balance between ad-hoc support for research (e.g. setting up infrastructure to test new modeling ideas) and strengthening our production pipelines (monitoring, optimizing, and automating deployments). You will report to the Head of Data and work with:
Research team
Tech Team (10+ engineers) – to ensure smooth integration of ML into our core systems and backend architecture.
Tasks
Scale and Maintain ML Infrastructure
Design and operate infrastructure powering deep learning workflows across 4,000+ stores in 40+ countries, spanning both cloud and edge deployments.
Accelerate and Support Research
Set up distributed compute environments and tools to speed up experimentation and exploration by our research team.
Improve ML Pipelines and Model Prediction Performance
Evolve our training, evaluation, and deployment workflows to ensure reliability, reproducibility, and efficiency. Support model optimization efforts to meet performance and resource constraints across diverse environments.
Ensure Reliable Operations
Maintain internal tools for A/B testing, inference monitoring, and deployment. Promote best practices in testing, versioning, and observability.
Monitor Cost and System Performance
Track infrastructure usage, latency, and model throughput to balance performance, cost, and scalability.
Our Stack
We don’t expect candidates to know everything we use, but here’s a glimpse of the technologies you’ll encounter:
Languages: Python, Bash, Terraform and SQL
Infrastructure: AWS (EC2, Lambda, S3), Docker, Kubernetes
Model Serving & Optimization: ONNX, OpenVINO, TensorRT, Triton Inference Server
Experiment tracking: MLflow
Workflow Orchestration: Dagster
Monitoring & CI/CD: Prometheus, Grafana, GitHub Actions
Data & Storage: PostgreSQL / RDS, S3, Parquet
Ideal profile
Education: A Master’s degree or equivalent in Software Engineering, Computer Science, or a related field.
Experience: At least 2 years of professional engineering experience (excluding internships), with a solid track record in MLOps or adjacent roles.
Desired Skills
Required Skills:
Python: Confident writing clean, efficient, and production-grade code.
Version control: Proficiency with Git for collaborative, traceable development.
Docker: Experience containerizing services and working with isolated environments.
GPU: Familiar with running training and inference on GPU workloads.
Cloud: Practical knowledge of one of the top cloud providers.
Problem-solving: Comfortable navigating ambiguity and proposing practical solutions.
Preferred Skills:
DL: Prior experience training deep learning models and running inference with them.
SQL: Strong command of SQL for transforming data and optimizing workflows.
AWS: Familiarity with AWS services.
Orchestration tools: Experience with Dagster, Airflow, or Prefect.
CI/CD & DevOps: Familiarity with GitHub Actions or similar tools to streamline testing and deployment.
Software craftsmanship: Applies software engineering best practices (modularity, testing, clean code) in an MLOps context.
Personal Qualities
Autonomy: Capable of independently initiating and driving projects from concept to implementation.
Adaptability: Thrives in fast-paced, dynamic environments, quickly adjusting to shifting priorities and requirements.
Pragmatism: Delivers practical, actionable solutions that balance technical excellence with business needs.
Curiosity: Continuously seeks out innovative methods, tools, and practices to improve processes and outcomes.
Team Player: Collaborates effectively with cross-functional teams and fosters a culture of shared success and mutual respect.
Interview Process
We strive for an inclusive, structured interview process designed to highlight your technical and problem-solving abilities while providing transparency at each stage.
Initial Screening (45 minutes, remote): A discussion to review your experience, motivations, and alignment with the role.
Technical Interview 1 (1 hour, remote): Interview with the Head of Data and a future team member centered on a code review exercise.
Technical Interview 2 (1 hour, remote): Interview with the Head of Data and a future team member to dive deeper into your technical expertise and approach to solving real-world challenges.
Final Interview (1 hour, onsite): Meet with our CEO and the rest of the Data team to discuss cultural fit, expectations, and long-term goals.
Benefits
Competitive Compensation: Receive a salary that reflects your skills and contributions.
Swile Meal Voucher Card: Enjoy meal benefits to make your day easier.
Transportation Subsidy: 50% coverage of your transportation costs.
Comprehensive Health Insurance: Coverage from day one to ensure your well-being.
Inclusive and Supportive Culture: Join a company that values diversity, equity, and inclusion.
Dynamic, Motivating Team: Work with passionate colleagues in an exciting, fast-paced environment.
Rapid Skill Growth: Take on increasing responsibilities and expand your skillset quickly.
Prime Location: Office located in the heart of Paris (Beaubourg).
Flexible Work Policy: Work-from-home arrangement (2-3 days per week).