[Future opening] Senior MLOps Engineer

Remote job

GetInData

Our core activity is big data software development: we build modern data platforms for session analytics, recommendations, pattern matching, and anomaly detection in real time. We also teach how to use modern real-time Big Data...



About us

GetInData | Part of Xebia is a leading data company working for international Clients, delivering innovative projects related to Data, AI, Cloud, Analytics, ML/LLM, and GenAI. The company was founded in 2014 by data engineers and today brings together 120 Data & AI experts. Our Clients are both fast-growing scaleups and large corporations that are industry leaders. In 2022, we joined forces with Xebia Group to broaden our horizons and bring new international opportunities.

What about the projects we work with?

We run a variety of projects in which our sweepmasters can excel. Advanced Analytics, Data Platforms, Streaming Analytics Platforms, Machine Learning Models, Generative AI and more. We like working with top technologies and open-source solutions for Data & AI and ML/AI. In our portfolio, you can find Clients from many industries, e.g., media, e-commerce, retail, fintech, banking, and telcos, such as Truecaller, Spotify, ING, Acast, Volt, Play, and Allegro. You can read some customer stories here.

What else do we do besides working on projects?

We run many knowledge-sharing initiatives, such as Guilds and Labs. We build a community around Data & AI through our conference Big Data Technology Warsaw Summit, the Warsaw Data Tech Talks meetup, the Radio Data podcast, and the DATA Pill newsletter.


Data & AI projects that we run and the company's philosophy of sharing knowledge and ideas in this field make GetInData | Part of Xebia not only a great place to work but also a place that provides you with a real opportunity to boost your career.

If you want to stay up to date with the latest news from us, please follow our LinkedIn profile.


About the role
We are happy to announce that we are currently looking for an MLOps Engineer! This role is crucial to our company, and we are seeking candidates with outstanding skills and experience. Although there isn’t an immediate project available, we invite you to connect with us to discuss potential future opportunities.


An MLOps Engineer is responsible for streamlining machine learning project lifecycles by designing and automating workflows, implementing CI/CD pipelines, ensuring reproducibility, and providing reliable experiment tracking. They collaborate with stakeholders and Platform Engineers to set up infrastructure, automate model deployment, monitor models, and scale training. MLOps Engineers possess a wide range of technical skills, including knowledge of orchestration, storage, containerization, observability, SQL, programming languages, cloud platforms, and data processing. Their expertise also covers various ML algorithms and distributed training in environments such as Spark, PyTorch, TensorFlow, Dask, and Ray. MLOps Engineers are essential for optimizing and maintaining efficient ML processes in organizations.
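
To give a concrete flavour of the experiment-tracking part of the role, here is a minimal sketch using MLflow, one of the tools listed in the requirements below. The tracking URI, experiment name, dataset, and parameters are illustrative assumptions rather than a description of any particular client setup.

# Minimal experiment-tracking sketch with MLflow (illustrative only).
# Assumes a reachable MLflow tracking server; the experiment name, model,
# and parameters below are hypothetical examples.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")  # assumed tracking server
mlflow.set_experiment("demo-classifier")          # hypothetical experiment name

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact so the run is reproducible.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")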


Responsibilities

  • Collaborating with Platform Engineers to set up the infrastructure required to run MLOps processes efficiently

  • Implementing ML workflows and automating CI/CD pipelines (see the sketch after this list)

  • Automating model deployment and implementing model monitoring

  • Collaborating with Platform Engineers to implement backup and disaster recovery processes for ML workflows, especially models and experiments

  • Collaborating with stakeholders to understand the key challenges and inefficiencies of Machine Learning project lifecycles within the company

  • Keeping abreast of the latest trends and advancements in data engineering and machine learning
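
As a rough illustration of the workflow-automation responsibilities above, the sketch below outlines an Airflow DAG (assuming Airflow 2.x) that chains training, evaluation, and deployment steps. The DAG name, schedule, and placeholder callables are hypothetical and are not tied to any specific project.

# Minimal Airflow DAG sketch of an automated ML workflow (illustrative only).
# Task names and callables are hypothetical placeholders; in a real pipeline
# each step would call versioned training / deployment code.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    """Placeholder: train a model and register the resulting artifact."""


def evaluate_model(**context):
    """Placeholder: compare the candidate against the current production model."""


def deploy_model(**context):
    """Placeholder: promote the approved model to the serving environment."""


with DAG(
    dag_id="ml_training_pipeline",   # hypothetical DAG name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)

    train >> evaluate >> deploy

In practice, the deploy step would typically be gated by the evaluation result and wired into the team's CI/CD pipeline rather than run unconditionally.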

Requirements

  • Proficiency in Python, as well as experience with scripting languages like Bash or PowerShell

  • Knowledge of at least one orchestration and scheduling tool, e.g., Airflow, Prefect, or Dagster

  • Understanding of ML algorithms and distributed training, e.g., Spark / PyTorch / TensorFlow / Dask / Ray (a Ray-based sketch follows this list)

  • Experience with cloud services (Azure / AWS / GCP)

  • Experience with platforms such as Databricks

  • Familiarity with tools like MLflow, W&B, and Neptune AI from the operations perspective

  • Experience with containerization technologies like Docker and basic knowledge of container orchestration platforms like Kubernetes

  • Understanding of continuous integration and continuous deployment (CI/CD) practices, as well as experience with related tools like GitHub Actions or GitLab CI
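
As a small illustration of the distributed-training requirement, the sketch below fans independent training runs out across a Ray cluster. The helper function, dataset, and parameter grid are hypothetical placeholders standing in for real training code.

# Minimal sketch of parallelizing training runs with Ray (illustrative only).
# The helper below and the parameter grid are hypothetical; real workloads
# would run versioned training code on a provisioned Ray cluster.
import ray

ray.init()  # assumes a local or already-provisioned Ray cluster


@ray.remote
def train_one(max_depth):
    """Train a small model for one configuration and return a CV score."""
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(max_depth=max_depth, n_estimators=50)
    return cross_val_score(model, X, y, cv=3).mean()


# Run the candidate configurations in parallel and gather the scores.
depths = [3, 5, 8, None]
scores = ray.get([train_one.remote(d) for d in depths])
print(dict(zip(depths, scores)))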

We offer
  • Salary: 160 - 200 PLN/h net + VAT (B2B contract), depending on knowledge and experience

  • 100% remote work

  • Flexible working hours

  • Possibility to work from the office located in the heart of Warsaw

  • Opportunity to learn and develop with the best Big Data experts

  • International projects

  • Possibility of conducting workshops and training

  • Certifications

  • Co-financed sports card

  • Co-financed health care

  • All equipment needed for work


