Machine Learning Engineer

Remote job

Xebia

Leading global technology consultancy providing strategy, software engineering, advanced training, and managed services to help businesses thrive in the AI-enabled digital era.


Hello, let’s meet!

We are Xebia - a place where experts grow. For nearly two decades now, we've been developing digital solutions for clients from many industries and places across the globe. Among the brands we’ve worked with are UPS, McLaren, Aviva, Deloitte, and many, many more.

We're passionate about Cloud-based solutions. So much so that we have partnerships with three of the largest Cloud providers in the business – Amazon Web Services (AWS), Microsoft Azure & Google Cloud Platform (GCP). We even became the first AWS Premier Consulting Partner in Poland.

Formerly we were known as PGS Software. In 2021, we joined Xebia Group – a family of interlinked companies driven by the desire to make a difference in the world of technology.

Xebia stands for innovation, talented team members, and technological excellence. Xebia means worldwide recognition and thought leadership. This regularly provides us with the opportunity to work on global, innovative projects.

Our mission can be captured in one word: Authority. We want to be recognized as the authority in our field of expertise.

What makes us stand out? It's the little details, like our attitude, dedication to knowledge, and belief in people's potential – emphasizing every team member's development. Obviously, these things are not easy to present on paper – so make sure to visit us to see it with your own eyes!

Now, we've talked a lot about ourselves – but we'd love to hear more about you.

Send us your resume to start the conversation and join #Xebia.

You will be:

  • working with data scientists and analysts to create and deploy new models and ML systems,
  • implementing end-to-end solutions across the full ML model development lifecycle, working hand in hand with scientists from data exploration for model development through building features and ML pipelines to deploying them in production,
  • working on batch and real-time models, and providing operational support,
  • establishing scalable, efficient, automated processes for data analysis, model development, validation, and implementation,
  • writing efficient and scalable software to ship products in an iterative, continual-release environment,
  • writing optimized data pipelines to support machine learning models,
  • contributing to and promoting good software engineering practices across the team, and building cloud-native software for ML pipelines,
  • contributing to and re-using community best practices.

Requirements

Your profile:

  • ability to start immediately,
  • openness to work daily until 19:00 CET,
  • university or advanced degree in engineering, computer science, mathematics, or a related field,
  • 3+ years' experience developing and deploying machine learning systems into production,
  • experience working with big data tools: Spark, Hadoop, Kafka, etc.,
  • experience with at least one cloud provider (AWS, GCP, Azure) and an understanding of serverless code development (GCP experience preferred),
  • proficiency with object-oriented/functional scripting languages (Python required),
  • proficiency with Python data-handling libraries such as pandas or PySpark,
  • proficiency in SQL for data consumption and transformation,
  • expertise in standard software engineering methodology, e.g. unit testing, test automation, continuous integration, continuous deployment, code reviews, design documentation,
  • working experience with ML orchestration systems such as Kubeflow, Vertex AI Pipelines, Airflow, or TFX,
  • very good verbal and written communication skills in English.

Work from the European Union region and a work permit are required.


Nice to have:

  • experience working with the Spark SQL and BigQuery SQL dialects,
  • relevant working experience with Docker and Kubernetes,
  • knowledge of data pipeline and workflow management tools,
  • expertise in data engineering, analysis, and processing (e.g. designing and maintaining ETLs, validating data, and detecting quality issues),
  • knowledge of statistics and machine learning,
  • previous experience developing predictive models in a production environment, MLOps, and model integration into larger-scale applications.


Recruitment Process:

CV review – HR call – Interview – Client Interview – Decision
