Data Engineer

London, England, United Kingdom - Remote

Raft

Building the Unimagined



Raft is the intelligent logistics platform that’s rewriting the technology playbook for freight forwarders and customs brokers in the automation era. A dynamic UK-based technology company with global impact across logistics, we’re searching for a Data Engineer who is excited by the prospect of working in a rapidly growing international scale-up. We have significant runway thanks to our most recent Series B funding, which we raised from some of the best investors in the space: Eight Roads (Alibaba, Spendesk, Toast), Bessemer Venture Partners (LinkedIn, Twilio, Shopify), Episode 1 (Zoopla, Betfair, Shazam) and Dynamo Ventures (Sennder, Stord, Gatik).

As a Data Engineer you will have a significant impact on both our Engineering and Machine Learning teams, using your experience and subject-matter expertise to resolve the data-centric challenges we face. You will focus on building data pipelines, storing and processing data in lakes, warehouses and databases, and making key decisions on our infrastructure, architecture and other data-related solutions.

Day-to-day you will:

  • Build, maintain and expand data pipelines that automate data processing efficiently and collect data flexibly from different sources (a minimal sketch follows this list)
  • Design, set up and maintain the databases, data warehouses and data lakes that power our user-facing applications and our internal ML platform
  • Build dashboards that surface analytics from raw data
  • Understand the team’s main data challenges and apply your expertise to resolve them, implementing solutions from scratch in a team setting
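
To give a flavour of this pipeline work, below is a minimal sketch of a daily extract-transform-load DAG. Airflow is assumed here only because it appears in the requirements further down; the task names, stub bodies and schedule are hypothetical illustrations, not Raft’s actual pipelines.

    # Minimal illustrative sketch of a daily pipeline, assuming Airflow 2.4+.
    # Task names and stub bodies are hypothetical, not Raft's actual code.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract() -> None:
        # Collect raw records from an upstream source (API, bucket, queue, ...).
        pass


    def transform() -> None:
        # Clean and reshape the raw records into an analytics-friendly layout.
        pass


    def load() -> None:
        # Write the processed records to a warehouse or lake table.
        pass


    with DAG(
        dag_id="example_daily_ingest",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run the steps strictly in sequence: extract -> transform -> load.
        extract_task >> transform_task >> load_task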

Requirements

We specifically want someone who:

  • Has strong proficiency in Python and solid general programming skills, writing clean, maintainable and scalable code
  • Has hands-on experience with, and stays up to date on, current approaches to data and model versioning, data warehousing and data processing, e.g. BigQuery, dbt, Spark
  • Has experience with common Data Engineering tools like Airflow and Airbyte
  • Has solid experience with NoSQL and SQL databases, e.g. MongoDB, PostgreSQL, Redis
  • Has experience with containerization and deployments, including Docker, Kubernetes, Helm, Terraform and cloud providers (GCP or others)
  • Is able to work with a variety of data formats, such as JSON, CSV, Parquet and more (a short example follows this list)
  • Is creative and shares ideas on pipeline architecture and the wider infrastructure
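
For illustration only, here is a tiny sketch of reading those formats in Python. pandas (with pyarrow for Parquet) and the file names are assumptions made for this example, not a stated part of the stack.

    import pandas as pd  # pandas/pyarrow assumed for this sketch only

    shipments = pd.read_csv("shipments.csv")          # comma-separated export
    events = pd.read_json("events.json", lines=True)  # newline-delimited JSON
    history = pd.read_parquet("history.parquet")      # columnar analytics file

    # Each call returns a DataFrame, so downstream cleaning and loading code
    # can treat all three sources uniformly.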

Apply because you want to...

  • Have the opportunity to work in a global market and compete with best-in-class companies at the forefront of Machine Learning and Engineering
  • Work in a modern Product-led company where your contributions are valued and have real-world impact
  • Get exposure to working with stakeholders on a global level across different industries
  • Work in a fast-paced, challenging technology environment that provides opportunities for professional and personal growth
  • Work in a diverse and multicultural environment


Category: Engineering Jobs

Tags: Airflow Architecture BigQuery CSV Data pipelines Data Warehousing dbt Docker Engineering GCP Helm JSON Kubernetes Machine Learning MongoDB NoSQL Parquet Pipelines PostgreSQL Python Spark SQL Terraform

Perks/benefits: Career development

Regions: Remote/Anywhere Europe
Country: United Kingdom
