Senior Data Engineer

London


Who We Are and What We Do

At SilverRail, we're on a mission to reshape the way the world travels, and we're inviting you to be part of this journey. Rail is becoming the go-to choice for short and medium-haul travel worldwide, and we're here to help make it happen.

In the face of the ongoing climate crisis, our vision is crystal clear. We are transforming the online customer experience for rail travel, making it easier than ever for customers to find, buy and use rail. Our cutting-edge technology is the backbone of rail and travel agencies worldwide, making it effortless for travellers to choose the eco-friendly option and reduce their carbon footprint.

We have more than 15 years of trailblazing success behind us, and our teams are spread across the globe, with bases in London, Boston, Brisbane, and Stockholm. We thrive on the philosophy of 'fail-fast-fail-early,' which drives us to find ingenious solutions to complex challenges.
Join us, and help shape the future of travel!

The Role

As a Senior Data Engineer, you will lead the design, development, and optimisation of scalable data systems and pipelines. You will play a pivotal part in building robust data infrastructure that empowers analytics, machine learning, and business intelligence across the organisation. You'll work with massive datasets, both in batch and real time, and be responsible for the reliability, performance, and scalability of our data ecosystem.

Key Responsibilities

  • Architect, develop, and maintain scalable data infrastructure, including relational databases, data lakes, and cloud-based data warehouses (e.g., Redshift, BigQuery, Snowflake).
  • Work with big data technologies (e.g., Hadoop, Spark), NoSQL databases (e.g., MongoDB, Cassandra), and cloud platforms (AWS, Azure, Google Cloud).
  • Design real-time streaming systems that ingest, process, and analyse data continuously with minimal latency, using event brokers such as Apache Kafka.
  • Handle event-driven architectures in which data changes are captured and processed immediately for real-time analytics or to trigger actions.
  • Use stream processing frameworks such as Apache Flink or Apache Spark Streaming to perform transformations, aggregations, and validations on streaming data.
  • Implement scalable, fault-tolerant streaming pipelines that maintain data integrity under high throughput and velocity.
  • Design, implement, and optimise ETL/ELT pipelines to collect, clean, transform, and load data from internal, external, structured, and unstructured sources.
  • Collaborate with software engineers, product managers, platform engineers, and support teams to ensure data accessibility, quality, and integrity.
  • Work with our Security Manager to maintain data governance and security best practices.
  • Monitor, troubleshoot, and optimise the performance of data pipelines and systems.
  • Participate in code reviews and follow software engineering and DevOps best practices in data environments.
  • Manage data infrastructure on the AWS cloud platform.
  • Build robust, automated data workflows using orchestration tools such as Apache Airflow, ensuring dependency management, scheduling, and error handling.
  • Manage large-scale datasets, optimise storage strategies, and ensure efficient querying and data-retrieval performance.
  • Create self-healing, anomaly-detecting pipelines that ensure data reliability and freshness across environments.
  • Apply CI/CD practices and version control (Git) to data pipelines, promoting robust, testable, and collaborative data engineering workflows.
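To give candidates a flavour of the ETL work described above, here is a purely illustrative, self-contained sketch (not SilverRail code) of an extract-transform-load step: raw records are cleaned and loaded into a local SQLite table standing in for a production warehouse. All record names and fields are hypothetical.

```python
import sqlite3

# Hypothetical raw records from an upstream source (illustrative only).
raw_bookings = [
    {"id": 1, "origin": " London ", "destination": "Paris", "fare": "79.50"},
    {"id": 2, "origin": "Stockholm", "destination": None, "fare": "45.00"},
    {"id": 3, "origin": "Boston", "destination": "New York", "fare": "120.00"},
]

def transform(record):
    """Clean one record: drop incomplete rows, trim whitespace, coerce fare to float."""
    if record["destination"] is None:
        return None
    return (
        record["id"],
        record["origin"].strip(),
        record["destination"].strip(),
        float(record["fare"]),
    )

def run_pipeline(conn, records):
    """Extract -> transform -> load into a table standing in for a warehouse."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS bookings "
        "(id INTEGER PRIMARY KEY, origin TEXT, destination TEXT, fare REAL)"
    )
    rows = [t for r in records if (t := transform(r)) is not None]
    conn.executemany("INSERT OR REPLACE INTO bookings VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
loaded = run_pipeline(conn, raw_bookings)  # 2 clean rows loaded, 1 dropped
```

In production this shape scales out to Airflow-orchestrated tasks loading into Redshift, BigQuery, or Snowflake rather than a local database.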

Required Competence and Skills

  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 5-7 years' experience in software engineering, with 3+ years in data engineering or a similar role.
  • Strong proficiency in SQL (including MySQL and PostgreSQL), proficiency with NoSQL databases, and working knowledge of Kafka and Java.
  • Experience with data pipeline tools and frameworks (e.g., Apache Airflow, dbt, Luigi).
  • Knowledge of cloud-native streaming tools such as Amazon Kinesis.
  • Familiarity with data storage and warehousing technologies (e.g., Snowflake, Redshift, BigQuery, Databricks).
  • Experience with cloud platforms, ideally AWS with Docker and Kubernetes.
  • Strong understanding of data modelling, normalisation, and performance optimisation.
  • Knowledge of real-time data processing (Kafka, Spark Streaming, Flink).
  • Familiarity with CI/CD pipelines and infrastructure as code (Terraform, CloudFormation).
  • Experience with Git-based source management tools.
  • Ability to communicate and collaborate clearly and effectively.
  • Strong time management skills, with the ability to prioritise workloads under pressure and meet deadlines.
  • A hands-on self-starter who knows how to find answers and work with ambiguity.
  • Values-driven and practical in your approach.
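As an illustration of the real-time processing concepts listed above (the kind of tumbling-window aggregation a Flink or Spark Streaming job performs continuously), here is a dependency-free sketch. It processes a finite list rather than a live stream, and the event names are hypothetical.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per key within fixed (tumbling) time windows.

    `events` is an iterable of (timestamp, key) pairs. A streaming framework
    does this continuously and fault-tolerantly; this sketch is batch-over-a-list.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to the window whose start it falls inside.
        window_start = int(ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical event stream: (timestamp in seconds, event type).
events = [
    (0, "search"), (12, "booking"), (30, "search"),    # first minute
    (65, "search"), (90, "booking"), (119, "booking"), # second minute
]
result = tumbling_window_counts(events, window_seconds=60)
# result maps window start -> per-event-type counts, e.g. window 0 and window 60
```

The same windowing logic, expressed in Flink or Spark Streaming, is what keeps latency minimal while aggregating high-velocity data.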

Why us?

  • We utilise a hybrid working model, providing equipment for home working alongside one or two monthly visits to our beautiful central London office.
  • We offer a highly competitive benefits package, including private healthcare and rail discounts.
  • We provide a wealth of career development opportunities, with training that is individual, focused on improving your skills and helping you become the best professional you can be.
  • Our team's health and wellness is genuinely important to us, so we offer a number of wellbeing seminars and membership to the #1 meditation app.
  • A unique opportunity to work for a tech company that is helping the environment by revolutionising the way we travel.

Our values are simple: Do Good by working for a better tomorrow; Think Big Act Smart by being curious, adaptable and data-driven; and remember that through collaboration we will always be Stronger Together.

*We are a neurodiverse employer and are working hard to improve our recruitment processes, so if there is any way that we can make the recruitment experience better for you then please let us know in your application - all information will be treated as strictly confidential*  
