Senior Data Engineer

London, England, United Kingdom

Wrisk

The Wrisk platform allows you to build, launch and operate insurance experiences that your customers will love.

About Wrisk

Wrisk is reinventing insurance for today’s digital consumer and helping an outdated industry become relevant again in the process. In the same way that fintech companies have disrupted the traditional banking sector, reimagining financial platforms for a new generation, Wrisk’s founders share a vision for how insurance ought to be: simple, transparent and personal. Bringing together two disparate industries (technology and insurance), they have created an insurance experience like no other, centred squarely on the customer.

The result is Wrisk: flexible insurance that adapts to fit your life. Our mobile-first, frictionless platform lets people interact with their insurance provider with the same ease, speed and transparency they’re already used to having with providers in other sectors. Customers can pay monthly, instantly make changes to their cover and bring all their disclosure, payment and claim information together in a single place.

Now, with some big brand partners, we are bringing our unique customer experience and platform to market to change how insurance is bought, sold and managed.

What we are looking for…

We're looking for a Senior Data Engineer who thrives on autonomy, is obsessed with clean, scalable data systems, and takes pride in building infrastructure that lasts. You'll work closely with our existing Senior Data Engineer, who will report to you, to tackle ambitious projects across our AWS data stack: designing, building, and improving the pipelines, services, and infrastructure that power Wrisk’s growth.

This is a hands-on role for someone who prefers shipping high-quality code over spending time in meetings, and who can translate abstract or high-level problem statements into structured plans and working solutions independently.

You’ll be expected to lead by example—delivering clean, robust code, championing best practices, and raising the bar for technical excellence across the data engineering function.

What you’ll do…

  • Build and Own Data Infrastructure: Design, implement, and maintain cloud-native data infrastructure on AWS (ECS, ECR, RDS, Redshift, API Gateway, EC2, networking).
  • Develop Data Pipelines: Architect and maintain robust data pipelines and workflow systems using Python and orchestration tools.
  • Develop Services: Build and deploy APIs with FastAPI and simple front ends, drawing on modelling techniques and LLMs where they add value.
  • Infrastructure as Code: Manage infrastructure with Terraform to ensure reproducibility, security, and scale.
  • Solve Ambiguous Problems: Work from abstract requirements to actionable plans and implement solutions independently.
  • Write Clean, Future-Proof Code: Create modular, well-tested code that adheres to industry standards, prioritising long-term maintainability over quick fixes.
  • Ensure Data Integrity and Performance: Optimise queries, schema designs, and storage strategies to ensure reliable and efficient data flows.
  • Improve and Automate: Proactively identify bottlenecks, improve operational efficiency, and help drive automation across the stack.
  • Collaborate, But Own It: Collaboration is welcomed and encouraged; be open to others’ ideas and don’t hesitate to champion your own, while being trusted to lead your area of work without micromanagement.

Requirements

About You…

  • Minimum of 5 years of experience across data engineering and back-end development roles.
  • You’re comfortable context switching and know how to find the balance between managing delivery pressure and maintaining quality.
  • Strong Python development experience (you can write well-structured, reusable code).
  • Infrastructure-as-code proficiency, ideally with Terraform (you understand IaC and have deployed infrastructure this way).
  • Strong analytical capabilities for working with unstructured datasets, performing root cause analysis, and identifying areas for improvement.
  • Strong AWS knowledge, particularly with services like ECS, EC2, ECR, RDS, Redshift, S3, API Gateway, networking and load balancing.
  • Deep understanding of SQL and fundamental database concepts (indexes, schema design, columnar storage, etc.).
  • Familiarity with DAG-based orchestration tools (e.g., Airflow).
  • Advanced “data plumbing” skills (ELT, ETL, etc.).
  • Familiarity with containerisation tools such as Docker.

Additional Considerations:

  • Experience in driving growth within an early-stage startup is advantageous but not required.
  • Prior experience in the financial/insurance services sector is a plus but not required.