Fabric Data Engineer

London, London, United Kingdom

WTW

WTW offers data-driven, insight-led solutions in the areas of people, risk and capital.



At WTW, we are a leading global advisory, broking, and solutions company. We work with clients across a wide range of industries, helping them manage risk, optimise benefits, and improve performance. As a Fabric Data Engineer, you will play a key role in leveraging Microsoft Fabric, Azure, and Python to design and build advanced data solutions in the insurance domain.

Location: London, UK
Role: Hybrid Workstyle (Full-time)

Role Overview:

As a Fabric Data Engineer at WTW, you will take ownership of developing and optimising data pipelines, workflows, and ETL processes. You will work with cutting-edge technologies to ensure that data is efficiently processed, stored, and made accessible for analysis. This role is a key part of our data engineering team and requires specific expertise in Microsoft Fabric, Azure, and Python.

Key Responsibilities:

Fabric or Azure Data Engineer (Non-Negotiable):

  • Lead the design and development of scalable data pipelines and ETL processes using Microsoft Fabric or Azure technologies.
  • Manage and optimise notebooks, pipelines, and workflows to enhance the performance and efficiency of our data architecture.
     

Data Pipeline Development & ETL:

  • Build and maintain high-quality ETL pipelines to clean, transform, and enrich data from various sources.
  • Ensure that pipelines are automated, scalable, and fault-tolerant to accommodate large volumes of data.

Experience with Notebooks, Pipelines, and Workflows:

  • Utilise notebooks (e.g., Jupyter, Databricks) for data exploration, analysis, and reporting.
  • Design and optimise data workflows to streamline key processing tasks, enhancing operational efficiency.

API Integration & Data Ingestion:

  • Integrate external and internal APIs to ingest data into our systems, ensuring smooth and consistent data integration.
  • Automate API data ingestion processes to enhance data consistency and quality.

AI Experience (Project-based):

  • Contribute to projects involving AI, including integrating generative AI or machine learning models within our data workflows.
  • Apply AI technologies to improve data processing and provide deeper insights.

SDLC Awareness:

  • Adhere to Software Development Life Cycle (SDLC) best practices, including version control, testing, and continuous integration.
  • Collaborate with the team to ensure code quality, review processes, and deployment practices. 

Collaboration & Communication:

  • Work closely with cross-functional teams and business stakeholders to understand and meet data requirements.
  • Effectively communicate complex technical solutions to both technical and non-technical teams, ensuring alignment with business goals.

Required Qualifications:

Experience:

  • A minimum of 2 years of hands-on experience working as a Data Engineer or Fabric Data Engineer, with expertise in Microsoft Fabric, Azure, and Python.
  • Proven experience in designing and implementing ETL pipelines, managing notebooks, and optimising data workflows.
  • Solid experience working with API integration and data ingestion from various sources.
     

Technical Skills:

  • Proficiency in Python for building data pipelines, automation, and data manipulation.
  • Expertise in Azure cloud services and Microsoft Fabric.
  • Knowledge of ETL processes, data modelling, and data integration techniques.
  • Understanding of AI technologies and experience working on AI-driven projects (advantageous).
  • Familiarity with Power Automate for automating business processes (optional).
     

Soft Skills:

  • Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
  • Excellent problem-solving abilities, with a focus on data quality and continuous improvement.
  • Self-motivated and organised, with the ability to manage time effectively and prioritise tasks.

Preferred Qualifications:

Industry Experience:

  • Experience in the insurance domain or a strong understanding of industry-specific data requirements would be highly beneficial.

Reporting & Data Visualisation:

  • While not mandatory, familiarity with reporting tools or data visualisation (e.g., Power BI) is advantageous.

Why Join WTW?

  • Make an Impact: Work on high-profile data engineering projects that shape the future of the insurance sector.
  • Innovation: Join a forward-thinking company that uses cutting-edge technologies, including AI, Azure, and Microsoft Fabric.
  • Career Development: Take the next step in your career and gain exposure to exciting data engineering challenges.
  • Global Reach: Collaborate with a diverse, global team on projects that span multiple industries.

 

 

At WTW, we believe difference makes us stronger. We want our workforce to reflect the diverse and varied markets we operate in and to foster a culture of inclusivity where all colleagues feel welcome, valued, and empowered to bring their whole selves to work every day. We are an equal opportunities employer committed to fostering an inclusive work environment throughout our organisation. We embrace all types of diversity.

 
