Senior Data Engineer

Bratislava, Slovakia

Freshfields Bruckhaus Deringer

The world’s biggest international organisations rely on us to help them make the right decisions in a fast-changing world. We combine the knowledge, experience and energy of the whole firm to solve our clients’ most complex challenges, wherever...


Starting Monthly Salary: €3,800 per month (gross)

Role summary/purpose of job

The Data Engineer will play a critical role in designing, building, and maintaining the organization's modern data infrastructure. This position is responsible for developing scalable data lake and lakehouse solutions using Microsoft Azure, Databricks, and Delta Lake, while ensuring data governance, lineage, and quality through tools like Collibra.

The ideal candidate will have a strong foundation in cloud-native data engineering and a passion for creating robust, high-performance data pipelines that support analytics, business intelligence, and machine learning initiatives. Working closely with data architects, analysts, and governance teams, the Data Engineer will help drive a unified, trusted, and accessible enterprise data platform—empowering data-driven decision-making across the organization.

Function Overview

We are seeking an experienced and forward-thinking Data Engineer with at least 5 years of experience to lead the development of our cloud-native data ecosystem. In this role, you will architect and maintain scalable data lakehouse solutions using Microsoft Azure, Databricks, and Delta Lake, while driving data quality and governance through Collibra.

You’ll work at the intersection of engineering, analytics, and governance: building robust data pipelines with PySpark, SQL, and Azure Data Factory, and integrating modern tools such as Apache Airflow, dbt (data build tool), and Git-based CI/CD workflows. You'll play a key role in implementing data cataloging, lineage tracking, and compliance frameworks to support trusted data access across the enterprise.

This is a hands-on opportunity to shape a cloud-first, scalable data infrastructure, collaborating with cross-functional teams to enable real-time insights, advanced analytics, and AI/ML capabilities. If you're passionate about building reliable data platforms using modern data stack technologies and driving data democratization across an organization, this role is for you.

Key Responsibilities and Deliverables

  • Design and Develop Data Solutions: Build and maintain scalable, secure, and high-performance data lake and lakehouse architectures using Azure Data Lake, Databricks, and Delta Lake.

  • Data Pipeline Development: Create and optimize batch and streaming ETL/ELT pipelines using Azure Data Factory, Databricks (PySpark/SQL), Apache Spark, and orchestration tools such as Apache Airflow.

  • Data Governance and Metadata Management: Implement and support data governance frameworks using Collibra for data cataloging, lineage tracking, metadata enrichment, and policy enforcement.

  • Enable Data Accessibility: Ensure data is discoverable, well-documented, and accessible to stakeholders through self-service tools and integration with BI platforms.

  • Performance and Cost Optimization: Monitor and tune data processing jobs and storage solutions for performance, scalability, and cost-effectiveness in the cloud environment.

  • Collaboration Across Teams: Work closely with data architects, business analysts, data scientists, and compliance teams to translate data requirements into engineering solutions.

  • CI/CD and DevOps for Data: Implement automated testing, deployment, and version control for data pipelines using tools like Git, Terraform, and Azure DevOps.

  • Ensure Data Quality and Reliability: Integrate data validation, logging, and monitoring into pipelines to support data quality assurance and rapid troubleshooting.

  • Documentation and Knowledge Sharing: Create and maintain technical documentation, data dictionaries, and best practices for team collaboration and knowledge transfer.

Key Requirements

Experience & Background

  • At least 5 years of hands-on experience in data engineering or related fields.

  • Proven track record in designing and implementing data lake or lakehouse architectures at scale.

  • Experience working in enterprise cloud environments, preferably in Microsoft Azure.

Technical Skills

  • Expertise with Azure Data Services, including:

    • Azure Data Lake Storage (Gen2)

    • Azure Synapse Analytics

    • Azure Data Factory

    • Azure Functions / Logic Apps

  • Proficiency in Databricks (including Spark, PySpark, Delta Lake, and SQL).

  • Strong programming skills in Python and SQL; experience with Scala is a plus.

  • Familiarity with Apache Spark, Airflow, and streaming frameworks (e.g., Spark Structured Streaming, Azure Event Hubs).

  • Experience implementing and managing data cataloging and governance tools such as Collibra, Purview, or similar.

Modern Data Stack Tools

  • Experience with dbt (data build tool) for transformation and modeling.

  • Familiarity with CI/CD practices and tools such as Git, Terraform, Azure DevOps, or GitHub Actions.

  • Knowledge of containerization and orchestration (e.g., Docker, Kubernetes) is a plus.

Data Management & Governance

  • Understanding of data quality, metadata management, lineage tracking, and access control frameworks.

  • Experience working in regulated industries (e.g., legal, financial services, healthcare) and familiarity with compliance requirements like GDPR, HIPAA, or SOX is beneficial.

Soft Skills

  • Strong problem-solving, analytical, and troubleshooting abilities.

  • Excellent verbal and written communication skills to collaborate across technical and non-technical teams.

  • Comfortable working in Agile or Scrum environments.

  • A proactive mindset with a passion for continuous learning and innovation in the data space.

Inclusion

Freshfields is an equal opportunities employer and all applications received by the firm will be considered on the basis of their merit alone. We welcome applications from all suitably qualified individuals regardless of background. All offers of employment will be conditional on the candidate having/securing the right to work in the UK in the role in question and providing the firm with evidence of that right (as required by the Immigration, Asylum and Nationality Act 2006) prior to employment commencing.

Freshfields is a Ban the Box employer. We ask applicants to disclose criminal convictions only if and when a conditional job offer is made. A conviction does not automatically lead to withdrawal of the offer: we make decisions on a case-by-case basis and take a number of relevant factors into account (e.g. the role you are applying for and the circumstances of the offence). You would have the opportunity to discuss the matter with us before we make a decision.
