Ebiquity

Ebiquity harnesses the power of data to provide independent, fact-based advice, enabling brand owners to perfect media investment decisions and improve business outcomes.

Senior Data Engineer – Production Data Engineering Team

Department: Production and Solutions

Employment Type: Permanent - Full Time

Location: Netherlands - Remote


Description

About Ebiquity and Our Team
Ebiquity is the leading independent marketing and media consultancy, helping global brands optimize their media investments.

Our Production Data Engineering team is at the heart of this mission, transforming vast amounts of data into actionable insights that drive better decision-making. We build scalable, high-performance data pipelines that ensure data quality, efficiency, and reliability.

Why Join Us?
  • End-to-End Data Engineering – Get hands-on experience with data extraction, transformation, loading, warehousing, and visualization, not just one piece of the puzzle.
  • Exposure to Cutting-Edge Technologies – Work with the latest developments in modern cloud-based data platforms.
  • Collaborative, Agile & High-Performing Team – Join a team that not only values knowledge-sharing and continuous learning but also follows modern agile methodologies and DevOps practices. We are proud to be a high-performing team, consistently excelling according to DORA metrics.
  • Constant Challenges & Learning Opportunities – Embrace a role where you are constantly challenged to innovate and grow, with ongoing opportunities to expand your skill set and explore new ideas.
  • Work-Life Balance & Flexibility – We support a hybrid working model, offering the flexibility to work from home while staying connected with the team in the office.
  • Impact-Driven Work – Your contributions will directly influence how global brands make media investment decisions, giving your work real-world impact.

Key Responsibilities

  • Develop & optimize ETL pipelines to process large-scale data efficiently.
  • Implement CI/CD pipelines to automate and improve deployment processes.
  • Work with data modeling, warehousing, and integration strategies.
  • Ensure data quality, governance, and monitoring are maintained and enhanced.
  • Support and mentor junior engineers, fostering a culture of knowledge-sharing and best practices.
  • Contribute to the scalability and performance of our data infrastructure.
  • Participate in Agile development, contributing to sprint planning and team discussions.
  • Engage in architectural discussions, ensuring long-term maintainability of data solutions.
  • Proactively identify opportunities for automation and process improvements. 

Skills, Knowledge & Expertise

Must-Have
  • Proficiency in Python & SQL for data processing and transformation.
  • Experience with ETL processes, data warehousing, and data modeling.
  • Hands-on experience with cloud-based data platforms (Azure, AWS, or GCP).
  • CI/CD knowledge and best practices for data engineering workflows.
  • Agile & DevOps experience, with an understanding of modern data development lifecycles.
  • Ability to write clean, maintainable, and well-documented code.
  • Strong problem-solving skills and a proactive mindset.
  • Excellent communication and collaboration skills, capable of working both independently and within a team.

Nice-to-Have
  • Experience with PySpark and Databricks.
  • Full-stack development experience, particularly with Django (UI & backend).
  • Knowledge of data visualization tools and their integration with data pipelines.
  • Familiarity with monitoring, logging, and alerting for data systems.
  • Exposure to data governance and security best practices.

Who You Are
  • Driven and Ambitious: You are motivated to succeed in building highly scalable data environments and have a proven track record of successful ETL/DWH projects.
  • Excellent Communicator: You communicate clearly and effectively, able to explain complex issues and articulate your work with confidence.
  • Curious and Creative: You embrace new ideas and experiences, constantly exploring innovative solutions.
  • Team-Oriented: A true team player, you thrive within collaborative environments and actively contribute to improving team processes.
  • Strong Problem Solver: You possess excellent problem-solving skills, efficiently troubleshooting data pipeline issues and finding optimal solutions.
  • Accountable and Transparent: You take full ownership of your work, consistently communicating progress and challenges.
  • Positive Attitude: You bring energy and enthusiasm to the team, fostering a supportive and dynamic work atmosphere.

Job Benefits

  • Based in the Netherlands – candidates must be located within the country.
  • We have a great office directly next to Utrecht Central Station.
  • Hybrid working model: flexible work-from-home and office presence.
  • Collaborative and supportive environment that encourages growth and innovation.
  • Opportunity to work with modern data technologies in a fast-paced, agile environment.

Interested? Apply Today!
If you are passionate about data engineering, automation, and building scalable data solutions, we would love to hear from you!
