Senior Data Engineer (Remote)

Remote, United Kingdom

Circana

Circana business tools provide in-depth consumer behavior data, industry trends, and expert analysis of market research to drive business growth.

At Circana, we are fueled by our passion for continuous learning and growth; we seek and share feedback freely, and we celebrate victories both big and small in an environment that is flexible and accommodating to our work and personal lives. We have a global commitment to diversity, equity, and inclusion, as we believe in the undeniable strength that diversity brings to our business, employees, clients, and communities. With us, you can always bring your full self to work. Join our inclusive, committed team to be a challenger, own outcomes, and stay curious together. Circana is proud to be Certified™ by Great Place To Work®. This prestigious award is based entirely on what current employees say about their experience working at Circana.

Learn more at www.circana.com.

What will you be doing?

We are seeking a skilled and motivated Senior Data Engineer to join a growing global team. In this role, you will design, build, and maintain robust data pipelines and infrastructure on the Azure cloud platform. You will use your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Job Responsibilities

  • ETL/ELT Pipeline Development:
    • Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow.
    • Implement batch and real-time data processing solutions using Apache Spark.
    • Ensure data quality, governance, and security throughout the data lifecycle.
  • Cloud Data Engineering:
    • Manage and optimize cloud infrastructure (Azure) for data processing workloads, with a focus on cost-effectiveness.
    • Implement and maintain CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
  • Big Data & Analytics:
    • Develop and optimize large-scale data processing pipelines using Apache Spark and PySpark.
    • Implement data partitioning, caching, and performance tuning techniques to enhance Spark-based workloads.
    • Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.
  • Workflow Orchestration (Airflow):
    • Design and maintain DAGs (Directed Acyclic Graphs) in Apache Airflow to automate complex data workflows; a minimal DAG sketch follows this list.
    • Monitor, troubleshoot, and optimize job execution and dependencies within Airflow.
  • Team Leadership & Collaboration:
    • Provide technical guidance and mentorship to a team of data engineers in India.
    • Foster a collaborative environment and promote best practices for coding standards, version control, and documentation.
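
To give a concrete feel for the Airflow and PySpark work described above, here is a minimal sketch of a DAG that submits a Spark job and then runs a data-quality check. The DAG id, schedule, and file paths are illustrative assumptions, not Circana's actual pipelines:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    default_args = {
        "owner": "data-engineering",  # illustrative owner
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
    }

    with DAG(
        dag_id="daily_sales_pipeline",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        # Submit the PySpark transformation; in practice an Azure- or
        # Databricks-specific operator could replace plain spark-submit.
        transform = BashOperator(
            task_id="transform_sales",
            bash_command="spark-submit /opt/jobs/transform_sales.py",
        )

        # Downstream data-quality gate, also hypothetical.
        validate = BashOperator(
            task_id="validate_output",
            bash_command="python /opt/jobs/validate_sales.py",
        )

        transform >> validate  # validate runs only after transform succeeds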

Requirements

  • This is a client-facing role, so strong communication and collaboration skills are vital.
  • Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow.
  • Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code.
  • Deep understanding of Spark internals, including RDDs, DataFrames, DAG execution, partitioning, and performance optimization techniques (see the PySpark sketch after this list).
  • Experience with designing and managing Airflow DAGs, scheduling, and dependency management.
  • Knowledge of CI/CD pipelines, containerization technologies (Docker, Kubernetes), and DevOps principles applied to data workflows.
  • Excellent problem-solving skills and a proven ability to optimize large-scale data processing tasks.
  • Prior experience in leading teams and working in Agile/Scrum development environments.
  • A track record of working effectively with global remote teams.
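
To indicate the expected depth on Spark partitioning, caching, and performance tuning, the sketch below shows the kind of techniques those requirements refer to; the paths and column names are illustrative assumptions:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

    # Read a Parquet dataset; filtering on the partition column lets Spark
    # prune files instead of scanning the full table (illustrative path).
    sales = spark.read.parquet("/data/sales")
    recent = sales.filter(F.col("event_date") >= "2024-01-01")

    # Repartition on the aggregation key to balance the shuffle.
    recent = recent.repartition(200, "store_id")

    # Cache a DataFrame that several downstream aggregations reuse.
    recent.cache()

    daily = recent.groupBy("event_date").agg(F.sum("amount").alias("revenue"))
    by_store = recent.groupBy("store_id").agg(F.count("*").alias("txn_count"))

    daily.write.mode("overwrite").parquet("/data/out/daily_revenue")
    by_store.write.mode("overwrite").parquet("/data/out/store_counts")

    recent.unpersist()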

Desirable:

  • Experience with data modelling and data warehousing concepts.
  • Familiarity with data visualization tools and techniques.
  • Knowledge of machine learning algorithms and frameworks.

Circana Behaviours

As well as the technical skills, experience, and attributes required for the role, our shared behaviours sit at the core of our organisation. We therefore always look for people who can champion these behaviours throughout the business in their day-to-day role:

  • Stay Curious: Being hungry to learn and grow, always asking the big questions.
  • Seek Clarity: Embracing complexity to create clarity and inspire action.
  • Own the Outcome: Being accountable for decisions and taking ownership of our choices.
  • Center on the Client: Relentlessly adding value for our customers.
  • Be a Challenger: Never complacent, always striving for continuous improvement.
  • Champion Inclusivity: Fostering trust in relationships, engaging with empathy, respect, and integrity.
  • Commit to Each Other: Contributing to making Circana a great place to work for everyone.

Location

This position can be located in the following area(s): Bracknell