Senior Data Engineer

Tel Aviv-Yafo, Tel Aviv District, IL

Zesty

Kubernetes optimization platform. Accelerate performance and cost-efficiency across every layer of your Kubernetes environment.



Description

Position Overview:

We’re looking for a Senior Data Engineer to help scale our data platform and deliver reliable, high-quality data services to both internal teams and external customers. If you thrive on solving complex data challenges, collaborating with diverse stakeholders, and building scalable systems that last, we’d love to meet you.

What You'll Own:

  • Design and implement scalable ETL pipelines using Apache Spark and related technologies.
  • Build robust data services to support multiple internal teams, including product and analytics.
  • Architect end-to-end data solutions and translate them into actionable engineering plans.
  • Maintain clean, reliable data interfaces for microservices and systems requiring accurate, timely data.
  • Collaborate closely with product teams to understand data needs and co-create solutions.
  • Ensure observability, data quality, and pipeline reliability through monitoring and automated validation.
  • Participate in code reviews and architecture discussions, and mentor less experienced engineers.
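The data-quality and validation responsibility above might look like this in practice. A minimal, hypothetical sketch in plain Python — the check names, thresholds, and batch shape are illustrative assumptions, not Zesty's actual stack:

```python
from dataclasses import dataclass
from typing import Callable

# A named row-level check applied to every record in a pipeline batch.
@dataclass
class Check:
    name: str
    predicate: Callable[[dict], bool]

def validate_batch(rows: list[dict], checks: list[Check]) -> dict[str, int]:
    """Count failures per check; a non-empty result could block promotion
    of the batch to downstream consumers."""
    failures = {c.name: 0 for c in checks}
    for row in rows:
        for c in checks:
            if not c.predicate(row):
                failures[c.name] += 1
    # Report only the checks that actually failed.
    return {name: n for name, n in failures.items() if n > 0}

# Hypothetical checks and batch for illustration.
checks = [
    Check("non_null_id", lambda r: r.get("id") is not None),
    Check("non_negative_cost", lambda r: r.get("cost", 0) >= 0),
]

batch = [
    {"id": 1, "cost": 10.0},
    {"id": None, "cost": 5.0},
    {"id": 3, "cost": -2.0},
]

print(validate_batch(batch, checks))  # {'non_null_id': 1, 'non_negative_cost': 1}
```

In a real pipeline the same pattern would typically run as a validation stage after each Spark job, with the failure counts exported to monitoring so alerts fire before bad data reaches consumers.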

Requirements

  • 6+ years of experience building and maintaining production-grade ETL pipelines.
  • Hands-on experience with pipeline orchestration and transformation tools such as Airflow, Databricks, dbt, or similar.
  • Proven ability to design systems that support diverse data consumers with varying SLAs.
  • Deep understanding of data modeling, distributed systems, and cloud infrastructure.
  • Strong background in Apache Spark (PySpark or Scala).
  • Familiarity with microservices architectures and clean API/data contracts.
  • Excellent communication and collaboration skills — you're proactive, approachable, and solution-oriented.
  • Ability to think in systems: conceptualize high-level architecture and break it into components.

Nice to Have

  • Knowledge of data governance, lineage, and observability best practices.
  • Experience with real-time streaming technologies (e.g., Kafka, Flink).
  • Exposure to DevOps practices for data systems, including CI/CD, monitoring, and infrastructure-as-code.
  • Previous experience developing customer-facing data products or analytics tools.


