Data Engineer

Netanya, Center District, IL

Fetcherr

We offer AI technology based on algo-trading methodologies that empowers our airline partners to redefine how they price flights, maximizing profit and optimizing workflow.

Description

Fetcherr, an expert in deep learning, algorithmic trading, e-commerce, and digitization, is disrupting traditional systems with its cutting-edge AI technology. At its core is the Large Market Model (LMM), an adaptable AI engine that forecasts demand and market trends with precision, empowering real-time decision-making. Starting with the airline industry, Fetcherr aims to bring its dynamic, AI-driven solutions to other industries as well.

Fetcherr is seeking a Data Engineer to build large-scale, optimized data pipelines using cutting-edge technology and tools. We're looking for someone with advanced Python skills and a deep understanding of memory and CPU optimization in distributed environments. This is a high-impact role with responsibilities that directly influence the company's strategic decisions and data-driven initiatives.

Key Responsibilities:

  • Build and optimize ETL/ELT workflows for analytics, ML models, and real-time systems
  • Implement data transformations using dbt, SQL, and Python (a sketch follows this list)
  • Work with distributed computing frameworks to process large-scale data
  • Ensure data integrity and quality across all pipelines
  • Optimize query performance in cloud-based data warehouses
  • Automate data processes using orchestration tools
  • Monitor and troubleshoot pipeline systems
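
To ground the transformation bullet above, here is a minimal sketch of a Python cleaning step of the kind the role describes. It is purely illustrative: the paths and column names (bookings.csv, booking_id, booked_at, fare, cabin_class) are hypothetical, not taken from the posting.

```python
import pandas as pd

RAW_PATH = "raw/bookings.csv"          # hypothetical input extract
OUT_PATH = "staging/bookings.parquet"  # hypothetical staging output

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Clean and type a raw bookings extract before loading."""
    df = df.dropna(subset=["booking_id"])  # rows without a key are unusable
    df["booked_at"] = pd.to_datetime(df["booked_at"], utc=True)
    df["fare"] = pd.to_numeric(df["fare"], errors="coerce")
    # Categorical dtype shrinks memory for low-cardinality string columns
    df["cabin_class"] = df["cabin_class"].astype("category")
    return df

if __name__ == "__main__":
    transform(pd.read_csv(RAW_PATH)).to_parquet(OUT_PATH, index=False)
```

In a dbt-based stack, logic like this typically lives in SQL models, with Python reserved for steps SQL expresses poorly.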

Requirements

You’ll be a great fit if you have... 

  • 4+ years of hands-on experience building and maintaining production-grade data pipelines at scale
  • Expertise in Python, with a strong grasp of data structures, performance optimization, and modern data processing libraries (e.g. pandas, NumPy)
  • Practical experience with distributed computing frameworks such as Dask or Spark, including performance tuning and memory management (illustrated in the sketch after this list)
  • Proficiency in SQL, with a deep understanding of query optimization, analytical functions, and cost-efficient query design
  • Experience designing and managing transformation logic using dbt and Dask, with a focus on modular development, testability, and scalable performance across large datasets
  • Strong understanding of ETL/ELT architecture, data modeling principles, and data validation
  • Familiarity with cloud platforms (e.g. GCP, AWS) and modern data storage formats and warehouses (e.g. Parquet, Delta Lake, BigQuery)
  • Experience with CI/CD workflows, Docker, and orchestrating workloads in Kubernetes
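
As a rough illustration of the Dask tuning mentioned above (early column projection, partition sizing, lazy aggregation), here is a minimal sketch; the dataset path and column names are hypothetical assumptions, not from the posting.

```python
import dask.dataframe as dd

# Hypothetical partitioned dataset; path and columns are illustrative only.
df = dd.read_parquet(
    "data/fares/*.parquet",
    columns=["route", "booked_at", "fare"],  # project early: less I/O, less memory
)

# Right-sized partitions keep per-worker memory predictable.
df = df.repartition(partition_size="128MB")

# Work stays lazy until compute(); only the small reduced result materializes.
daily_avg = (
    df.assign(day=df["booked_at"].dt.date)
      .groupby(["route", "day"])["fare"]
      .mean()
      .compute()
)
```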

Advantages:

  • Dagster or similar orchestration tools
  • Testing frameworks for data workflows such as pytest or Great Expectations (see the example after this list)
  • Performance optimization skills, especially for Dask/pandas
  • Cross-client solution design focusing on efficiency
  • Software architecture best practices (SOLID principles)
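
For the data-testing advantage, a plain pytest check over pipeline output might look like the sketch below; Great Expectations expresses similar assertions declaratively. The frame is inlined with hypothetical columns to keep the example self-contained.

```python
import pandas as pd
import pytest

@pytest.fixture
def bookings() -> pd.DataFrame:
    # A real suite would load actual pipeline output; an inline frame
    # keeps this sketch self-contained. Column names are hypothetical.
    return pd.DataFrame({"booking_id": [1, 2, 3], "fare": [120.0, 89.5, 240.0]})

def test_booking_id_is_unique(bookings):
    assert bookings["booking_id"].is_unique

def test_fare_is_positive_and_non_null(bookings):
    assert bookings["fare"].notna().all()
    assert (bookings["fare"] > 0).all()
```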
