Senior Data Engineer
Tel Aviv-Yafo, Tel Aviv District, Israel
Autofleet
The end-to-end software platform for optimized, reliable, and sustainable transportation services for fleets and mobility operators. We are making the future of mobility come to life, starting today.
At Autofleet we support the world’s largest vehicle fleet operators and transportation providers to optimize existing operations and seamlessly launch new, dynamic business models - driving efficient operations and maximizing utilization.
At the heart of our platform lies the data infrastructure, driving advanced machine learning models and optimization algorithms. As the owner of data pipelines, you'll tackle diverse challenges spanning optimization, prediction, modeling, inference, transportation, and mapping.
As a Senior Data Engineer, you will play a key role in owning and scaling the backend data infrastructure that powers our platform—supporting real-time optimization, advanced analytics, and machine learning applications.
What You'll Do
- Design, implement, and maintain robust, scalable data pipelines for batch and real-time processing using Spark and other modern tools.
- Own the backend data infrastructure, including ingestion, transformation, validation, and orchestration of large-scale datasets.
- Leverage Google Cloud Platform (GCP) services to architect and operate scalable, secure, and cost-effective data solutions across the pipeline lifecycle.
- Develop and optimize ETL/ELT workflows across multiple environments to support internal applications, analytics, and machine learning use cases.
- Build and maintain data marts and data models with a focus on performance, data quality, and long-term maintainability.
- Collaborate with cross-functional partners, including development teams, product managers, and external stakeholders, to translate data requirements into scalable solutions.
- Help drive architectural decisions around distributed data processing, pipeline reliability, and scalability.
Requirements
- 4+ years in backend data engineering or infrastructure-focused software development.
- Proficient in Python, with experience building production-grade data services.
- Solid understanding of SQL.
- Proven track record designing and operating scalable, low-latency data pipelines (batch and streaming).
- Experience building and maintaining data platforms, including lakes, pipelines, and developer tooling.
- Familiar with orchestration tools like Airflow, and modern CI/CD practices.
- Comfortable working in cloud-native environments (AWS, GCP), including containerization (e.g., Docker, Kubernetes).
- Bonus: Experience working with GCP.
- Bonus: Experience with data quality monitoring and alerting.
- Bonus: Strong hands-on experience with Spark for distributed data processing at scale.
- Degree in Computer Science, Engineering, or related field.