Senior Data Engineer

Ramat Gan, Tel Aviv District, IL


Description

Who are we?

PlaxidityX is a global leader in the automotive cybersecurity industry. We protect drivers and manufacturers from cyber attacks on their vehicles, using top-notch technology, with several products for both inside and outside the car.

The Data Team at PlaxidityX

We are the backbone of data operations at PlaxidityX, entrusted with managing every aspect of data flow within the organization, from data ingestion and processing to generating valuable insights. Currently composed of two highly skilled data engineers, we are eager to welcome a third member who shares our passion for transforming data into actionable intelligence.

Why PlaxidityX?

  • You can be part of a leading company in the automotive industry
  • You can help save lives
  • You can work with cool, challenging technology
  • You can make an impact and help change the world

Responsibilities

  • Lead development projects for critical, high-availability, cloud-scale services and APIs
  • Support clients handling large amounts of data, with scalability in mind
  • Take part in all development stages – from design to deployment
  • Develop and deploy real-time/batch data processing pipelines using the latest technologies
  • Design and build high-availability, cloud-scale data pipelines (ETLs)

Requirements

  • 3+ years of experience in large-scale, distributed, server-side backend development
  • 3+ years of experience developing in Scala
  • Extensive experience in stream and batch big data pipeline processing using Apache Spark
  • Experience with Linux, Docker, and Kubernetes
  • Experience working with cloud providers (e.g., AWS, GCP)
  • A team player, highly motivated, and a fast learner
  • Ability to assume ownership of goals and products
  • Passion for designing scalable, distributed, and robust platforms and analytics tools

Advantages

  • Experience developing with Node.js (preferably TypeScript), Python, or Groovy
  • Experience in stream and batch big data pipeline processing using Apache Flink
  • Experience with Kafka, Airflow, MongoDB, Elasticsearch, HDFS, or similar technologies
  • Experience with system monitoring (Prometheus, InfluxDB, or similar)
  • Experience in independently managing development projects from scratch to production
  • Experience in microservices architecture and flexible system design

Perks/benefits: Flex hours
