DataOps Engineer

Tel Aviv-Jaffa, Tel Aviv District, IL

Wiliot

Wiliot is a Sensing as a Service platform powered by IoT Pixels

Description

Wiliot was founded by the team that invented one of the technologies at the heart of 5G. Their next vision was to develop an IoT sticker, a computing element that powers itself by harvesting radio-frequency energy, bringing connectivity and intelligence to everyday products and packaging, things previously disconnected from the IoT. This revolutionary mixture of cloud and semiconductor technology is being used by some of the world’s largest consumer, retail, food and pharmaceutical companies to change the way we make, distribute, sell, use and recycle products.

Our investors include SoftBank, Amazon, Alibaba, Verizon, NTT DoCoMo, Qualcomm and PepsiCo.

We are seeking an experienced DataOps Engineer to streamline and optimize our data operations, enabling robust and scalable data workflows. This role involves working at the intersection of data engineering, DevOps, and infrastructure to design, implement, and manage automated, reliable, and high-performance data systems.

Responsibilities

  • Data Pipeline Development and Maintenance: Design, deploy, and optimize automated, scalable data pipelines using platforms like Databricks and Apache Spark (see the sketch after this list).
  • Infrastructure Management: Build and maintain resilient cloud infrastructure on AWS and GCP to support data engineering and analytics workflows.
  • Infrastructure as Code (IaC): Automate infrastructure provisioning and configuration management using tools like Terraform and Terragrunt.
  • Database Administration: Manage, optimize, and secure SQL and NoSQL databases, ensuring high availability and performance for analytics and transactional systems.
  • Containerization and Orchestration: Deploy and manage containerized applications and data processing jobs using Kubernetes.
  • Monitoring and Reliability: Implement monitoring and observability tools to ensure data pipelines, platforms, and infrastructure are reliable, scalable, and performant.
  • Collaboration with Teams: Work closely with data engineering and analytics teams to ensure seamless integration between data workflows and infrastructure.
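
For illustration, here is a minimal sketch of the kind of automated pipeline described in the first responsibility, written with PySpark. The bucket paths, column names, and schema are hypothetical placeholders; a production Databricks job would add scheduling, checkpointing, and alerting around this core.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical example: roll raw sensor events up into hourly aggregates.
    # All paths, columns, and names below are illustrative placeholders.
    spark = SparkSession.builder.appName("sensor-events-rollup").getOrCreate()

    # Assumes a landing zone of JSON events with an event_time timestamp column.
    raw = spark.read.json("s3://example-bucket/raw/sensor-events/")

    hourly = (
        raw.withColumn("hour", F.date_trunc("hour", F.col("event_time")))
           .groupBy("pixel_id", "hour")
           .agg(
               F.count("*").alias("event_count"),
               F.avg("temperature_c").alias("avg_temperature_c"),
           )
    )

    # Write partitioned Parquet for downstream analytics consumers.
    hourly.write.mode("overwrite").partitionBy("hour").parquet(
        "s3://example-bucket/curated/sensor-events-hourly/"
    )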


Requirements

  • Cloud Platforms: Proficiency in AWS and GCP services and best practices.
  • Infrastructure as Code: At least 3 years of experience with Terraform for automating infrastructure deployments.
  • Containerization: Strong knowledge of Docker and Kubernetes for container orchestration.
  • Data Processing: Familiarity with data processing frameworks such as Apache Spark, Apache Flink, and platforms like Databricks.
  • Database Management: Experience with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., Cassandra) databases.
  • High-Load Systems: Proven experience managing and optimizing high-load production environments.
  • Scripting and Automation: Proficiency in scripting languages such as Python or Bash for automation tasks.
  • Version Control: Experience with GitHub for version control and CI/CD pipelines.
  • Monitoring Tools: Experience with monitoring and logging tools such as Prometheus and Grafana (see the instrumentation sketch after this list).
  • Security Best Practices: Knowledge of implementing security measures and compliance standards.
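
As a hedged illustration of the monitoring requirement above: a Python pipeline task can expose metrics for Prometheus to scrape and Grafana to chart via the prometheus_client library. The metric names, port, and workload below are assumptions made for this sketch, not an established convention.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metric names; a real deployment would follow team conventions.
    RECORDS_PROCESSED = Counter(
        "pipeline_records_processed_total", "Records processed by the pipeline"
    )
    BATCH_SECONDS = Histogram(
        "pipeline_batch_duration_seconds", "Wall-clock time per batch"
    )

    def process_batch() -> None:
        # Placeholder for real work, e.g. a Spark job submission or a DB load.
        time.sleep(random.uniform(0.1, 0.5))
        RECORDS_PROCESSED.inc(100)

    if __name__ == "__main__":
        start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
        while True:
            with BATCH_SECONDS.time():
                process_batch()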


#LI-Hybrid
