Data Engineer
Tel Aviv/ Netanya, Israel
JFrog
The JFrog Platform gives you an end-to-end pipeline to control the flow of your binaries from build to production, powering your software updates all the way to the edge. At JFrog, we’re reinventing DevSecOps to help the world’s greatest companies innovate -- and we want you along for the ride. This is a special place with a unique combination of brilliance, spirit, and all-around great people. Here, if you’re willing to do more, your career can take off. And since software is central to everyone’s lives, you’ll be part of an important mission. Thousands of customers, including the majority of the Fortune 100, trust JFrog to manage, accelerate, and secure their software supply chain from code to production -- a concept we call “liquid software.” Wouldn’t it be amazing if you could join us on our journey?
JFrog is looking for an experienced Data Engineer with expertise in cloud technologies, big data, and distributed systems to join our data engineering team.
This role requires the experience and skills to design and build key components and infrastructure for our global data teams (Data Engineering, BI, Data Science). You will design, build, and maintain streaming data pipelines and data lake architectures, bringing hands-on expertise with technologies such as Apache Spark, Kafka, and cloud-based data lake implementations.
As a Data Engineer you will…
- Build infrastructure that empowers our Engineers, Data Scientists, and BI teams to follow data-processing best practices
- Work in a high-volume production environment
- Develop and manage ETL/ELT processes for structured and unstructured data
- Collaborate with colleagues both locally and in remote locations
- Influence the software architecture and working procedures for building data and analytics
- Ensure data quality, integrity, and security within the data pipeline and data lake
- Monitor, troubleshoot, and optimize data workflows to improve performance and reliability
To be a Data Engineer at JFrog you need…
- 3+ years of data/backend engineering experience, including designing, developing, and optimizing streaming data pipelines using Apache Spark, Kafka, or similar technologies
- Experience handling data in high-volume, high-availability production systems
- Hands-on work experience with Python
- Experience with cloud-based data lake architectures (AWS S3, Google Cloud Storage)
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code
- Excellent problem-solving skills and the ability to work in a collaborative environment