Data Engineer
Tel Aviv-Yafo, Tel Aviv District, IL
CHEQ
Secure your data and analytics, on-site conversion, and paid marketing from bots and invalid users with CHEQ, the leading Go-to-Market Security platform.
Description
CHEQ is the global leader in Go-to-Market Security, trusted by over 15,000 customers worldwide to protect every aspect of their marketing, sales, and data operations from bots, fake users, fraud, and cyber attacks.
Powered by award-winning cybersecurity technology, CHEQ offers the broadest suite of solutions for securing the entire funnel, from paid marketing to on-site conversion, data, and analytics.
CHEQ is a global company with offices in Tel Aviv, New York, Tokyo, and London.
We are seeking an experienced Data Engineer with strong expertise in columnar databases and database engineering to design and maintain our main big data database and pipelines. This role involves significant hands-on database engineering and DBA work within an AWS environment. The ideal candidate will have a deep understanding of columnar databases and experience handling big data environments.
Responsibilities:
- Design, implement, and maintain robust, scalable data pipelines and database solutions.
- Optimize and manage large-scale data systems, focusing on columnar databases, specifically ClickHouse.
- Collaborate with stakeholders across the company, including product managers, developers, and data scientists, to deliver team tasks with high quality.
- Handle database engineering tasks, including schema design, query optimization, and database administration in AWS environments (see the sketch after this list).
- Provide support for big data analytics by ensuring the reliability and efficiency of data storage and retrieval processes.
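To make the ClickHouse-focused responsibilities above more concrete, here is a minimal, hypothetical sketch of the kind of schema design and query optimization work involved. The table name, columns, and host are illustrative assumptions, not CHEQ's actual schema or infrastructure; it assumes the open-source clickhouse-driver Python package and a reachable ClickHouse server.

```python
# Hypothetical sketch only: table name, columns, and host are assumptions,
# not CHEQ's actual schema or infrastructure.
from clickhouse_driver import Client

client = Client(host="localhost")  # assumes a locally reachable ClickHouse server

# Columnar schema design: a MergeTree table with a partition key for pruning
# by day, and an ORDER BY that matches the dominant query pattern
# (per-site lookups over a time range).
client.execute("""
    CREATE TABLE IF NOT EXISTS events (
        event_time DateTime,
        site_id    UInt32,
        user_agent String,
        is_bot     UInt8
    )
    ENGINE = MergeTree
    PARTITION BY toDate(event_time)
    ORDER BY (site_id, event_time)
""")

# Query optimization: filtering on the partition and sort keys lets ClickHouse
# skip whole partitions and granules instead of scanning the full table.
rows = client.execute("""
    SELECT site_id, countIf(is_bot = 1) AS bot_events, count() AS total_events
    FROM events
    WHERE event_time >= now() - INTERVAL 1 DAY
    GROUP BY site_id
    ORDER BY bot_events DESC
    LIMIT 10
""")
print(rows)
```

The design choice the sketch illustrates: in a columnar store like ClickHouse, the ORDER BY key is the primary lever for query performance, so it should be chosen to match the most common filter columns.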
Requirements:
- 4+ years of hands-on experience as a Data Engineer, with a focus on database engineering and administration.
- Proven expertise with columnar databases is a must; ClickHouse experience is a significant advantage.
- Experience building and optimizing ‘big data’ pipelines, architectures, and datasets.
- Experience with data pipeline technologies such as Spark, Kafka, Hadoop, Amazon Kinesis, or Apache Airflow is a plus (a minimal Airflow sketch follows this list).
- Working knowledge of Python (for data-related implementation tasks).
- Innovative, proactive, and independent, with strong problem-solving skills.
- A team player with a “can-do” attitude.
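As a purely illustrative example of the pipeline tooling named in the requirements above, here is a minimal Apache Airflow DAG sketch in Python. The DAG id, schedule, and task bodies are assumptions for illustration, not an actual CHEQ pipeline.

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and task bodies are
# illustrative assumptions, not an actual CHEQ pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: in practice this might pull a batch from Kafka or Kinesis.
    return [{"site_id": 1, "is_bot": 0}]


def load_to_clickhouse(**context):
    # Placeholder: in practice this would insert the batch into ClickHouse.
    batch = context["ti"].xcom_pull(task_ids="extract_events")
    print(f"loading {len(batch)} rows")


with DAG(
    dag_id="events_hourly",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_clickhouse", python_callable=load_to_clickhouse)
    extract >> load
```

The extract and load tasks communicate via Airflow's XCom mechanism here; a production pipeline would typically stage larger batches in object storage rather than passing data through XCom.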