Data Backend Engineer
Tel Aviv-Yafo, Tel Aviv District, IL
Lusha
Trusted by 1.5M+ users, Lusha uncovers business opportunities with accurate, fully compliant global B2B data, AI-driven insights, and smart recommendations.
Description
Founded in 2016, Lusha grew from a bootstrapped startup to a $1.5B unicorn, backed by $245M in investments and trusted by sales teams at Google, Zendesk, and Yotpo.
Lusha is an AI-powered sales intelligence platform changing the B2B sales experience with a new approach called Sales Streaming, which lets salespeople spend most of their time selling instead of wasting time and effort on manual prospecting.
With 1.5M+ users, 200M+ contacts, and 40K new signups every month, we’re the engine behind modern GTM teams. And we’re just getting started.
Where does this role fit in our vision?
As a Data Backend Engineer, you’ll play a pivotal role in building the data infrastructure that fuels our AI-powered Sales Streaming engine. You’ll help ensure that billions of records flow seamlessly through robust pipelines and reach our users as high-quality, enriched data—enabling sales teams worldwide to sell smarter and faster.
What will you be responsible for?
- Design and build distributed data systems that are the backbone of our product innovation.
- Architect and implement high-throughput data pipelines capable of handling billions of records with speed and reliability.
- Develop custom algorithms for deduplication, data merging, and real-time data updates.
- Optimize storage, indexing, and retrieval strategies to manage massive datasets efficiently.
- Solve deep engineering challenges in distributed computing environments like Spark, EMR, and Databricks.
- Build fault-tolerant, highly available data infrastructure with integrated monitoring and observability.
- Partner closely with ML engineers, backend developers, and product managers to turn business needs into scalable, production-grade features.
Here’s what we need from you:
- 4+ years of hands-on experience in backend or data engineering with a strong record of building production-ready systems.
- Expertise in Python (or Java/Scala), with a deep understanding of data structures, algorithms, and performance trade-offs.
- Proven experience designing and optimizing large-scale distributed pipelines using Spark, EMR, Databricks, Airflow, or Kubernetes.
- Solid command of a range of storage engines: relational (PostgreSQL, MySQL), document (MongoDB), time-series/search (ClickHouse, Elasticsearch), and key-value stores (Redis).
- Familiarity with workflow orchestration tools like Airflow, Dagster, or Prefect.
- Hands-on experience with message brokers such as Kafka or RabbitMQ for building event-driven systems.
- Strong foundation in software engineering best practices including CI/CD, automated testing, monitoring, and scalable system design.
- Bonus: Experience in building and launching end-to-end data products central to business operations.
- Bonus: Comfortable experimenting with AI tools and large language models for automation and data enrichment, and staying ahead of emerging trends.
We’re dreamers, innovators, and learners, driven by simplicity, collaboration, and trust.
At Lusha, your work matters. Your voice is heard. And your growth is part of our growth.
Ready to join us? Let’s build the future of sales, together.