Data Engineer
Tel Aviv-Yafo, Tel Aviv District, IL
LSports
LSports provides an innovative sports betting data API for the sports betting industry. We are a leading provider of high-quality live sports data feeds.
Description
LSports is the leading global provider of sports data, dedicated to revolutionizing the industry through innovative solutions. We excel in sports data collection and analysis, advanced data management, and cutting-edge services like AI-based sports tips and high-quality sports visualization. As the sports data industry continues to grow, LSports remains at the forefront, delivering real-time solutions. If you share our passion for sports and technology and have the drive to advance the sports-tech and data industries, we invite you to join our team!
We are looking for a highly motivated Data Engineer.
About the team: Data Integrity
LSports Data Integrity is one of the main pillars of the company's offering and long-term strategy.
We are pushing the boundaries of real-time analysis, utilizing machine learning and artificial intelligence to find the delicate balance between low latency and data accuracy.
Responsibilities:
- Building production-grade data pipelines and services
- Designing and building a data lake/lakehouse
- Taking ownership of major projects from inception to deployment
- Architecting simple yet flexible solutions, and then scaling them as we grow
- Collaborating with cross-functional teams to ensure data integrity, security, and optimal performance across various systems and applications
- Staying current with emerging technologies and industry trends to recommend and implement innovative solutions that enhance data infrastructure and capabilities
Requirements:
- 3+ years of experience delivering production-grade data pipelines and backend services
- 2+ years of experience using Spark/Presto/Trino or similar
- Experience building data pipelines and working in distributed architectures
- Experience with SQL and NoSQL databases
- Experience designing and implementing a data lake/warehouse
- Experience with Snowflake/Databricks/SageMaker or similar - Advantage
- Knowledge of modern CI environments: Git, Docker, Kubernetes - Advantage
- Experience with ETL tools such as AWS Glue and Apache Airflow - Advantage
- Experience with Kafka - Advantage
- Experience with Kafka Connect (Java) - Advantage
Perks/benefits: Flex hours