Data Engineer
Ashkelon, South District, IL
LSports
LSports provides an innovative sports betting data API for the sports betting industry. We are a leading provider of high-quality live sports data feeds.
LSports is the leading global provider of sports data, dedicated to revolutionizing the industry through innovative solutions. We excel in sports data collection and analysis, advanced data management, and cutting-edge services like AI-based sports tips and high-quality sports visualization. As the sports data industry continues to grow, LSports remains at the forefront, delivering real-time solutions. If you share our passion for sports and technology and have the drive to advance the sports-tech and data industries, we invite you to join our team!
We are looking for a highly motivated Data Engineer.
About the team: Data Integrity
LSports Data Integrity is one of the main pillars of the company's offering and long-term strategy.
We are pushing the boundaries of real-time analysis, utilizing machine learning and artificial intelligence to find the delicate balance between low latency and data accuracy.
Responsibilities:
- Building production-grade data pipelines and services
- Designing and building a data lake/lakehouse
- Taking ownership of major projects from inception to deployment
- Architecting simple yet flexible solutions, and then scaling them as we grow
- Collaborating with cross-functional teams to ensure data integrity, security, and optimal performance across various systems and applications
- Staying current with emerging technologies and industry trends to recommend and implement innovative solutions that enhance data infrastructure and capabilities
Requirements
- 3+ years of experience delivering production-grade data pipelines and backend services
- 2+ years of experience using PySpark
- Experience building data pipelines and working with distributed architectures
- Experience with SQL and NoSQL databases
- Working knowledge of a modern CI environment: Git, Docker, Kubernetes
- Experience with ETL tools: AWS Glue/Apache Airflow/Prefect, etc.
- Experience with Snowflake/Databricks/BigQuery or similar
- Experience with Kafka
- Experience with designing and implementing data lake/warehouse – Advantage
Perks/benefits: Flex hours