Senior Data Engineer
Ashkelon, South District, IL
LSports
LSports provides an innovative sports betting data API for the sports betting industry. We are a leading provider of high-quality live sports data feeds.
Description
LSports is the leading global provider of sports data, dedicated to revolutionizing the industry through innovative solutions. We excel in sports data collection and analysis, advanced data management, and cutting-edge services like AI-based sports tips and high-quality sports visualization. As the sports data industry continues to grow, LSports remains at the forefront, delivering real-time solutions.
If you're passionate about both sports and technology and want to drive the sports-tech and data industries into the future, we invite you to join the team! We are looking for a highly motivated Senior Data Engineer.
Responsibilities:
- Building a robust data collection and measurement infrastructure for the department and linked products.
- Taking ownership of major projects from inception to deployment.
- Implementing a systematic approach for saving and managing scraper data.
- Enabling efficient data access for support teams.
- Architecting simple yet flexible solutions and then scaling them as we grow.
- Establishing a strong data framework to support the future development of the department and linked products.
- Collaborating with cross-functional teams to ensure data integrity, security, and optimal performance across various systems and applications.
- Staying current with emerging technologies and industry trends to recommend and implement innovative solutions that enhance data infrastructure and capabilities.
Requirements:
- 5+ years of experience as a Data Engineer.
- 3+ years of experience using Spark.
- Experience building and maintaining production-grade data pipelines and working in distributed architectures.
- Knowledge and understanding of working in a modern CI environment: Git, Docker, Kubernetes (K8s).
- Experience with ETL/orchestration tools such as AWS Glue, Apache Airflow, or Prefect.
- Experience with big data technologies such as Iceberg, ClickHouse, or similar.
- Experience with Kafka.
Advantages:
- Experience with designing and implementing big data infrastructure.
Perks/benefits: Flex hours