Senior Data Product Performance Engineer - FanDuel, Hybrid & Remote
Cluj-Napoca, Romania
Betfair
About Betfair Romania Development:
Betfair Romania Development is the largest technology hub of Flutter Entertainment, with over 2,000 people powering the world’s leading sports betting and iGaming brands. Exciting, immersive and safe experiences are delivered to over 18 million customers worldwide from our office in Cluj-Napoca. Driven by relentless innovation and a commitment to excellence, we operate our own unbeatable portfolio of diverse proprietary brands such as FanDuel, PokerStars, SportsBet, Betfair, Paddy Power, and Sky Betting & Gaming.
Our Values:
The values we share at Betfair Romania Development define what makes us unique as a team. They empower us by giving meaning to our contributions, and they ensure that we consistently strive for excellence in everything we do. We are looking for passionate individuals who align with our values and are committed to making a difference.
Win together | Raise the bar | Got your back | Own it | Positive impact
About FanDuel:
FanDuel is a leading force in the sports-tech entertainment industry, redefining how fans engage with their favourite sports, teams, and leagues. As the premier gaming destination in North America, FanDuel operates across multiple verticals, including sports betting, daily fantasy sports, online gaming, advance-deposit wagering, and media.
About Our Poker Team at FanDuel:
There is no bigger poker platform in the world than Flutter’s PokerStars. With over 850k hands dealt every hour, over 1.85 billion tournaments hosted, and 3.4 million players across 140 countries, the PokerStars platform continues to grow! FanDuel is expanding its Poker team in North America, and we’re looking for our next Poker Star – a Poker Data Star!
The Role:
As a Senior Data Product Performance Engineer at FanDuel, you will help build pipelines that deliver data for critical products while working closely with key stakeholders.
In this role, you will partner with product managers who help shape and lay out our product roadmap; the data platform and ingestion teams who provide data for all the data teams; other data engineering teams, as we align on data delivery for our integrated products; analysts who make the data mean something; and technical project managers who ensure successful delivery of new features across our data pipelines.
We are looking for a rockstar Senior Data Engineer who is comfortable with a range of data technologies and has experience building, maintaining, and optimizing data systems while leading and driving technical solutions. The ideal candidate also communicates plans, timelines, and strategies clearly to their peers and manager. If this describes you, read on – we want to hear from you!
Key Responsibilities Include:
- Proactively monitoring internal channels and system dashboards for reported data or pipeline issues, escalating them appropriately, and ensuring timely communication and resolution of production incidents within SLAs
- Providing on-call technical assistance to analysts and other data consumers, responding to inquiries relating to query optimization, reporting anomalies, or complex data pipeline issues
- Proactively investigating & resolving data quality issues in response to system alerts or notifications from stakeholders
- Designing and implementing batch & real-time data pipelines to the data warehouse or data lake, using data transformation technologies (a minimal sketch of such a pipeline follows this list)
- Leading impactful initiatives to enhance data pipelines and other data operations capabilities whilst mentoring your juniors and peers
- Creating data tools for analytics, working with stakeholders across all departments to assist with data-related technical issues and supporting their data infrastructure needs
- Identifying & implementing platform configuration changes to address performance and scalability issues
- Designing and implementing visualization tools for data engineering metrics relating to data ingestion, data quality, and data support operations, as well as performance and availability across all data platforms
- Creating and maintaining pipelines and infrastructure documentation, including monitoring and troubleshooting steps and a knowledge base of solutions to recurring issues
- Driving operational best practices and shaping workflows for routine consumer support, operations monitoring, incident management, and problem management
- Proactively identifying workflow automations, optimizing data delivery pipelines, and re-designing infrastructure for greater scalability
- Assisting in ensuring compliance of data services and systems with internal/external audits and control practices
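For a concrete picture of the pipeline and monitoring responsibilities above, here is a minimal sketch of a monitored daily batch pipeline using Airflow (which appears later under Preferred Skillset). This is an illustration under stated assumptions, not FanDuel's actual setup: it assumes Airflow 2.4+, and every DAG, task, and pipeline name is hypothetical.

    # Minimal sketch: a monitored daily batch pipeline (assumes Airflow 2.4+).
    # All DAG, task, and pipeline names here are hypothetical illustrations.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def notify_on_failure(context):
        # Failure callback: in practice this would page on-call or post to a channel.
        ti = context["task_instance"]
        print(f"ALERT: {ti.dag_id}.{ti.task_id} failed for {context['ds']}")


    def extract(**kwargs):
        print(f"extracting raw events for {kwargs['ds']}")


    def transform(**kwargs):
        print(f"transforming partition {kwargs['ds']}")


    def load(**kwargs):
        print(f"loading partition {kwargs['ds']} into the warehouse")


    with DAG(
        dag_id="poker_hands_daily",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={
            "retries": 2,
            "retry_delay": timedelta(minutes=5),
            "on_failure_callback": notify_on_failure,  # surfaces incidents quickly
        },
    ) as dag:
        extract_t = PythonOperator(task_id="extract", python_callable=extract)
        transform_t = PythonOperator(task_id="transform", python_callable=transform)
        load_t = PythonOperator(task_id="load", python_callable=load)
        extract_t >> transform_t >> load_t

The failure callback plus retries is one common way to meet the incident-response expectations above: transient failures retry silently, while persistent ones alert a human within SLA.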
Preferred Skillset:
- Strong problem-solving skills with the ability to get to the root causes of issues and implement effective solutions
- Passion for building highly scalable and optimized data pipelines
- Strong background in Python, Java, Scala, or other Object-Oriented Programming languages
- Strong SQL background and understanding of data-warehousing techniques and solutions
- Experience with distributed systems and technologies such as Spark/PySpark (see the sketch after this list)
- Experience implementing data lake, warehouse, and lakehouse architectures, leveraging knowledge of data archiving and retrieval solutions and their access-versus-cost trade-offs
- Experience with cloud platforms and their monitoring and visualization tooling, such as AWS with CloudWatch, or Tableau
- Experience with dimensional data modelling
- Experience with real-time and batch data ingestion
- Strong understanding of data warehousing and ETL/ELT solutions and optimizations
- Experience with Databricks and Delta Lake
- Understanding of software development lifecycles and the processes to make a software project successful
- Exposure to orchestration and monitoring tools, such as Airflow and Datadog
- Experience owning and coordinating issues across multiple time zones, bringing continuity of coverage to the data org
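As a rough illustration of the Spark and Delta Lake skills above, here is a minimal PySpark sketch of a batch aggregation landing in a Delta table. It is a sketch under assumptions, not a prescribed implementation: it presumes a cluster with the Delta Lake package configured (for example, a Databricks runtime), and every path, column, and table name is hypothetical.

    # Minimal PySpark sketch: daily batch aggregation into a Delta table.
    # Assumes the Delta Lake package is configured on the cluster (e.g., Databricks).
    # All paths and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("player_daily_agg").getOrCreate()

    # Read one day's partition of raw hand events (hypothetical schema).
    raw = spark.read.parquet("s3://bucket/raw/poker_hands/ds=2024-01-01/")

    # Aggregate to one row per player per day - a typical warehouse-facing grain.
    daily = (
        raw.groupBy("player_id")
           .agg(
               F.count("hand_id").alias("hands_played"),
               F.sum("pot_size").alias("total_pot"),
           )
           .withColumn("ds", F.lit("2024-01-01"))
    )

    # Overwrite only this day's partition so reruns stay idempotent.
    (
        daily.write.format("delta")
             .mode("overwrite")
             .option("replaceWhere", "ds = '2024-01-01'")
             .save("s3://bucket/warehouse/player_daily/")
    )

The replaceWhere option is what makes backfills and reruns safe here: only the targeted partition is rewritten, rather than the whole table.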
Desired Skillset:
- Experience with data streaming patterns (a streaming sketch follows this list)
- Experience with ETL/ELT tools, such as dbt
- Exposure to database design & performance analysis
- Exposure to data quality metrics & unit testing patterns
- Exposure to Continuous Integration / Continuous Delivery tools, such as Buildkite or GitHub Actions
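To make the streaming patterns above concrete, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic and appends to a Delta table. Again a hedged illustration: it assumes the Kafka and Delta connectors are available on the cluster, and the topic, broker, and path names are hypothetical.

    # Minimal sketch: streaming ingestion with Spark Structured Streaming,
    # Kafka topic -> Delta table. Assumes the Kafka and Delta connectors are
    # on the cluster; topic, broker, and paths are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("hand_events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "poker-hand-events")
             .load()
             # Kafka delivers bytes; cast the payload for downstream parsing.
             .select(
                 F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"),
             )
    )

    # The checkpoint makes the stream restartable and, with a Delta sink,
    # gives effectively exactly-once delivery into the table.
    query = (
        events.writeStream.format("delta")
              .option("checkpointLocation", "s3://bucket/checkpoints/hand_events/")
              .outputMode("append")
              .start("s3://bucket/bronze/hand_events/")
    )
    query.awaitTermination()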
Things You Will Get Exposure To:
- Experimenting with new technologies and solving problems optimally
- Working with APIs
- Building, running and maintaining high priority data pipeline solutions
- Data governance and data testing frameworks (e.g., Alation, Great Expectations) – see the data-quality test sketch after this list
- Continuous integration and delivery of production data products
- An inclusive culture that expects excellence and prioritizes your growth as an engineer and your well-being as a person
- Mentoring junior engineers
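The data-quality testing mentioned above follows a simple pattern regardless of framework: encode expectations about a table as assertions and run them like unit tests. Here is a framework-agnostic sketch in plain pandas with a pytest-style test; the schema and checks are hypothetical, not any specific framework's API.

    # Minimal sketch of the data-quality unit-testing pattern, using plain
    # pandas + pytest rather than any specific framework's API.
    # The table schema and checks are hypothetical.
    import pandas as pd


    def check_player_daily(df: pd.DataFrame) -> list:
        # Return human-readable failures; an empty list means the batch passes.
        failures = []
        if df["player_id"].isna().any():
            failures.append("player_id contains nulls")
        if df.duplicated(subset=["player_id", "ds"]).any():
            failures.append("duplicate (player_id, ds) rows")
        if (df["hands_played"] < 0).any():
            failures.append("hands_played has negative values")
        return failures


    def test_player_daily_quality():
        # A real suite would load a sample or staging partition instead.
        df = pd.DataFrame({
            "player_id": [1, 2, 3],
            "ds": ["2024-01-01"] * 3,
            "hands_played": [12, 40, 7],
        })
        assert check_player_daily(df) == []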
As You Grow At FanDuel, You Will:
- Advance your career within well-defined, skill-based tracks, either as an individual contributor or as a manager – both providing equal opportunities for compensation and advancement
- Apply your experience and intellect as part of an autonomous team with end-to-end ownership of key components of our data architecture
- Serve as a mentor to more junior engineers not only in cultivating craftsmanship but also in achieving operational excellence – system reliability, automation, data quality, and cost-efficiency
Data engineering is a rapidly changing field – most of all, we’re looking for someone who enjoys experimenting, keeping their finger on the pulse of current data engineering tools, and always thinking about how to do something better.
Benefits:
- Hybrid & remote working options
- €1,000 per year for self-development
- Company share scheme
- 25 days of annual leave per year
- 20 days per year to work abroad
- 5 personal days/year
- Flexible benefits: travel, sports, hobbies
- Extended health, dental, and travel insurance
- Customized well-being programmes
- Career growth sessions
- Thousands of online courses through Udemy
- A variety of engaging office events
Disclaimer:
We are an inclusive employer. By embracing diverse experiences and perspectives, we create a lasting, positive impact for our employees, customers, and the communities we’re part of. You don't have to meet all the requirements listed to apply for this role. If you need any adjustments to make this role work for you, let us know, and we’ll see how we can accommodate them.
We thank all applicants for their interest; however, only the candidates who best meet the job requirements will be contacted for an interview.
By submitting your application online, you agree that your details will be used to progress your application for employment. If your application is successful, your details will be used to administer your personnel record. If your application is unsuccessful, we will retain your details for a period no longer than three years, to consider you for prospective roles within the company.