Senior Data DevOps Engineer - FanDuel
Cluj-Napoca, Romania
Betfair
We are the largest technology hub of Flutter Entertainment, with over 2,000 people powering the world's leading sports betting and iGaming brands.
FanDuel Group is a world-class team of brands and products, all built with one goal in mind: to give fans new and innovative ways to interact with their favorite games, sports, teams, and leagues. That's no easy task, which is why we're so dedicated to building a winning team. And make no mistake, we are here to win, but we believe in winning right.
Our roster has an opening with your name on it.
The Data DevOps Platform Engineering Team at FanDuel is looking for a hardworking and passionate Senior DevOps Engineer to support and enhance our cloud architecture.
As a Senior DevOps Engineer, you will work with internal stakeholders, engineers, data scientists, cloud platform engineers, and other technologists across the business. The position requires experience in cloud architecture, CI/CD, Infrastructure-as-Code (IaC), and maintaining operational platforms.
THE GAME PLAN
Everyone on our team has a part to play.
Responsibilities:
Design and manage cloud infrastructure using Terraform, with a focus on scalability, resilience, and security.
Provide technical guidance to the DevOps team, fostering a culture of continual learning.
Enhance infrastructure performance, monitor operations, and tackle bottlenecks proactively.
Set up and maintain CI/CD pipelines, implementing automated testing and release processes.
Explore and recommend tools and technologies to improve the DevOps workflow.
Build, maintain, optimize, and scale FanDuel's data infrastructure.
Improve operations and processes through automation and advanced tooling.
Partner with data engineers, promoting best practices and efficient infrastructure use.
Plan for the growth of FanDuel's data infrastructure, preparing for future challenges.
What we're looking for in our next teammate:
Strong experience working on cloud platforms, focusing on security, performance, and cost.
Exceptional problem-solving and communication skills, facilitating effective collaboration with cross-functional teams.
Mastery of infrastructure automation using Terraform, with a deep understanding of its capabilities.
Strong understanding of cloud platforms (AWS preferred) and effective cloud resource management.
Excellent scripting and automation skills using languages like Python, Bash, or PowerShell.
In-depth understanding of containerization technologies like Docker and orchestration tools such as Kubernetes.
Solid grasp of networking concepts and security principles.
Experience with Git-based development, deployment, and support of data processes and procedures.
High-level understanding of batch processing and stream processing systems.
Solid experience with tools for performance monitoring and troubleshooting.
Proven expertise in cloud operational monitoring and administration, providing overall system visibility and observability with Datadog, CloudWatch, and similar tools.
Desired Characteristics:
A successful candidate has technical depth, hands-on implementation experience with practices and tools across the DevOps toolchain, and working knowledge of data technologies such as Airflow, EMR, Spark, and HDFS.
The Senior DevOps Engineer is comfortable rolling up their sleeves to design and code modules for infrastructure, applications, and processes.
A systems engineering or developer background, with the ability to learn quickly and share your knowledge with the broader team.
An "automate everything" mindset, backed by experience that demonstrates it.
Superior communication and collaboration skills, proven by a history of successful cross-team initiatives.
Desired Technology Experience:
3-5 years of Linux experience with Python, Perl, or shell scripting
3-5 years of AWS (IAM, S3, KMS, CloudFormation, VPC, Lambda, Security Groups, SNS, RDS, EMR)
3-5 years of GitHub and CI/CD tooling (Buildkite, Jenkins)
3-5 years of configuration management tools: Puppet or similar (Ansible, Chef)
2-4 years of Infrastructure as Code (Terraform, CloudFormation)
2-3 years of Docker, container orchestration (Kubernetes, EKS, ECS), and the Helm package manager
1-2 years of Apache Airflow, Databricks, Apache Spark, dbt, Kafka
Nice to have:
1-3 years of messaging technologies: SQS, Kafka, RabbitMQ, or similar
1-3 years of monitoring solutions such as Datadog and CloudWatch
What you can expect:
- 25 days of annual leave
- ShareSave scheme and "Flexible Benefits" of your choice
- Private health insurance (includes dental insurance and health assessments)
- Excellent development opportunities, including thousands of online courses through Udemy
- Working from home options
We thank all applicants for their interest; however, only suitable candidates will be contacted for an interview. By submitting your application online, you agree that your details will be used to progress your application for employment. If your application is successful, your details will be used to administer your personnel record. If your application is unsuccessful, we will retain your details for no longer than two years in order to consider you for prospective roles within our company.