Data Engineer III
Bengaluru, Karnataka, India
Amagi
Channel creation, content distribution, and CTV advertising solutions for FAST, OTT, and broadcast TV in one convenient platform.
About Amagi
We are a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute, and monetize live linear channels on Free Ad-supported Streaming TV and video services platforms. Amagi also offers 24x7 cloud-managed services, bringing simplicity, advanced automation, and transparency to the entire broadcast operation. Overall, Amagi supports 700+ content brands, 800+ playout chains, and over 2,500 channel deliveries on its platform in over 40 countries. Amagi has a presence in New York, Los Angeles, Toronto, London, Paris, Melbourne, Seoul, and Singapore, with broadcast operations in New Delhi and an innovation centre in Bangalore.
For more information visit us at www.amagi.com
Amagi Monetise
Amagi Monetise group focuses on building products that help our customers monetise across different streaming segments – FAST (Free Ad-supported Streaming TV), VoD (Video on Demand), and Live Events. The group comprises several products, among them the Amagi Data Platform.
Amagi Data Platform is the central data platform for Amagi. It enables use cases such as analytics and ML, and offers customers critical insights across content, advertising, billing, and more. It is a highly scalable platform that ingests multiple TBs of data per day and makes it available to end users in near real time.
Team
The team is responsible for building the new data platform from scratch, enriching the Amagi product portfolio and giving customers rich analytics on the streaming behaviour of their channels, platforms, and deliveries across regions and devices. It builds an insightful dashboard that surfaces trending metrics across channel viewership, content analytics, and ads for both linear and VOD channels, made possible by crunching millions of viewership hours from TBs of viewer heartbeat logs. The team creates efficient, cost-effective, scalable, and manageable data pipelines that produce strongly typed data models, serving millions of data points to the viewport quickly; a minimal illustration of this kind of heartbeat rollup follows.
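As a purely illustrative sketch (the input path, column names, and heartbeat semantics below are assumptions, not Amagi's actual schema), a PySpark job of this kind might roll raw heartbeats up into hourly viewership metrics:

```python
# Hypothetical PySpark job: roll up raw viewer heartbeats into
# viewership hours per channel/region/device. The input path and
# column names are illustrative assumptions, not Amagi's schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("heartbeat-rollup").getOrCreate()

heartbeats = spark.read.json("s3://example-bucket/heartbeats/dt=2024-01-01/")

viewership = (
    heartbeats
    .withColumn("hour", F.date_trunc("hour", F.col("event_ts")))
    .groupBy("channel_id", "region", "device_type", "hour")
    .agg(
        # Each heartbeat is assumed to carry the seconds of watch time
        # it represents; summing and dividing yields viewership hours.
        (F.sum("heartbeat_interval_sec") / 3600.0).alias("viewership_hours"),
        F.approx_count_distinct("session_id").alias("unique_sessions"),
    )
)

# Write a strongly typed, partitioned table for the dashboards to query.
viewership.write.mode("overwrite").partitionBy("hour").parquet(
    "s3://example-bucket/marts/viewership_hourly/"
)
```

Partitioning the output by hour keeps dashboard queries cheap, since they typically scan only the most recent partitions.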
Role reporting into: Director, Data
Location: Bangalore, India
Key Responsibilities:
- Take complete ownership of and accountability for feature requirements from conception to delivery, and continue to manage, sustain, and optimize the system.
- Build, deploy, and maintain a highly scalable data pipeline framework that enables our developers to build multiple data pipelines from different kinds of data sources.
- Collaborate with the product, business, design, and engineering functions to stay on top of your team's deliverables and milestones.
- Deliver highly reliable and scalable engineering architecture on time, along with high-quality, maintainable, and operationally excellent code for your team.
- Lead design discussions and code reviews.
- Set up best practices, gatekeeping, guidelines, and standards in the team.
- Identify and resolve performance and scalability issues.
Requirements
Must haves
- Bachelor's/Master's degree in Computer Science with 6+ years of overall experience.
- Excellent technical and communication skills to mentor the engineers on your team.
- Knowledge of the full data platform stack, from ingestion through processing to warehousing, with deep expertise in at least one of these areas.
- Deep understanding of ETL frameworks, e.g. Spark or equivalent systems.
- Deep understanding of OLAP systems and data modeling approaches such as star and snowflake schemas; a minimal schema sketch follows this list.
- Deep understanding of at least one ETL or orchestration technology such as Airflow, Databricks, Trino, Presto, or Hive; an Airflow sketch also follows this list.
- Experience building observability with technologies such as logging, Datadog, Prometheus, Sentry, Grafana, Splunk, EKS, etc.
- Strong experience in Scala or Python.
- Strong knowledge of public clouds (AWS, GCP, etc.) is preferred.
- Must-have experience: 2+ years in technical leadership roles.
- Databricks knowledge is good to have.
- Experience in architecting, designing, and building scalable big data analytics pipelines that ingest TBs of data per day.
- Experience in optimizing Spark, SQL, and data pipelines, and in troubleshooting them.
- Strong debugging skills: finding the root cause, a workaround, a resolution, and long- and short-term mitigations.
- Strong experience with Agile development methodologies: planning, breaking down, and estimating feature requirements as epics, stories, and subtasks.
- Delegate work to the team and unblock them. Raise and mitigate risks.
- Review code and test strategy. Establish strong practices in the team around coding standards and testing.
- Ability to work independently and across team dependencies. Ability to build relationships and resolve conflicts.
- Good to have: 2+ years of experience in ad tech, media, or streaming.
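To make the star-schema requirement above concrete, here is a minimal, hypothetical example in Spark SQL: a viewership fact table surrounded by channel and date dimensions (all table and column names are invented for illustration):

```python
# Minimal star schema in Spark SQL: one fact table joined to its
# dimension tables. All names are hypothetical, for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_channel (
        channel_key   BIGINT,
        channel_name  STRING,
        content_brand STRING
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key      INT,      -- e.g. 20240101
        calendar_date DATE,
        is_weekend    BOOLEAN
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_viewership (
        channel_key      BIGINT,  -- FK -> dim_channel
        date_key         INT,     -- FK -> dim_date
        region           STRING,
        viewership_hours DOUBLE,
        ad_impressions   BIGINT
    ) USING parquet
""")

# A typical OLAP query joins the fact to its dimensions and aggregates.
spark.sql("""
    SELECT c.channel_name, d.calendar_date,
           SUM(f.viewership_hours) AS hours
    FROM fact_viewership f
    JOIN dim_channel c ON f.channel_key = c.channel_key
    JOIN dim_date    d ON f.date_key    = d.date_key
    GROUP BY c.channel_name, d.calendar_date
""").show()
```

A snowflake schema would go one step further and normalize the dimensions themselves, e.g. splitting content_brand out of dim_channel into its own table.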
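And on the orchestration side, a minimal Airflow DAG (the DAG id, schedule, and task callables are all hypothetical) might chain ingestion and aggregation like this:

```python
# Minimal Airflow DAG chaining ingest -> transform. Everything here
# (dag_id, schedule, callables) is a hypothetical illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_heartbeats(**context):
    print("pull raw heartbeat logs for", context["ds"])


def build_viewership_mart(**context):
    print("aggregate heartbeats into the viewership mart for", context["ds"])


with DAG(
    dag_id="viewership_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_heartbeats",
        python_callable=ingest_heartbeats,
    )
    transform = PythonOperator(
        task_id="build_viewership_mart",
        python_callable=build_viewership_mart,
    )

    # The mart is rebuilt only after the day's raw logs have landed.
    ingest >> transform
```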
Perks/benefits: Team events