Senior Data Engineer
Bangalore, India
Nielsen
A global leader in audience insights, data and analytics, Nielsen shapes the future of media with accurate measurement of what people listen to and watch.
ABOUT NIELSEN
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
JOB SUMMARY
We are looking for self-motivated, creative thinkers who are flexible and enjoy working in teams. At Nielsen we develop creative analytical solutions that help companies optimize their marketing and communications budgets. Analytical and econometric approaches form the backbone of these solutions, answering questions like "What media mix works best to communicate my brand message?" and "What budget should I invest in media and marketing?". Our software tools and other decision support solutions combine market research, data, modeling results and technical business intelligence.
Our dedicated data engineering team owns the entire lifecycle of our data processing platform. This platform empowers colleagues from diverse departments, including those without extensive technical expertise, to independently build and manage end-to-end Airflow data processing pipelines. Built on scalable Spark clusters on AWS, it offers an intuitive, virtually no-code interface for assembling, transforming, and serializing data. This capability is central to supporting our media planning solutions, democratizing data access and manipulation, and speeding up the delivery of insights without requiring coding proficiency. The team has an open culture, works in an agile style, and cooperates closely with software developers and colleagues from other disciplines, such as data scientists and client-facing solution managers. You will have the opportunity to develop yourself in areas like big data, cloud computing, data lake architecture and data orchestration.
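To give a flavour of what a declarative, no-code pipeline builder involves, here is a minimal sketch: a JSON-like spec (the kind a UI might emit) interpreted against a registry of named transformation steps. All names and the spec format are hypothetical illustrations, not the actual platform's API.

```python
# Hypothetical sketch of a declarative pipeline interpreter.
# The spec format and step names are invented for illustration;
# they do not describe the real platform.
from typing import Any, Callable

Rows = list[dict[str, Any]]

# Registry of named transformations a no-code UI could reference.
STEPS: dict[str, Callable[[Rows, dict[str, Any]], Rows]] = {
    "filter": lambda rows, p: [r for r in rows if r.get(p["column"]) == p["value"]],
    "select": lambda rows, p: [{k: r[k] for k in p["columns"]} for r in rows],
}

def run_pipeline(rows: Rows, spec: dict[str, Any]) -> Rows:
    """Apply each step of the declarative spec in order."""
    for step in spec["steps"]:
        rows = STEPS[step["op"]](rows, step["params"])
    return rows

spec = {
    "steps": [
        {"op": "filter", "params": {"column": "medium", "value": "tv"}},
        {"op": "select", "params": {"columns": ["campaign", "spend"]}},
    ]
}

data = [
    {"campaign": "spring", "medium": "tv", "spend": 120},
    {"campaign": "spring", "medium": "radio", "spend": 40},
]
print(run_pipeline(data, spec))  # [{'campaign': 'spring', 'spend': 120}]
```

In a production setting each step would map onto a Spark transformation scheduled by Airflow rather than a Python lambda, but the interpreter pattern is the same.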
ROLES & RESPONSIBILITIES
- Assume full responsibility for the entire lifecycle of the "data processing platform."
- Maintain and enhance a transformative platform that empowers non-technical colleagues to independently build and manage end-to-end Airflow data processing pipelines.
- Utilize the scalability of Spark clusters on AWS to ensure platform performance and reliability.
- Maintain and improve an intuitive, virtually no-code interface for data assembly, transformation, and serialization.
- Support sophisticated media planning solutions through the data processing platform.
- Democratize data access and manipulation across diverse departments.
- Expedite the delivery of valuable insights by enabling self-service data processing.
- Work in an agile environment, collaborating closely with software developers, data scientists, and client-facing solution managers.
- Adhere to software engineering best practices in all aspects of platform development and maintenance.
- Apply distributed computational engines such as Spark, Presto, and Trino to data processing tasks.
- Work with cloud platforms, particularly AWS, and potentially Azure and Google Cloud Platform (GCP).
- Employ containerization technologies (Docker, Kubernetes, etc.) for deployment and management of platform components.
- Implement and maintain continuous integration pipelines for platform updates and deployments.
- Interact effectively with Data Science teams to understand their data processing needs and requirements.
- Design and implement scalable data processing pipelines capable of efficiently handling data volumes from megabytes to terabytes while consistently applying business rules.
- Continuously learn and develop skills in areas such as big data, cloud computing, data lake architecture, and data orchestration.
- Collaborate effectively as a team player, demonstrating flexibility, a proactive approach, and pragmatism.
- Apply excellent problem-solving and analytical skills to address technical challenges.
- Communicate effectively and collaborate with team members and stakeholders from various disciplines.
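The responsibility above of handling volumes from megabytes to terabytes "while consistently applying business rules" can be sketched in miniature: factor each rule out as a pure function, then apply it identically to an in-memory batch and to a chunked stream. This is a hypothetical illustration under assumed names, not platform code.

```python
# Hypothetical sketch: one pure business rule applied identically to a
# small in-memory batch and to a (simulated) chunked stream, so results
# agree regardless of data volume. Not actual platform code.
from itertools import islice
from typing import Iterable, Iterator

def rule(row: dict) -> dict:
    """Invented business rule: normalise spend to thousands, tag TV rows."""
    return {**row, "spend_k": row["spend"] / 1000, "is_tv": row["medium"] == "tv"}

def apply_batch(rows: list[dict]) -> list[dict]:
    """Small data: apply the rule to everything in memory at once."""
    return [rule(r) for r in rows]

def apply_stream(rows: Iterable[dict], chunk_size: int = 2) -> Iterator[dict]:
    """Large data: apply the same rule chunk by chunk."""
    it = iter(rows)
    while chunk := list(islice(it, chunk_size)):
        yield from (rule(r) for r in chunk)

data = [
    {"medium": "tv", "spend": 120_000},
    {"medium": "radio", "spend": 40_000},
    {"medium": "tv", "spend": 75_000},
]
assert apply_batch(data) == list(apply_stream(data))  # same rule, same result
```

At terabyte scale the chunked path would be a Spark job, but keeping the rule itself engine-agnostic is what makes the results consistent across volumes.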
QUALIFICATIONS
- 5+ years of experience in data engineering.
- Bachelor's or Master's degree in Computer Science or Data Engineering, or proven experience in the field.
- Knowledge of and experience with software engineering best practices is crucial.
- Proficiency in distributed computational engines such as Spark, Presto, and Trino.
- Familiarity with cloud platforms like AWS, Azure, and Google Cloud Platform (GCP).
- Experience with containerization technologies (Docker, Kubernetes, etc.).
- Knowledge of continuous integration pipelines.
- Ability to interact with Data science teams as key stakeholders.
- Capability to design scalable data processing pipelines that efficiently handle data volumes from a few megabytes to terabytes while applying the same business rules consistently.
- Fast learner.
- Team player with a flexible, proactive and pragmatic attitude.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.