Principal Data Engineer (MTS4)
Bangalore, India
Applications have closed
Nielsen
A global leader in audience insights, data and analytics, Nielsen shapes the future of media with accurate measurement of what people listen to and watch.
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
As a Principal Data Engineer (MTS4), you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on the company's data capabilities, influencing multiple teams' technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization.
Key Responsibilities
- Architect & Define Scope
- Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity.
- Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones.
- Technology Leadership & Influence
- Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.).
- Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability.
- Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities.
- Execution & Delivery
- Spearhead strategic, multi-team projects that advance the organization’s data infrastructure and capabilities.
- Deconstruct complex architectures into simpler components that can be executed by various teams in parallel.
- Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions.
- Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning.
- Impact & Technical Complexity
- Shape how the organization operates by introducing innovative data solutions and strategic technical direction.
- Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions.
- Continuously balance short-term business needs with long-term architectural vision.
- Process Improvement & Best Practices
- Set and enforce engineering standards that elevate quality and productivity across multiple teams.
- Lead by example in code reviews, automation, CI/CD practices, and documentation.
- Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience).
- 6+ years of software/data engineering experience, with significant exposure to large-scale distributed systems.
- Technical Expertise:
- Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java).
- Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc.
- Proven track record architecting and delivering highly scalable data infrastructure solutions.
- Leadership & Communication:
- Ability to navigate and bring clarity in ambiguous situations.
- Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders.
- Experience coaching and mentoring senior engineers.
- Problem-Solving:
- History of tackling complex, ambiguous data challenges and delivering tangible results.
- Comfort making informed trade-offs between business opportunity and architectural complexity.