Technical Lead

DGS India - Bengaluru - Manyata N1 Block

dentsu

At dentsu, innovation is our strength, and your growth is our mission. We help you keep up with technological changes in the digital economy.

The purpose of this role is to maintain, improve, clean and manipulate data in the business's operational and analytics databases. The Data Engineer works with the business's software engineers, data analytics teams, data scientists and data warehouse engineers to understand database requirements, aid in their implementation, analyse performance, and troubleshoot any existing issues.

Job Description:

Key responsibilities:
- Creates and maintains optimal data pipeline architecture
- Assembles large, complex data sets that meet functional / non-functional business requirements
- Identifies, designs and implements internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Builds analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Works with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keeps our data separated and secure
- Creates data tools for analytics and data scientist team members that assist them in building and optimising our product into an innovative industry leader
- Works with data and analytics experts to strive for greater functionality in our data systems

Must have:
- Data Architecture & Modeling: Design and maintain scalable, efficient data models and architectures to support data analytics, reporting, and ML model training.
- Data Pipeline Engineering: Develop, maintain, and optimise scalable data pipelines that can handle large volumes and various types of data.
- Data Quality Assurance: Implement rigorous data cleaning, transformation, and integration processes to ensure data quality and consistency.
- Collaboration: Work closely with data scientists, ML engineers, and other stakeholders to understand data requirements and implement effective data solutions.
- Documentation & Governance: Maintain comprehensive documentation of data procedures, systems, and architectures. Provide guidance and support for data governance practices, including metadata management, data lineage, and data cataloging.
- ML Familiarity: Familiarity with machine learning concepts and tools.
- Technical Skills (an illustrative sketch follows this list):
  * Strong proficiency in Python, with an emphasis on clean, modular, and well-documented code.
  * Proficient in Spark (PySpark and Spark SQL).
  * Expertise in SQL, Jira, Git, and GitHub.
- Good Communication Skills: Able to explain complex technical concepts clearly and concisely to both technical and non-technical audiences.
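
For illustration only, here is a minimal sketch of the kind of clean, modular PySpark and Spark SQL pipeline code this role involves; the paths, column names, and table names are assumptions made for the example, not details from this posting.

```python
# Hypothetical PySpark pipeline sketch: paths, columns, and table names
# are illustrative placeholders, not dentsu or Merkle specifics.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def extract(spark: SparkSession, source_path: str) -> DataFrame:
    """Read raw event data (assumed Parquet) into a DataFrame."""
    return spark.read.parquet(source_path)


def transform(spark: SparkSession, events: DataFrame) -> DataFrame:
    """Apply basic data-quality steps, then aggregate with Spark SQL."""
    cleaned = (
        events
        .dropDuplicates(["event_id"])                     # remove duplicate records
        .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    )
    cleaned.createOrReplaceTempView("events_clean")
    return spark.sql(
        """
        SELECT event_date, customer_id, COUNT(*) AS event_count
        FROM events_clean
        GROUP BY event_date, customer_id
        """
    )


def load(result: DataFrame, target_path: str) -> None:
    """Write the aggregate, partitioned by date, for downstream analytics."""
    result.write.mode("overwrite").partitionBy("event_date").parquet(target_path)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-customer-events").getOrCreate()
    raw = extract(spark, "/data/raw/events")                   # assumed location
    load(transform(spark, raw), "/data/curated/daily_events")  # assumed location
    spark.stop()
```
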
Good to have:
- Azure Cloud Expertise: Hands-on experience with designing and implementing scalable and secure data processing pipelines using Azure cloud services and tools like Databricks or Azure Synapse Analytics (a brief sketch follows this list).
- Azure Data Management: Experience managing and optimising data storage within Azure using services like Azure Synapse (formerly Azure SQL Data Warehouse) and Azure Cosmos DB.
- ML Experience: Experience in deploying and maintaining ML models in production environments.
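
As a companion to the Azure items above, here is a hypothetical Databricks-style sketch of landing data from ADLS Gen2 into a Delta table; the storage account, container, and table names are placeholders invented for the example.

```python
# Hypothetical Azure Databricks sketch: the storage account, container,
# and table names below are placeholders, not actual resources.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-to-delta-example").getOrCreate()

# abfss:// is the ADLS Gen2 URI scheme; "raw" and "exampleaccount" are placeholders.
source = "abfss://raw@exampleaccount.dfs.core.windows.net/events/"

events = spark.read.format("json").load(source)

# Delta Lake is the default table format on Databricks; writing a managed
# table makes the data queryable from notebooks and downstream jobs.
(
    events.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.customer_events")  # assumes an "analytics" schema exists
)
```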

Location:

DGS India - Bengaluru - Manyata N1 Block

Brand:

Merkle

Time Type:

Full time

Contract Type:

Permanent