Azure Data Engineer - Associate Consultant - Hyderabad

Hyderabad, Telangana, India

KPMG India

Welcome to KPMG International.


Specific Job Title: Data Engineer

Cost Center: Data & Tech

Area of interest: Data Integration

 

Data Engineer – Azure

Level: Associate Consultant

Location: Hyderabad

Key Requirements:

  • 2–4 years of experience in data engineering or a related field.
  • Proficiency in Azure Data Factory (ADF) for building pipelines and workflows.
  • Hands-on experience with Azure Data Lake Storage (ADLS) for managing data lakes.
  • Strong working knowledge of Databricks and PySpark.
  • Basic understanding of Python for scripting and automation (see the sketch after this list).
  • Experience in designing and maintaining ETL/ELT workflows.
  • Familiarity with CI/CD pipelines and version control systems like Git.
  • Good problem-solving and communication skills.
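
As one illustration of the Python-for-automation skill above, the following is a minimal sketch (not KPMG's implementation) that triggers an Azure Data Factory pipeline run and polls its status using the Azure SDK for Python (the azure-identity and azure-mgmt-datafactory packages). The subscription, resource group, factory, and pipeline names are hypothetical placeholders, not details from this posting.

    import time

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    # Hypothetical identifiers -- replace with real values for your environment.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "rg-data-platform"
    FACTORY = "adf-demo"
    PIPELINE = "pl_ingest_daily"

    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Kick off a pipeline run and capture its run ID.
    run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY, PIPELINE)

    # Poll until the run leaves its in-flight states (Queued / InProgress).
    status = "InProgress"
    while status in ("Queued", "InProgress"):
        time.sleep(30)
        status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id).status

    print(f"Pipeline {PIPELINE} finished with status: {status}")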

Key Responsibilities:

  • Develop and maintain scalable data pipelines using Azure Data Factory (ADF) and Databricks.
  • Perform data extraction, transformation, and loading (ETL/ELT) from various sources into Azure Data Lake Storage (ADLS).
  • Implement data processing workflows using Databricks and PySpark for structured and unstructured data (a short example follows this list).
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
  • Ensure data accuracy, consistency, and security across all stages of the data lifecycle.
  • Write clean and efficient Python scripts for data manipulation and workflow automation.
  • Monitor and optimize pipeline performance and troubleshoot issues as they arise.
  • Stay updated with Azure and data engineering best practices to recommend and implement improvements.
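
To make the Databricks/PySpark workflow item above concrete, here is a minimal sketch of a job that reads raw CSV files from ADLS Gen2, applies a simple cleaning transformation, and writes curated Parquet output. The storage account, container, and column names are hypothetical placeholders, and ADLS credentials are assumed to be configured on the cluster.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a `spark` session already exists; building one here
    # keeps the sketch self-contained.
    spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

    # Hypothetical ADLS Gen2 paths: abfss://<container>@<account>.dfs.core.windows.net/...
    raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/"
    curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/sales/"

    # Read the raw CSVs, cast the (hypothetical) amount column to a number,
    # drop rows where the cast failed, and stamp each row with a load date.
    df = (
        spark.read.option("header", "true").csv(raw_path)
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount").isNotNull())
        .withColumn("load_date", F.current_date())
    )

    # Write the curated output, partitioned by load date.
    df.write.mode("overwrite").partitionBy("load_date").parquet(curated_path)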

Roles:

  • Client Support: Provide expert support and troubleshooting for client applications, ETL processes, and data integrations, resolving issues in a timely manner to ensure minimal disruption to client operations.

  • System Monitoring: Monitor client environments for performance, stability, and security, implementing proactive measures to prevent potential issues.

  • Configuration and Optimization: Assist clients with the configuration and optimization of their applications and integration systems to align with business requirements and improve efficiency.

  • Documentation and Reporting: Maintain accurate documentation of client environments, issues resolved, and changes made; provide regular reports to clients on system performance and areas for improvement.

  • Training and Knowledge Transfer: Deliver training sessions and knowledge transfer to client teams, empowering them to use and manage their applications and integration systems effectively.

Keywords:

Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Databricks, Pipelines, Python, CI/CD


Qualification:

  • Any bachelor’s or master’s degree



 

Work Location:

Hyderabad
