GCP Data Engineer - Associate Consultant

Bangalore, Karnataka, India

KPMG India

Mandatory Skills

  • Completed undergraduate degree with outstanding academic credentials (preferably a technical undergraduate degree, e.g., Computer Science or Engineering).
  • 2 to 4 years of relevant work experience.
  • Proven experience designing, building, and operationalizing large-scale enterprise data solutions using at least four of the following GCP services: Dataflow, Dataproc, Pub/Sub, BigQuery, Cloud Functions, Composer, and GCS.
  • Proficient hands-on programming experience in Spark with Scala (or Python/Java).
  • Proficient in building production-level ETL/ELT data pipelines, from data ingestion to consumption.
  • Data engineering knowledge, including data lakes, data warehouses (e.g., Redshift, Hive, Snowflake), integration, and migration.
  • Excellent communicator (written and verbal, formal and informal).
  • Experience using software version control tools (Git, Bitbucket, AWS CodeCommit).
  • Flexible, proactive, and self-motivated working style, with strong personal ownership of problem resolution.
  • Ability to multi-task under pressure and work independently with minimal supervision.
  • Must be a team player and enjoy working in a cooperative and collaborative team environment.

Primary Roles and Responsibilities

A GCP Data Engineer is responsible for designing, building, and maintaining an organization's data infrastructure using Google Cloud Platform services. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The GCP Data Engineer is also responsible for monitoring and troubleshooting data flows, and for optimizing data storage and processing for performance and cost efficiency.

Preferred Skills

  • GCP certification preferred.
  • Additional cloud experience in AWS or Azure preferred.

Perks/benefits: Flex hours

Region: Asia/Pacific
Country: India
