GCP BigQuery Engineer

Bengaluru, KA, India

PradeepIT

PradeepIT, supported by Asia's largest tech professional network, is revolutionizing global talent acquisition. Discover the potential of hiring top Asian tech talent at ten times the speed, starting today!



Job Description

We are looking for an analytical, big-picture thinker who is driven to enhance and further the mission of Tredence by delivering technology to internal business and functional stakeholders. You will serve as a leader driving the IT strategy to create value across the organization. This Data Engineer will be empowered to lead engagements, focusing both on innovative, low-level solutions and on the day-to-day tactics that drive efficiency, effectiveness, and value.

You will play a critical role in creating and analyzing deliverables that provide the content needed for fact-based decision-making and successful collaboration with business stakeholders. You will analyze, design, and develop best practices for delivering business change through technology solutions.

Technical Requirements

  • Has implemented and architected solutions on the Google Cloud Platform using its core components.
  • Experience with Apache Beam/Google Dataflow/Apache Spark in creating end-to-end data pipelines (see the sketch after this list).
  • Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, and Machine Learning.
  • Experience programming in Java, Python, etc.
  • Expertise in at least two of these technologies: Relational Databases, Analytical Databases, and NoSQL databases.
  • A Google Professional Data Engineer or Solution Architect certification is a major advantage.
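
To give a flavor of the pipeline work named above, here is a minimal sketch of an end-to-end Apache Beam pipeline that reads JSON events from Cloud Storage and writes them to BigQuery. The bucket, project, table, and schema names are hypothetical placeholders; the options shown assume submission to Dataflow (or a local run on the DirectRunner).

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line):
    """Parse one JSON line into a dict matching the target BigQuery schema."""
    record = json.loads(line)
    return {"user_id": record["user_id"], "event_type": record["event_type"]}


# PipelineOptions picks up --runner=DataflowRunner, --project, --region, etc.
# from the command line; with no flags the pipeline runs on the DirectRunner.
options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromGCS" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "ParseJSON" >> beam.Map(parse_event)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:example_dataset.events",  # hypothetical table
            schema="user_id:STRING,event_type:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```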

Roles & Responsibilities

Experience

  • 6-8 years of IT or professional services experience in IT delivery or large-scale IT analytics projects.
  • Candidates must have expertise in and knowledge of the Google Cloud Platform; experience with other cloud platforms is nice to have.
  • Expert knowledge in SQL development.
  • Expertise in building data integration and preparation tools using cloud technologies (e.g., SnapLogic, Google Dataflow, Cloud Dataprep, Python).
  • Identify downstream implications of data loads/migrations (e.g., data quality, regulatory compliance).
  • Implement data pipelines to automate the ingestion, transformation, and augmentation of data sources, and provide best practices for pipeline operations (a minimal sketch follows this list).
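
As an illustration of the BigQuery and SQL side of the role, the following minimal sketch uses the google-cloud-bigquery Python client to run a transformation query. The dataset, table, and column names are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # authenticates via Application Default Credentials

# Hypothetical transformation: summarize raw events into a reporting table.
sql = """
CREATE OR REPLACE TABLE example_dataset.event_counts AS
SELECT event_type, COUNT(*) AS event_count
FROM example_dataset.events
GROUP BY event_type
"""

job = client.query(sql)  # starts an asynchronous query job
job.result()             # blocks until the job completes, raising on error
print(f"Query job {job.job_id} finished.")
```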

