Big Data & GCP- Senior Software Engineer

Bengaluru, KA, India

PradeepIT

PradeepIT, backed by Asia's largest network of tech professionals, is revolutionizing global talent acquisition. Discover the potential of hiring top Asian tech talent at ten times the speed, starting today!


About the job

Accelerate your career with PradeepIT

PradeepIT is one of the largest globally recognized IT consulting firms, connecting India's deeply vetted talent with global customers.

We're headquartered in Bengaluru, the Silicon Valley of India. PradeepIT's customers include SAP Lab, Bosch, Rolls-Royce, Daikin, Daimler, and J&J, along with hundreds of other Fortune 500 companies and fast-growing startups.

Through continuous hard work, and working remotely by choice, PradeepIT is certified as a Great Place to Work. Trusted by leading brands and Fortune 500 companies around the world, we have achieved:

6+ years of experience

580+ open-source technology consultants

120+ SAP consultants

40+ Salesforce consultants

60+ Adobe consultants

100+ mobility consultants

890+ clients in APAC, EMEA & USA

Our Beliefs

PradeepIT believes in connecting people across the globe and providing them the opportunity to work remotely. As a people-first organization, PradeepIT constantly seeks individuals who won't just keep up, but will break new ground, work with cutting-edge technology, and ramp up their skills for free through courses created by our vertical heads and senior architects at PradeepIT Academy.

Responsibilities

  • Design, develop, and maintain the data architecture, data models, and standards for various Data Integration and Data Warehousing projects on GCP, in combination with other technologies.
  • Ensure the use of BigQuery SQL, Java/Python/Scala, and Spark reduces lead time to delivery and aligns with the overall group's strategic direction, so that cross-functional development remains usable.
  • Own technical solutions from a design and architecture perspective; set the right direction and propose resolutions to potential data pipeline problems.
  • Expand and optimize our data and data pipeline architecture, as well as data flow and collection, for cross-functional teams.
  • Provide technical guidance and support to a vibrant engineering team, coaching and teaching your teammates how to do great data engineering.
  • Apply a deep understanding of data architecture principles and data warehouse methodologies, specifically Kimball or Data Vault.

Requirements

  • An expert in GCP, with 5-7 years of delivery experience across Dataproc, Dataflow, BigQuery, Compute, Pub/Sub, and Cloud Storage
  • Highly knowledgeable in industry best practices for ETL design, principles, and concepts
  • At least 3 years of experience programming in Python
  • A DevOps and Agile engineering practitioner with experience in test-driven development
  • Experienced with the following technologies: Google Cloud Platform, Dataproc, Dataflow, Spark SQL, BigQuery SQL, PySpark, and Python/Scala
  • Experienced with big data technologies such as Spark, Hadoop, and Kafka
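By way of illustration only (this sketch is not part of the role description), the extract-transform-load pattern the requirements refer to can be shown in plain Python; the record schema and function names here are hypothetical, and a production pipeline on GCP would typically run the same shape of logic on Dataflow or Dataproc with BigQuery as the sink:

```python
# Minimal ETL sketch (illustrative only; schema and names are made up).

def extract(rows):
    """Extract: yield raw records from a source (an in-memory stand-in here)."""
    yield from rows

def transform(record):
    """Transform: normalise a field and derive a total for the hypothetical schema."""
    return {
        "customer": record["customer"].strip().lower(),
        "total": record["qty"] * record["unit_price"],
    }

def load(records, target):
    """Load: append transformed records to the target store (a plain list here)."""
    target.extend(records)
    return target

source = [
    {"customer": " Acme ", "qty": 3, "unit_price": 9.5},
    {"customer": "Globex", "qty": 1, "unit_price": 42.0},
]
warehouse = []
load((transform(r) for r in extract(source)), warehouse)
```

Keeping each stage a separate, side-effect-free function is what makes the pipeline testable, which is the point of the test-driven development practice mentioned above.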

Technologies

  • Big Data
  • Spark
  • Python/Scala
  • GCP


Tags: Agile Architecture Big Data BigQuery Consulting Consulting firm Dataflow Dataproc Data warehouse Data Warehousing DevOps Engineering ETL GCP Google Cloud Hadoop Java Kafka Open Source PySpark Python Salesforce Scala Spark SQL TDD Teaching

Perks/benefits: Career development

Regions: Remote/Anywhere Asia/Pacific
Country: India
