Lead Data Engineer - R01553054

Bangalore, Karnataka, India


Brillio

From data ingestion and transformation to advanced analytics and visualization, we provide end-to-end solutions to help you drive business growth.


Lead Data Engineer

Primary Skills

  • IICS, Alation, Data Modelling Fundamentals, Data Warehousing, ETL Fundamentals, Modern Data Platform Fundamentals, PLSQL, T-SQL, Stored Procedures, Python, SQL, SQL (Basic + Advanced), Talend

Job requirements

About the Role

We are seeking a Senior Data Engineer with deep expertise in Google Cloud Platform (GCP) and BigQuery to lead cloud modernization initiatives, develop scalable data pipelines, and enable real-time data processing for enterprise-level systems. This is a high-impact role focused on transforming legacy infrastructure into a robust, cloud-native data ecosystem.

Key Responsibilities

1. Data Migration & Cloud Modernization
  • Analyze legacy on-premises and hybrid cloud data warehouse environments (e.g., SQL Server).
  • Lead the migration of large-scale datasets to Google BigQuery.
  • Design and implement data migration strategies ensuring data quality, integrity, and performance.

2. Data Integration & Streaming
  • Integrate data from various structured and unstructured sources, including APIs, relational databases, and IoT devices.
  • Build real-time streaming pipelines for large-scale ingestion and processing of IoT and telemetry data.

3. ETL / Data Pipeline Development
  • Modernize and refactor legacy SSIS packages into cloud-native ETL pipelines.
  • Develop scalable, reliable workflows using Apache Airflow, Python, Spark, and GCP-native tools.
  • Ensure high-performance data transformation and loading into BigQuery for analytical use cases.

4. Programming & Query Optimization
  • Write and optimize complex SQL queries, stored procedures, and scheduled jobs within BigQuery.
  • Develop modular, reusable transformation scripts using Python, Java, Spark, and SQL.
  • Continuously monitor and optimize query performance and cost efficiency in the cloud data environment.
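To illustrate the kind of modular, reusable transformation script described above, here is a minimal Python sketch that normalizes a raw IoT telemetry record into a flat, typed row ready for warehouse loading. All field names and units here are illustrative assumptions, not part of the role description:

```python
from datetime import datetime, timezone

def normalize_telemetry(record: dict) -> dict:
    """Normalize a raw IoT telemetry record into a flat, typed row.

    Hypothetical schema: field names and units are assumptions
    chosen for illustration only.
    """
    return {
        # Coerce device identifiers to strings for a consistent key column.
        "device_id": str(record["device_id"]),
        # Convert epoch milliseconds to an ISO 8601 UTC timestamp.
        "event_ts": datetime.fromtimestamp(
            record["timestamp_ms"] / 1000, tz=timezone.utc
        ).isoformat(),
        # Cast the reading to float and round to two decimal places.
        "temperature_c": round(float(record["temp"]), 2),
    }
```

A step like this would typically run inside a batch or streaming pipeline (e.g., applied per record before loading into BigQuery), keeping the transformation logic small, testable, and reusable across jobs.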

Required Skills & Experience
  • 5+ years in Data Engineering with a strong focus on cloud and big data technologies.
  • At least 2 years of hands-on experience with GCP, specifically BigQuery.
  • Proven experience migrating on-premises data systems to the cloud.
  • Strong development experience with Apache Airflow, Python, and Apache Spark.
  • Expertise in streaming data ingestion, particularly in IoT or sensor data environments.
  • Strong SQL development skills; experience with BigQuery performance tuning.
  • Solid understanding of cloud architecture, data modeling, and data warehouse design.
  • Familiarity with Git and CI/CD practices for managing data pipelines.

Preferred Qualifications
  • GCP Professional Data Engineer certification.
  • Experience with modern data stack tools such as dbt, Kafka, or Terraform.
  • Exposure to ML pipelines, analytics engineering, or DataOps/DevOps methodologies.

Why Join Us?
  • Work with cutting-edge technologies in a fast-paced, collaborative environment.
  • Lead cloud transformation initiatives at scale.






Region: Asia/Pacific
Country: India
