Senior Data Engineer

Bengaluru, Karnataka, India

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by monetizing what they already know about the best people they have worked...



This role is for one of Weekday's clients.

Salary range: Rs 20,00,000 – Rs 45,00,000 (i.e., INR 20–45 LPA)

Min Experience: 6 years

Location: Bangalore

Job Type: Full-time

Requirements

Qualifications:

  • 5–8 years of experience in data engineering with expertise in building, maintaining, and optimizing scalable data pipelines.
  • Strong command of relational databases such as MySQL and PostgreSQL, as well as NoSQL databases like MongoDB and Cassandra.
  • Hands-on experience working with data warehouses like Snowflake or Redshift.
  • Experience processing unstructured data with tools such as Amazon Athena and AWS Glue, with storage in Amazon S3.
  • Proficiency with Kafka for real-time data streaming and messaging integration.
  • Solid understanding of ETL processes, data integration, and performance optimization of data pipelines.
  • Programming proficiency in Python, Java, or Scala for data processing and scripting.
  • Experience working with Apache Spark for big data analytics is a strong advantage.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure for deploying and maintaining data infrastructure.
  • Strong analytical and problem-solving abilities with meticulous attention to detail.
  • Excellent communication and teamwork skills with the ability to work in cross-functional environments.

Key Responsibilities:

  • Design, develop, and manage scalable and high-performance data pipelines that support seamless integration and transformation of data from diverse sources.
  • Work across both structured and unstructured datasets using relational and NoSQL databases.
  • Leverage Apache Spark for distributed computing and large-scale data processing.
  • Implement Kafka-based streaming solutions to enable real-time data flows between platforms and systems.
  • Collaborate with engineering, analytics, and business teams to understand data requirements and translate them into efficient solutions.
  • Continuously monitor, troubleshoot, and optimize data pipeline performance for reliability and scalability.
  • Ensure high data quality, consistency, and governance across systems.
  • Stay current with emerging technologies and trends in the data engineering and big data landscape, and apply them to improve existing systems.

Key Skills:

  • Python
  • Data Engineering
  • ETL
  • Snowflake
  • SQL
  • AWS
  • Kafka
  • Apache Spark
  • Data Pipelines
  • NoSQL (MongoDB, Cassandra)


Category: Engineering Jobs


Region: Asia/Pacific
Country: India
