Data Engineer

Coimbatore, Tamil Nadu, India

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by leveraging and monetizing the unused knowledge in their heads about the best people they have worked...



This role is for one of Weekday's clients.

We are seeking a highly skilled Data Engineer with a strong background in building and managing scalable data pipelines. You will play a critical role in designing, developing, and optimizing data infrastructure to support analytics and real-time processing needs. This role requires expertise in relational and NoSQL databases, data warehouses, real-time streaming, ETL processes, and cloud-based data solutions.

Requirements

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines for structured and unstructured data.
  • Work with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra) to manage and process large datasets efficiently.
  • Build and maintain data pipelines with data warehouses such as Snowflake and Redshift for efficient storage and retrieval.
  • Process unstructured data stored in S3 using Athena, Glue, and other AWS services to derive meaningful insights.
  • Implement real-time data streaming and messaging solutions using Kafka to enable low-latency data processing (a minimal sketch of such a pipeline follows this list).
  • Develop ETL pipelines to ensure smooth data integration, transformation, and optimization for analytics and reporting.
  • Write high-performance code using Python, Java, or Scala to process and analyze large datasets efficiently.
  • Utilize Apache Spark for big data processing and analytics to enhance data engineering workflows.
  • Work with cloud platforms like AWS, GCP, or Azure to build scalable and resilient data infrastructure.
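For context, the responsibilities above tie together Kafka, Spark, Python, and S3. The following is a minimal sketch only, not the client's actual stack: it assumes a hypothetical Kafka topic named "events", a placeholder broker address, a placeholder S3 bucket, a made-up event schema, and that the spark-sql-kafka connector is available on the Spark classpath.

    # Minimal sketch (hypothetical names throughout): consume JSON events from a
    # Kafka topic with Spark Structured Streaming and land them in S3 as Parquet,
    # where Athena/Glue can query them downstream.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

    # Assumed event schema; a real pipeline would likely pull this from a schema registry.
    event_schema = StructType([
        StructField("user_id", StringType()),
        StructField("event_type", StringType()),
        StructField("occurred_at", TimestampType()),
    ])

    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load()
    )

    # Kafka delivers the payload as bytes; cast to string and parse the JSON body.
    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
           .select("e.*")
    )

    # Write micro-batches as Parquet to S3 for downstream analytics.
    (
        events.writeStream
        .format("parquet")
        .option("path", "s3a://example-bucket/events/")                     # placeholder
        .option("checkpointLocation", "s3a://example-bucket/checkpoints/")  # placeholder
        .start()
        .awaitTermination()
    )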

Required Skills & Experience

  • 5+ years of experience in data engineering, with a focus on data pipeline development, data processing, and optimization.
  • Strong expertise in relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra).
  • Experience working with data warehouses such as Snowflake and Redshift for analytical workloads.
  • Hands-on experience in processing unstructured data using tools like Athena, Glue, and S3.
  • Proficiency in real-time data streaming using Kafka for handling large-scale event-driven data processing.
  • Strong knowledge of ETL processes, data integration techniques, and pipeline performance optimization.
  • Proficiency in programming languages like Python, Java, or Scala for data processing and automation.
  • Experience with Apache Spark for distributed big data processing is a plus.
  • Familiarity with cloud platforms (AWS, GCP, or Azure) for data infrastructure management is an advantage.


