Data Engineer

India - Remote

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by monetizing what they already know about the best people they have worked with.



This role is for one of Weekday's clients.

Salary range: INR 18,00,000–25,00,000 (i.e., INR 18–25 LPA)

Minimum experience: 5 years

Job type: Full-time

We are seeking a highly skilled and experienced Data Engineer to join our growing data team. In this role, you will design, build, and maintain robust, scalable, and efficient data pipelines to support our analytics, machine learning, and business intelligence initiatives. This is a hands-on position where you’ll work with large-scale datasets and cutting-edge technologies, enabling data-driven decision-making across the organization.

You’ll collaborate with data scientists, analysts, and other engineers to ensure the seamless flow and transformation of data from a variety of sources. If you’re passionate about building end-to-end ETL pipelines, optimizing data architecture, and working in a cloud-native environment, this role is for you.

Requirements

Key Responsibilities:

  • Design, develop, and maintain scalable and reliable ETL processes and data pipelines using Python, PySpark, and SQL.
  • Work extensively with Snowflake for cloud-based data warehousing solutions.
  • Use Apache Airflow for orchestrating complex workflows and automating data processes.
  • Develop and maintain integrations between various data sources and storage systems, including AWS S3, Elasticsearch, and other AWS services.
  • Collaborate with cross-functional teams including data scientists, product managers, and business stakeholders to understand data needs and translate them into technical solutions.
  • Monitor, troubleshoot, and optimize pipeline performance to ensure high availability and low latency of data delivery.
  • Implement data quality checks and validation mechanisms to ensure the integrity and accuracy of datasets.
  • Ensure data governance, security, and compliance standards are met across all stages of data processing.
  • Stay current with emerging technologies and best practices in data engineering, and proactively recommend improvements.

Required Skills & Qualifications:

  • 5–10 years of professional experience in data engineering or a related role.
  • Strong programming skills in Python and experience with PySpark for distributed data processing.
  • Deep expertise in SQL for querying and transforming large datasets.
  • Hands-on experience with Snowflake, including data modeling, performance tuning, and security features.
  • Proficiency with Airflow for job scheduling and workflow orchestration.
  • Solid understanding of AWS services such as S3, Lambda, Glue, Redshift, and EC2.
  • Experience working with Elasticsearch for data search and analytics.
  • Proven track record of building and optimizing complex ETL pipelines and data architectures.
  • Strong problem-solving skills and the ability to work independently and in a team.
  • Excellent communication and documentation skills.

Preferred Qualifications:

  • Experience with infrastructure-as-code tools like Terraform or CloudFormation.
  • Knowledge of data lake and data mesh concepts.
  • Familiarity with CI/CD pipelines for data workflows.


