Data Engineer

Bengaluru, Karnataka, India

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by leveraging and monetizing the unused information in their heads about the best people they have worked...

This role is for one of Weekday's clients.

We are looking for a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work with both structured and unstructured data, ensuring seamless data ingestion, transformation, and storage to support analytics, machine learning, and business intelligence initiatives. You will collaborate closely with data scientists, analysts, and software engineers to develop high-performance data solutions.

Requirements

Key Responsibilities:

  • Design, develop, and manage ETL/ELT pipelines for processing large-scale datasets efficiently.
  • Work with SQL and NoSQL databases to ensure optimized data storage and retrieval.
  • Develop and maintain data lakes and data warehouses using cloud-based solutions such as AWS, GCP, or Azure.
  • Implement data quality, integrity, and governance best practices for validation and monitoring.
  • Optimize data workflows and tune performance to improve query speed and system efficiency.
  • Collaborate with cross-functional teams to integrate data solutions into various applications and services.
  • Implement real-time and batch data processing using tools like Apache Spark, Kafka, or Flink.
  • Work with cloud-based data services (BigQuery, Redshift, Snowflake) to build scalable and cost-effective solutions.
  • Automate data pipeline deployment using CI/CD and infrastructure-as-code tools.
  • Monitor and troubleshoot data pipeline issues to minimize downtime and ensure reliability.

Required Skills & Qualifications:

  • 3+ years of experience in data engineering, data architecture, or related fields.
  • Strong proficiency in Python, SQL, and scripting for data processing.
  • Hands-on experience with big data frameworks such as Apache Spark, Hadoop, or Flink.
  • Experience with ETL tools like Apache Airflow, dbt, or Talend.
  • Knowledge of cloud platforms (AWS, GCP, Azure) and their data services (Redshift, BigQuery, Snowflake, etc.).
  • Familiarity with data modeling, indexing, and query optimization techniques.
  • Experience with real-time data streaming using Kafka, Kinesis, or Pub/Sub.
  • Proficiency in Docker and Kubernetes for deploying data pipelines.
  • Strong problem-solving and analytical skills, with a focus on performance optimization.
  • Understanding of data security, governance, and compliance best practices.

Preferred Qualifications:

  • Experience integrating machine learning workflows into data engineering pipelines.
  • Knowledge of Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation.
  • Familiarity with graph databases and time-series databases.

Category: Engineering Jobs
