Senior Data Engineer

India - Remote

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by leveraging and monetizing the unused information in their heads about the best people they have worked...

This role is for one of Weekday's clients.

Min Experience: 7 years

Job Type: Full-time

We are looking for a highly skilled Senior Data Engineer to architect and develop robust, scalable data infrastructure that underpins cutting-edge AI and analytics solutions. In this role, you will design and build high-performance data pipelines, optimize data storage, and ensure the seamless availability of data for machine learning and business intelligence applications. The ideal candidate combines strong engineering fundamentals with deep expertise in cloud data platforms, real-time processing, and data governance.

Requirements

Key Responsibilities

  • Architect and implement scalable ETL/ELT pipelines for both batch and streaming data.
  • Design and build cloud-native data platforms, including data lakes, data warehouses, and feature stores.
  • Work with diverse data types—structured, semi-structured, and unstructured—at petabyte scale.
  • Optimize data pipelines for high throughput, low latency, cost-efficiency, and fault tolerance.
  • Ensure strong data governance through lineage tracking, quality validation, and metadata management.
  • Collaborate with Data Scientists and ML Engineers to prepare datasets for training, inference, and production use.
  • Build and maintain streaming data architectures using technologies like Kafka, Spark Streaming, or AWS Kinesis.
  • Automate infrastructure provisioning and deployment using tools like Terraform, CloudFormation, or Kubernetes operators.

Required Skills

  • 7+ years of experience in Data Engineering, Big Data, or cloud-based data platforms.
  • Strong coding skills in Python and SQL.
  • Deep understanding of distributed data processing systems (e.g., Spark, Hive, Presto, Dask).
  • Hands-on experience with cloud services (AWS, GCP, Azure) and tools such as BigQuery, Redshift, EMR, or Databricks.
  • Experience building event-driven and real-time data systems (e.g., Kafka, Pub/Sub, Flink).
  • Proficiency in data modeling (e.g., star schema, OLAP cubes, graph databases).
  • Knowledge of data security, encryption, and regulatory compliance (e.g., GDPR, HIPAA).

Preferred Skills

  • Experience enabling MLOps workflows through the development of feature stores and versioned datasets.
  • Familiarity with real-time analytics tools such as ClickHouse or Apache Pinot.
  • Exposure to data observability tools like Monte Carlo, Databand, or similar platforms.
  • Strong passion for building resilient, secure, and scalable data systems.
  • Keen interest in enabling AI/ML innovation through robust infrastructure.
  • Committed to automation, performance optimization, and engineering excellence.


