Data Integration Engineer

Bengaluru, Karnataka, India

Weekday

At Weekday, we help companies hire engineers who are vouched for by other software engineers. We enable engineers to earn passive income by leveraging and monetizing the untapped knowledge in their heads about the best people they have worked...



This role is for one of Weekday's clients.

Min Experience: 3 years

Location: Bengaluru

Job Type: Full-time

Educational Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.

Experience Requirements

  • Minimum 5 years of experience in Data Engineering, ideally within the manufacturing, logistics, or transportation sectors.
  • Proven track record of delivering robust analytics solutions.
  • Consulting background is a plus.

Requirements

Technical Expertise

  • Strong understanding of distributed computing principles.
  • Hands-on experience implementing data engineering solutions using Big Data technologies in both cloud and on-premise environments.
  • Skilled in managing large datasets, databases, data lakes, and cloud-based storage solutions.
  • Expertise in managing and optimizing Spark clusters and related Spark-based technologies.
  • Proficient in Python and PySpark for data transformation and processing tasks.
  • Advanced SQL skills with experience in relational databases (PostgreSQL, MySQL, Oracle) and NoSQL databases (MongoDB, Cassandra, DynamoDB).
  • Experience with cloud-based services such as AWS Glue, S3, Lambda, and Spark on AWS.
  • Solid understanding of data modeling methodologies including star schema, snowflake schema, and data vault.
  • Familiar with semantic modeling and BI tools like Power BI and Oracle Analytics Cloud.
  • Strong analytical and problem-solving skills with the ability to translate business strategy into actionable data and AI products.

Project-Specific Expertise

  • Data ingestion and transformation through AWS S3 (Landing, Bronze, Silver, Gold layers).
  • Data manipulation and transformation using Python and PySpark scripts.
  • Application-level data handling using PostgreSQL.
  • Focused on generating data extracts and variance analysis (no data science work involved).
  • Strong knowledge of AWS ecosystem and architecture.
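To make the last point above concrete: the role centers on producing data extracts with variance analysis (actuals vs. a baseline), not data science. Below is a minimal plain-Python sketch of such an extract; in this role the same logic would live in a PySpark job reading from the Silver/Gold S3 layers. All column names (`key`, `amount`) and the sample figures are hypothetical, chosen only for illustration.

```python
def variance_extract(actuals, budget):
    """Join actual rows to budget rows by key and compute absolute and % variance."""
    # Index the budget rows by key for O(1) lookups during the join.
    budget_by_key = {row["key"]: row["amount"] for row in budget}
    extract = []
    for row in actuals:
        planned = budget_by_key.get(row["key"], 0.0)
        variance = row["amount"] - planned
        extract.append({
            "key": row["key"],
            "actual": row["amount"],
            "budget": planned,
            "variance": variance,
            # Percentage variance is undefined when there is no budget baseline.
            "variance_pct": (variance / planned * 100) if planned else None,
        })
    return extract

# Hypothetical sample data: actual vs. budgeted amounts per plant.
actuals = [{"key": "plant-A", "amount": 120.0}, {"key": "plant-B", "amount": 80.0}]
budget = [{"key": "plant-A", "amount": 100.0}, {"key": "plant-B", "amount": 100.0}]
rows = variance_extract(actuals, budget)
```

In a PySpark version, the dictionary lookup would become a DataFrame join on the key column, with the variance columns added via `withColumn` before writing the extract out.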

Key Skills

Python, PySpark, PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, AWS (S3, Glue, Lambda), Data Modeling, Medallion Architecture, Spark, SQL, BI Tools


Category: Engineering Jobs


Region: Asia/Pacific
Country: India
