AES - DE - Python Backend Engineer

India

Zensar

Zensar is a global organization that conceptualizes, builds, and manages digital products through experience design, data engineering, and advanced analytics for over 200 leading companies. Our solutions leverage industry-leading platforms to...



Job Title: Data Engineer

# of positions: 3
Location: (Remote)
Experience Level: 4+ Years
Employment Type: Full-time/Contract

Duration: Current visibility is for two quarters; extension will depend on the team's performance.

 

Job Summary:
We are seeking a skilled Data Engineer with 4+ years of experience in designing, developing, and optimizing data pipelines. The ideal candidate will have strong expertise in Python and SQL along with experience in database management, ETL processes, and cloud technologies.

 

Key Responsibilities:

Design, develop, and maintain scalable and efficient data pipelines using Python and SQL.
Optimize and maintain ETL processes for data ingestion, transformation, and storage.
Develop and manage database schemas, tables, and indexes to support data processing.
Work with structured and unstructured data to clean, transform, and aggregate large datasets.
Collaborate with data scientists, analysts, and software engineers to ensure smooth data flow and accessibility.
Implement data quality checks and validation to ensure accuracy and consistency.
Optimize SQL queries for performance and scalability.
Work with cloud-based data solutions (AWS, GCP, or Azure) for storage and processing.
Automate workflows using Python scripting and scheduling tools like Airflow or Cron Jobs.
Monitor and troubleshoot data pipelines to ensure reliability and performance.
Build and maintain CI/CD pipelines.
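As an illustration of the pipeline work described above, the sketch below shows a minimal ingestion step in Python: apply basic data-quality checks, then aggregate the cleaned records. The table shape and column names ("order_id", "customer", "amount") are hypothetical, not taken from any actual Zensar pipeline.

```python
import pandas as pd

def clean_and_aggregate(raw: pd.DataFrame) -> pd.DataFrame:
    """Drop invalid rows, then aggregate amounts per customer."""
    # Data-quality checks: require a non-null key and a positive amount.
    valid = raw.dropna(subset=["order_id", "customer"])
    valid = valid[valid["amount"] > 0]
    # Aggregate the cleaned data for downstream consumers.
    return valid.groupby("customer", as_index=False)["amount"].sum()

raw = pd.DataFrame({
    "order_id": [1, 2, None, 4],
    "customer": ["a", "a", "b", "b"],
    "amount": [10.0, 5.0, 3.0, -1.0],
})
# Row 3 fails the null check and row 4 fails the amount check,
# leaving customer "a" with 10.0 + 5.0 = 15.0.
print(clean_and_aggregate(raw))
```

In a production setting a step like this would typically run inside an orchestrator task (e.g., an Airflow operator) with the validation failures logged or quarantined rather than silently dropped.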
 

Required Skills & Qualifications:

4+ years of experience in Data Engineering, Data Warehousing, or related fields.
Strong proficiency in Python (Pandas, NumPy, PySpark, etc.) and SQL (Snowflake, PostgreSQL, MySQL, SQL Server, etc.).
Experience with containerization (Docker) and version control (Git).
Hands-on experience with ETL tools and data pipeline orchestration (Snowflake, Python).
Experience with relational and NoSQL databases (MongoDB, Cassandra, etc.).
Proficiency in query optimization and database performance tuning.
Experience with cloud platforms (AWS) and services like S3, BigQuery, or Snowflake.
Familiarity with data modeling concepts and best practices.
Understanding of big data frameworks (Hadoop, Spark, etc.) is a plus.
Strong problem-solving skills and ability to work in a collaborative environment.
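The query-optimization skill listed above can be sketched with Python's built-in sqlite3 module: adding an index changes a filtered lookup from a full table scan to an index search. The table and column names are hypothetical, and the exact planner text varies by SQLite version.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, user_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, i % 100, "x") for i in range(1000)],
)

# Without an index, the planner scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchall()

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

# With the index, the planner can search by user_id directly.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchall()

print(plan_before[0][-1])  # e.g. a SCAN over events
print(plan_after[0][-1])   # e.g. a SEARCH using idx_events_user
```

The same before/after discipline (inspect the plan, add or adjust an index, re-check) carries over to PostgreSQL, MySQL, or Snowflake, though each has its own EXPLAIN syntax and tuning levers.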
 

Preferred Skills:

Familiarity with machine learning pipelines and integrating data with AI/ML models.
Experience working in Agile/Scrum methodologies.
Knowledge of data governance and security best practices.

