AWS Data Engineer

Hyderabad, India

We are seeking a highly motivated AWS Data Engineer to join our team. In this role, you will design, develop, and implement data pipelines that ingest, transform, and load data into our data warehouse and data lake on AWS.

Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines for batch and real-time data sources, utilizing AWS services like S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
  • Build ETL/ELT pipelines to handle both structured and unstructured data.
  • Utilize Python and PySpark to process and transform large datasets (for a rough illustration, see the sketch after this list).
  • Implement ETL (Extract, Transform, Load) processes using a combination of Informatica PowerCenter and custom-built Python scripts.
  • Build and maintain data models to optimize data storage, retrieval, and analysis.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
  • Develop and implement unit tests and integration tests to ensure data quality and pipeline reliability (a minimal test sketch also follows this list).
  • Monitor and troubleshoot data pipelines to identify and resolve issues proactively.
  • Automate data pipeline deployment processes using AWS CodePipeline or similar tools.
  • Stay up-to-date on the latest trends and technologies in cloud data engineering.
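
As a rough illustration of the Python/PySpark work referenced above (not a prescribed implementation), a batch job on this team might look something like the following sketch; the bucket paths, column names, and app name are hypothetical placeholders.

    # Minimal sketch of a batch ETL job: read raw JSON events from S3,
    # normalize types, and write partitioned Parquet to a curated data-lake zone.
    # All paths and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

    # Ingest: raw, semi-structured events landed in S3
    raw = spark.read.json("s3://example-raw-bucket/orders/")

    # Transform: cast types, derive a partition column, drop records missing a key
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("order_id").isNotNull())
    )

    # Load: write partitioned Parquet to the data lake for downstream analytics
    (orders.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/"))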
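
Similarly, the testing bullet above could translate into data-quality unit tests along these lines (a hedged sketch only; the transform under test and its schema are hypothetical):

    import pytest
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F


    def transform_orders(df):
        # Hypothetical transform under test: cast the amount column and
        # drop rows that are missing the order_id key.
        return (df.withColumn("amount", F.col("amount").cast("double"))
                  .filter(F.col("order_id").isNotNull()))


    @pytest.fixture(scope="session")
    def spark():
        # Local Spark session so the test runs without a cluster
        return (SparkSession.builder.master("local[1]")
                .appName("pipeline-tests").getOrCreate())


    def test_transform_drops_null_order_ids(spark):
        raw = spark.createDataFrame(
            [("o-1", "10.5"), (None, "20.0")],
            ["order_id", "amount"],
        )
        out = transform_orders(raw)
        assert out.filter(F.col("order_id").isNull()).count() == 0
        assert dict(out.dtypes)["amount"] == "double"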

Previous Work Experience:

  • Experience building data ingestion (ETL/ELT) pipelines for various data sources.
  • Experience building data warehouses and data lakes.


Requirements

Qualifications:

  • 5-10 years of experience in data engineering or a related field.
  • Proven experience with AWS cloud services, including S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
  • Proficiency in Python and PySpark for data processing and transformation.
  • Experience with ETL tools like Informatica PowerCenter.
  • Strong understanding of data modeling concepts and techniques.
  • Excellent problem-solving and analytical skills.
  • Experience working in a collaborative and fast-paced environment.
  • Excellent communication and interpersonal skills.

Bonus Points:

  • Experience with CI/CD pipelines for data engineering.
  • Experience with cloud security best practices.
  • Experience with data governance and compliance.
  • Experience with data visualization tools.


Benefits

  • Opportunity to work with cutting-edge technologies in a fast-paced environment.
  • Be part of a collaborative and supportive team.
  • Work on impactful projects that make a difference.


