AWS Data Engineer

Colombo, Western Province, Sri Lanka


Responsibilities

  • Design and build scalable ETL pipelines and data integration workflows using AWS services and Python.
  • Develop and optimize data lake and data warehouse solutions for structured and unstructured data.
  • Leverage Apache Spark for large-scale data processing and transformation tasks.
  • Collaborate with cross-functional teams to gather requirements and deliver clean, usable datasets for analytics and reporting.
  • Ensure high data quality, security, and compliance in all stages of the data lifecycle.

Requirements

  • 5+ years of hands-on experience in data engineering roles.
  • Proficiency in Python for building and automating data pipelines.
  • Strong experience with AWS services (S3, Glue, Redshift, Lambda, etc.).
  • Solid understanding of ETL processes and modern data warehousing concepts.
  • Experience with big data tools, especially Apache Spark (PySpark preferred).
  • Familiarity with DevOps and CI/CD practices for data pipeline deployment.
  • Knowledge of data governance and cataloging tools.
  • Strong problem-solving, communication, and collaboration skills.

Category: Engineering Jobs

Tags: AWS, Big Data, CI/CD, Data governance, Data pipelines, Data quality, Data warehouse, Data warehousing, DevOps, Engineering, ETL, Lambda, Pipelines, PySpark, Python, Redshift, Security, Spark, Unstructured data

Region: Asia/Pacific
Country: Sri Lanka
