Data Engineer
[SG] Singapore
We are looking for ambitious, motivated talent who are excited about staying on the cutting edge of technology and keen to find new ways to drive growth and take our startup to new heights.
WHO ARE WE?
Pelago is a travel experiences platform created by Singapore Airlines Group. Think of us as a travel magazine that you can book: highly curated, visually inspiring, with the trust and quality of Singapore Airlines. We connect you with global and local cultures and ideas so you can expand your life.
We are a team of diverse, passionate, empowered, inclusive, authentic and open individuals who share the same values and strive towards a common goal!
WHAT CAN WE OFFER YOU?
- A unique opportunity to take end-to-end ownership of your workstream and deliver real value to users.
- Platforms to solve real user problems in travel planning and booking with innovative products and services.
- An amazing peer group to work with, and the chance to learn from the great minds around you.
- An opportunity to be an integral part of shaping the company's growth and culture, in a diverse, fun, and dynamic environment with teammates from different parts of the world.
- Competitive compensation and benefits, including work flexibility, insurance, remote working, and more!
WHAT WILL YOU BE DOING IN THE ROLE?
We're looking for a motivated Data Engineer who can independently build and support both real-time and batch data pipelines. You'll be responsible for enhancing our existing data infrastructure, providing clean data assets, and enabling ML/DS use cases.
Responsibilities:
- Develop and maintain Kafka streaming pipelines and batch ETL workflows via AWS Glue (PySpark).
- Orchestrate, schedule, and monitor pipelines using Airflow (see the sketch after this list).
- Build and update dbt transformation models and tests for Redshift.
- Design, optimize, and support data warehouse structures in Redshift.
- Leverage AWS ECS, Lambda, Python, and SQL for lightweight compute and integration tasks.
- Troubleshoot job failures and data inconsistencies, and apply hotfixes swiftly.
- Collaborate with ML/DS teams to deliver feature pipelines and data for modeling.
- Promote best practices in data design, governance, and architecture.
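To give a concrete feel for the orchestration work above, here is a minimal, hypothetical sketch of an Airflow DAG that triggers an AWS Glue (PySpark) batch job and then runs dbt tests against the warehouse. The DAG ID, Glue job name, region, and dbt paths are illustrative placeholders, not our actual configuration.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="bookings_daily_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Run the PySpark batch job already defined in AWS Glue.
    run_glue_job = GlueJobOperator(
        task_id="run_glue_job",
        job_name="bookings_batch_etl",    # hypothetical Glue job name
        region_name="ap-southeast-1",
    )

    # Validate the transformed dbt models with tests against Redshift.
    run_dbt_tests = BashOperator(
        task_id="run_dbt_tests",
        bash_command="dbt test --project-dir /opt/dbt",   # hypothetical path
    )

    run_glue_job >> run_dbt_tests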
Tech Stack:
- Streaming & Batch: Kafka, AWS Glue (PySpark), Airflow (a minimal PySpark example follows this list)
- Data Warehouse & Storage: Redshift, dbt, Python, SQL
- Cloud Services: AWS ECS, Lambda
- Others: Strong understanding of data principles, architectures, and processing patterns
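As a flavour of the batch side of this stack, here is a minimal, hypothetical PySpark sketch of the kind of transformation a Glue job might run; the bucket paths and column names are placeholders only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bookings_batch_etl").getOrCreate()

# Read raw booking events landed in S3 (placeholder bucket and path).
raw = spark.read.json("s3://example-raw-bucket/bookings/")

# Keep one row per booking and derive a date column for partitioning.
cleaned = (
    raw.dropDuplicates(["booking_id"])
       .withColumn("event_date", F.to_date("created_at"))
       .filter(F.col("status").isNotNull())
)

# Write partitioned Parquet so downstream Redshift COPY / Spectrum loads stay cheap.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-clean-bucket/bookings/"
)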
WHAT EXPERTISE IS A MUST-HAVE FOR THE ROLE?
- 3–5 years in data engineering or similar roles.
- Hands-on experience with Kafka, AWS Glue (PySpark), Redshift, Airflow, dbt, Python, and SQL.
- A strong foundation in data architecture, modeling, and engineering patterns.
- Proven ability to own end-to-end pipelines in both real-time and batch contexts.
- Skilled at debugging and resolving pipeline failures effectively.
WHAT EXPERTISE IS GOOD TO HAVE?
- Production experience with AWS ECS and Lambda.
- Familiarity with ML/DS feature pipeline development.
- Understanding of data quality frameworks and observability in pipelines.
- AWS certifications (e.g., AWS Certified Data Analytics).

If you're as excited as we are about this journey, apply directly with a copy of your full resume. We'll reach out to you as soon as we can!