Data Engineer

Singapore, Singapore


Singtel

The Singtel Group, Asia's leading communications group, provides a diverse range of services including fixed, mobile, data, internet, TV, infocomms technology (ICT) and digital solutions.



An empowering career at Singtel begins with a Hello. Our purpose, to Empower Every Generation, connects people to the possibilities they need to excel. Every "hello" at Singtel opens doors to new initiatives, growth, and BIG possibilities that take your career to new heights. So, when you say hello to us, you are really empowered to say… "Hello BIG Possibilities".

Be a Part of Something BIG!  

  • Responsible for building and supporting data ingestion and transformation pipelines in a modern hybrid cloud platform
  • Independently develop basic batch and streaming pipelines, working with cloud tools such as Databricks and Kafka under the guidance of senior engineers
  • Contribute to the delivery of reliable, secure, and high-quality data for analytics, reporting, and machine learning use cases
  • Gain exposure to enterprise-scale data architecture, while growing into more advanced engineering responsibilities over time

Make An Impact By

  • Build and maintain data ingestion pipelines for batch and streaming data sources using tools like Databricks and Kafka
  • Perform data transformation and cleansing using PySpark or SQL based on business and technical requirements (see the illustrative sketch after this list)
  • Monitor and troubleshoot data workflows to ensure data quality and pipeline reliability
  • Work closely with senior data engineers to understand platform architecture and apply best practices in pipeline design
  • Assist in integrating data from diverse source systems (files, APIs, databases, streaming)
  • Help maintain metadata and pipeline documentation for transparency and traceability
  • Participate in integrating pipelines with tools such as Microsoft Fabric, Databricks, Delta Lake, and other platform components
  • Contribute to automation efforts using version control and CI/CD workflows
  • Apply basic data governance and access control policies during implementation
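
As a rough illustration of the kind of PySpark transformation and cleansing work described above, a minimal batch job might look like the sketch below. The file paths, table layout, and column names are hypothetical placeholders, not actual Singtel systems:

```python
# Illustrative sketch only: a minimal batch cleansing job in PySpark.
# All paths and column names below are hypothetical; the raw extract is
# assumed to contain order_id, customer_id, amount and order_date columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-batch-clean").getOrCreate()

# Ingest a raw batch extract from a landing zone (hypothetical path)
raw = spark.read.option("header", True).csv("/landing/orders/2024-06-01/*.csv")

# Basic cleansing: trim keys, drop rows without a customer id,
# cast amounts to a numeric type, and de-duplicate on the order id
clean = (
    raw.withColumn("customer_id", F.trim(F.col("customer_id")))
       .filter(F.col("customer_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Persist the curated output partitioned by date (Parquet here; on Databricks
# a lakehouse format such as Delta Lake would be the more likely target)
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")
```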

 

Skills to Succeed

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • 1–3 years of experience in data engineering or data platform development
  • Proven ability to independently build basic batch or streaming data pipelines
  • Hands-on experience with Python and SQL for data transformation and validation
  • Familiarity with Apache Spark (especially PySpark) and large-scale data processing concepts
  • Self-starter with strong problem-solving skills and a keen attention to detail
  • Able to work independently while collaborating effectively with senior engineers and other stakeholders
  • Strong documentation and communication skills

 

Rewards that Go Beyond

  • Full suite of health and wellness benefits  
  • Ongoing training and development programs  
  • Internal mobility opportunities

Your Career Growth Starts Here. Apply Now!

We are committed to a safe and healthy environment for our employees and customers, and will require all prospective employees to be fully vaccinated.

Category: Engineering Jobs

Tags: APIs Architecture CI/CD Computer Science Databricks Data governance Data pipelines Data quality Engineering Excel Kafka Machine Learning Pipelines PySpark Python Spark SQL Streaming

Perks/benefits: Career development Health care Wellness

Region: Asia/Pacific
Country: Singapore
