Senior Data Engineer
Jakarta, Indonesia
Grab
Grab is Southeast Asia’s leading superapp, providing everyday services like Deliveries, Mobility, Financial Services, and more.
Company Description
Life at Grab
At Grab, every Grabber is guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles - the 4Hs: Heart, Hunger, Honour and Humility. These principles guide and help us make decisions as we work to create economic empowerment for the people of Southeast Asia.
Get to know the Team
The Lending team at Grab is dedicated to building safe, secure, and adaptable loan products catering to all user segments across SEA. Our mission is to promote financial inclusion and support underbanked partners across the region. Data plays a pivotal role in our lending operations, guiding decisions across credit assessment, collections, reporting, analytics, and beyond.
We are a distributed team based mainly in two locations: Singapore and India. We communicate in English, both spoken and written.
Job Description
As a Data Engineer on the Lending Data Engineering team, you will work closely with data modelers, product analysts, product managers, software engineers, and business stakeholders across SEA to understand business and data requirements. You will be responsible for building and managing data assets, including acquisition, storage, processing, and consumption channels, using some of the most scalable and resilient open-source big data technologies, such as Flink, Airflow, Spark, Kafka, and Trino, on cloud infrastructure. You are encouraged to think outside the box and have fun exploring the latest patterns and designs.
The Day-to-Day Activities
Develop and maintain scalable and reliable ETL pipelines and processes to ingest data from a large number and variety of data sources (a minimal illustrative sketch follows this list).
Develop a deep understanding of real-time data production availability to inform real-time metric definitions, using tools like Amazon MSK or Kinesis Data Streams.
Implement and monitor data quality checks and establish best practices for data governance, quality assurance, data cleansing, and ETL-related activities, using AWS Glue DataBrew or similar tools.
Develop familiarity with the existing built-in data platform tools and use them efficiently to set up data pipelines.
Maintain and optimize the performance of our data analytics infrastructure to ensure accurate, reliable, and timely delivery of key insights for decision making.
Design and deliver the next-generation data lifecycle management suite of tools and frameworks, including ingestion and consumption on top of the data lake, to support real-time, API-based, and serverless use cases, along with batch where relevant.
Build solutions leveraging AWS services such as Glue, Redshift, Athena, Lambda, S3, Step Functions, EMR, and Kinesis to enable efficient data processing and analytics.
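For illustration only, here is a minimal sketch of a daily batch pipeline of the kind described above, assuming Airflow 2.4+ with the Apache Spark provider installed: an Airflow DAG submits a Spark ingestion job and then runs a simple row-count quality gate. The DAG ID, job path, schedule, and check logic are hypothetical placeholders, not Grab's actual setup.

# Purely illustrative Airflow DAG: ingest a day's partition with Spark, then run a
# simple quality gate before downstream consumption. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator


def check_row_count(ds, **_):
    # Placeholder quality gate; a real check would query the lake/warehouse
    # (e.g. via Trino or Athena) for the partition written by the Spark job.
    row_count = 1  # stand-in for a real count query
    if row_count == 0:
        raise ValueError(f"Data quality check failed: no rows ingested for {ds}")


with DAG(
    dag_id="lending_loans_daily_ingest",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                 # daily at 02:00 UTC
    catchup=False,
) as dag:
    ingest = SparkSubmitOperator(
        task_id="ingest_loans",
        application="jobs/ingest_loans.py",     # hypothetical Spark job
        application_args=["--ds", "{{ ds }}"],  # partition date from Airflow
    )

    quality_check = PythonOperator(
        task_id="row_count_check",
        python_callable=check_row_count,
    )

    ingest >> quality_check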
Qualifications
The Must-Haves
At least 5 years of relevant experience in developing scalable, secure, distributed, fault-tolerant, resilient, and mission-critical data pipelines.
Proficiency in at least one of the following programming languages: Python, Scala, or Java.
Strong understanding of big data technologies like Flink, Spark, Trino, Airflow, Kafka, and familiarity with AWS services like EMR, Glue, Redshift, Kinesis, and Athena.
Experience with SQL, schema design and data modeling.
Hands-on experience with AWS storage solutions (S3, DynamoDB) and query engines (Athena, Redshift Spectrum).
Experience with different database types: NoSQL, columnar, and relational.
You have a hunger for consuming data and new data technologies, and for discovering new and innovative solutions to the company's data needs.
Familiarity with in-house and AWS-native data platform tools, and the ability to use them efficiently to set up data pipelines.
Ability to design event-driven architectures using SNS, SQS, Lambda, or similar AWS serverless technologies (an illustrative sketch follows this list).
You are organized and insightful, and can communicate your observations well, both in writing and verbally, to your stakeholders to share updates and coordinate the development of data pipelines.
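Purely as an illustration of the event-driven design mentioned above, the following sketch shows a minimal AWS Lambda handler in Python that processes messages delivered to SQS (for example, fanned out from SNS). The queue wiring, event fields, and handler name are hypothetical, and error handling and retries are deliberately omitted.

# Purely illustrative Lambda handler for an SQS-triggered, event-driven flow.
import json


def handler(event, context):
    """Process loan-event messages delivered by SQS (e.g. fanned out from SNS)."""
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])  # SQS message body
        # SNS -> SQS subscriptions wrap the original payload in a "Message" field.
        payload = json.loads(body["Message"]) if "Message" in body else body
        # A real pipeline might validate the payload and write it to S3 or Kinesis.
        print(f"received loan event: {payload.get('event_type', 'unknown')}")
        processed += 1
    return {"processed": processed}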
The Nice-to-Haves
You have a degree or higher in Computer Science, Electronics or Electrical Engineering, Software Engineering, Information Technology or other related technical disciplines.
You have a good understanding of Data Structures, Algorithms, or Machine Learning models.
Additional Information
Benefits at Grab:
We care deeply about your well-being and are committed to supporting you every step of the way. Here are some of the global benefits we offer:
Protect and provide for your loved ones with peace of mind, knowing we have your back with Term Life Insurance and comprehensive Medical Insurance.
Craft a benefits package that suits your unique needs and aspirations with GrabFlex, because we believe in empowering you to thrive.
Embrace the magic of new life and create lasting memories with your family through Maternity and Paternity Leave.
Life can be overwhelming, but you're never alone. Our confidential Grabber Assistance Programme is here to guide and uplift you and your loved ones through life's challenges.
Your well-being is our priority. Benefit from our holistic well-being initiatives through Wellbeing@Grab, including health programmes, informative webinars, and vibrant carnivals.
Achieve a harmonious work-life balance with our FlexWork arrangements, allowing you to adapt and thrive in your personal and professional life.
We’ve got many different benefits, hyper-localised in each country. Speak to your recruiter during your interview to find out more.
What we stand for at Grab:
We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique. If you require accommodations to fully participate in the recruitment process, you are encouraged to include your request(s) when applying.
We deliver the greatest impact and ideas when we bring together diverse perspectives. It is what enables us to spread opportunities to Grabbers and our partners. It’s not a box-ticking exercise; it’s who we are.