Data Engineer

Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia

Love, Bonito

Love, Bonito offers the latest in women's fashion. Our thoughtfully designed styles are made to inspire confidence and empower the everyday Asian woman.


About Us

We pride ourselves as the best and largest vertically integrated, omni-channel women's fashion brand in the region. Founded in 2010, we have grown to 250 people strong, proudly headquartered in Singapore with country offices in Indonesia and Malaysia and an omni-channel presence across these 3 markets. In addition to our retail franchise in Cambodia, we ship internationally to 15 markets (Hong Kong, China, Philippines, Australia, New Zealand, US, Canada, Macau, Japan, Korea, Vietnam, Thailand, Myanmar, Cambodia & Brunei).

We are taking our definition of new female retail global, with our sights set on becoming the most thoughtful brand for the everyday woman.

About the Role

We are looking for a Junior Data Engineer to join our team and help build, maintain, and optimise our data pipelines. In this role, you will work closely with data analysts, data scientists, and software engineers to ensure efficient data flow, storage, and transformation. This is a great opportunity for someone passionate about data infrastructure, ETL pipelines, and cloud technologies to grow their skills in a fast-paced environment.

Accountabilities

  • Develop and maintain ETL pipelines for efficient data processing and integration
  • Ensure data quality, reliability, and integrity throughout all data processes
  • Create and maintain data models that support analytics, reporting, and machine learning needs
  • Support the development of RESTful APIs for ML model serving
  • Collaborate with data analysts, scientists, and relevant business stakeholders to understand data requirements and deliver scalable solutions
  • Maintain data security, governance, and compliance standards
  • Support visualisation tools (Tableau, Metabase)
  • Troubleshoot pipeline failures and implement preventative measures
  • Document comprehensive data flows, transformations, and technical processes
  • Build dashboards using various tools (Tableau, Metabase)

Critical success factors & key challenges (KPIs)

  • Data Pipeline SLA Adherence – 99.9%+ uptime target for data services
  • Data Quality and Accuracy (%) – Ensuring transformation correctness and data integrity
  • ETL Pipeline Success Rate (%) – Minimizing failed job runs and errors
  • Resource Efficiency ($) – Optimising runtime costs and processing speed

Requirements

Knowledge, experience & capabilities

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3–4 years of data engineering experience
  • Proficiency in SQL and big data systems (Hadoop, Hive, Apache Spark, etc.)
  • Strong programming skills in Python, PySpark, and Scala
  • Experience with data lakehouse architecture and Delta Lake concepts
  • Strong problem-solving skills and attention to detail
  • Advantageous to have knowledge of:
    • Workflow orchestration (Airflow)
    • Cloud data warehouses (AWS Redshift/GCP BigQuery)
    • AWS/GCP Cloud Services
    • Containerisation (Docker)
    • CI/CD practices for data pipelines

Benefits

1. Flexible Work Arrangement

  • Hybrid work and adjustable hours, as long as you are present during our core working hours

2. Staff Wellness

  • Comprehensive corporate insurance (fully covered visits at our panel clinics, maternity reimbursement, etc.)

3. Learning and Career Development

  • Learning and development (i.e. subscription plans to best-in-class resources, personal development fund etc)
  • Dedicated leadership training for those with managerial responsibilities
  • Friday afternoons off for learning

4. #TeamLB perks

  • Generous staff discount off LB products
  • Corporate partnerships with a variety of companies
  • Welcome to #TeamLB swag and store gift cards (get your LB work outfit on us!)
  • Employee driven peer-to-peer recognition platform to honour and celebrate everyday achievements
