Databricks Data Engineer

Singapore

Job Summary:

We are seeking a skilled and motivated Databricks Data Engineer to design, build, and maintain scalable data pipelines and data platforms using Databricks on Azure/AWS. You will work closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making across the organization.

Key Responsibilities:

  • Design and implement robust, scalable, and high-performance data pipelines using Apache Spark on Databricks.
  • Develop and maintain ETL/ELT processes using Databricks Notebooks, Delta Lake, and Spark SQL.
  • Optimize and troubleshoot large-scale distributed data processing jobs.
  • Collaborate with data scientists and analysts to deliver data products that support analytics, reporting, and machine learning use cases.
  • Develop and enforce best practices in data engineering and DevOps processes.
  • Implement CI/CD pipelines for Databricks workflows using tools such as Azure DevOps, GitHub Actions, or Terraform.
  • Ensure data quality, integrity, security, and compliance across platforms.
  • Support migration of data systems to the cloud or from legacy environments to Databricks.

Requirements

Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in data engineering or related roles.
  • Proficiency with Apache Spark and the Databricks platform.
  • Strong experience with Python, SQL, and PySpark.
  • Experience with Delta Lake, Lakehouse Architecture, and data partitioning strategies.
  • Hands-on experience with cloud platforms (Azure, AWS, or GCP).
  • Familiarity with Databricks Workflows, Unity Catalog, and DBFS.
  • Experience working with structured and unstructured data at scale.
  • Good understanding of data warehousing, data modeling, and data governance concepts.

Preferred Qualifications:

  • Databricks certification (e.g., Databricks Certified Data Engineer Associate or Professional).
  • Experience integrating Databricks with Power BI, Tableau, or other BI tools.
  • Experience with Terraform, Databricks REST APIs, and infrastructure-as-code (IaC).
  • Knowledge of MLflow, Feature Store, and MLOps best practices.
  • Experience with streaming technologies like Structured Streaming or Kafka.
Category: Engineering Jobs

Region: Asia/Pacific
Country: Singapore