Data Engineer - Zurich Asuransi Indonesia

Jakarta, ID


Job Summary

Data engineers are the data wranglers of the business world. They transform raw data into a usable format for analysis, building the infrastructure that empowers data scientists and analysts to unlock valuable insights. By identifying trends and developing strategies, they bridge the gap between data and actionable decisions, ultimately driving organizational efficiency and performance.

Job Qualifications

 

  • 3+ years’ experience with Spark SQL, Python, and PySpark in data engineering workflows
  • Strong proficiency in dimensional modeling and star schema design for analytical workloads
  • Experience implementing automated testing and CI/CD pipelines for data workflows
  • Familiarity with GitHub operations and collaborative development practices
  • Demonstrated ability to optimize data engineering jobs for performance and cost efficiency
  • Experience with cloud data services and infrastructure (AWS, Azure, or GCP)
  • Proficiency with IDE tools such as Visual Studio Code for efficient development
  • Experience with the Databricks platform is a plus
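As context for the dimensional-modeling requirement above, here is a minimal sketch of the star-schema idea: a fact table stores surrogate keys that point into dimension tables, and analytical queries aggregate fact measures by dimension attributes. All table and column names (dim_customer, fact_premium, etc.) are illustrative only, not an actual schema.

```python
# Star-schema sketch in plain Python: dimensions keyed by surrogate key,
# facts referencing them. Names are hypothetical.

dim_customer = {
    1: {"customer_id": "C-100", "segment": "Retail"},
    2: {"customer_id": "C-200", "segment": "Corporate"},
}

fact_premium = [
    {"customer_sk": 1, "date_sk": 20240101, "premium_amount": 150.0},
    {"customer_sk": 2, "date_sk": 20240101, "premium_amount": 900.0},
]

def premium_by_segment(facts, customers):
    """Aggregate a fact measure by a dimension attribute (a fact->dim join)."""
    totals = {}
    for row in facts:
        segment = customers[row["customer_sk"]]["segment"]
        totals[segment] = totals.get(segment, 0.0) + row["premium_amount"]
    return totals

print(premium_by_segment(fact_premium, dim_customer))
# {'Retail': 150.0, 'Corporate': 900.0}
```

In a warehouse, the same shape is expressed as a SQL join between the fact table and its dimensions; the surrogate-key indirection is what lets dimension attributes change without rewriting facts.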

Job Functions

 

  • Design and implement ETL/ELT pipelines using Spark SQL and Python within Databricks Medallion architecture
  • Develop dimensional data models following star schema methodology with proper fact and dimension table design, SCD implementation, and optimization for analytical workloads
  • Optimize Spark SQL and DataFrame operations through appropriate partitioning strategies, clustering and join optimizations to maximize performance and minimize costs
  • Build comprehensive data quality frameworks with automated validation checks, statistical profiling, exception handling, and data reconciliation processes
  • Establish CI/CD pipelines incorporating version control and automated testing (unit, integration, and smoke tests, among others)
  • Implement data governance standards including row-level and column-level security policies for access controls and compliance requirements
  • Create and maintain technical documentation including ERDs, schema specifications, data lineage diagrams, and metadata repositories
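The SCD implementation mentioned above can be pictured as a Type 2 update: when a tracked attribute changes, the current dimension row is expired and a new current row is appended. This plain-Python sketch is illustrative only; on Databricks this would typically be a Delta Lake MERGE, and the field names here are assumptions.

```python
from datetime import date

def scd2_upsert(dim_rows, key, new_attrs, as_of):
    """SCD Type 2 sketch: expire the open row for `key` if its attributes
    changed, then append a new current row. Hypothetical field names."""
    for row in dim_rows:
        if row["nat_key"] == key and row["is_current"]:
            if row["attrs"] == new_attrs:
                return dim_rows          # no change: keep the current row
            row["is_current"] = False    # expire the old version
            row["end_date"] = as_of
            break
    dim_rows.append({
        "nat_key": key,
        "attrs": new_attrs,
        "start_date": as_of,
        "end_date": None,
        "is_current": True,
    })
    return dim_rows

dim = []
scd2_upsert(dim, "C-100", {"city": "Jakarta"}, date(2024, 1, 1))
scd2_upsert(dim, "C-100", {"city": "Bandung"}, date(2024, 6, 1))
# dim now holds two versions: an expired Jakarta row and a current Bandung row
```

Keeping both versions is what lets analytical queries report history "as it was" at any point in time, at the cost of a wider dimension table.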
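The data-quality bullet can be sketched as a set of rule functions applied to each batch, where each rule reports the indexes of failing rows. This is a hedged illustration of the pattern, not a framework recommendation; the column names are made up.

```python
# Data-quality sketch: each rule returns failing row indexes; a batch
# report lists only the rules that found failures. Names are hypothetical.

def check_not_null(rows, column):
    """Rows where the column is missing or null."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_range(rows, column, lo, hi):
    """Rows where a present value falls outside [lo, hi]."""
    return [i for i, r in enumerate(rows)
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

def validate_batch(rows):
    failures = {
        "policy_id_not_null": check_not_null(rows, "policy_id"),
        "premium_in_range": check_range(rows, "premium", 0, 1_000_000),
    }
    return {name: idx for name, idx in failures.items() if idx}

batch = [
    {"policy_id": "P-1", "premium": 500.0},
    {"policy_id": None, "premium": 120.0},
    {"policy_id": "P-3", "premium": -10.0},
]
print(validate_batch(batch))
# {'policy_id_not_null': [1], 'premium_in_range': [2]}
```

In a pipeline, a non-empty report would typically quarantine the failing rows for reconciliation rather than fail the whole batch.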
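The automated-testing bullet boils down to keeping transformations as pure functions so CI can verify them on every commit. A small hypothetical example, assuming a helper that parses Indonesian-formatted currency strings:

```python
# Unit-testing sketch: a pure transformation plus the kind of test a
# CI pipeline runs on each push. The helper is illustrative only.

def normalize_currency(amount_str):
    """Parse strings like 'IDR 1.500.000' (dot as thousands separator)
    into a float."""
    digits = amount_str.replace("IDR", "").strip().replace(".", "")
    return float(digits)

def test_normalize_currency():
    assert normalize_currency("IDR 1.500.000") == 1_500_000.0
    assert normalize_currency("250") == 250.0

test_normalize_currency()
```

Because the function takes plain values and touches no cluster or database, the same test runs identically on a laptop and in CI.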

Why Zurich

 

At Zurich, we like to think outside the box and challenge the status quo. We take an optimistic approach by focusing on the positives and constantly asking, “What can go right?”

We are an equal opportunity employer who knows that each employee is unique - that’s what makes our team so great! 
Join us as we constantly explore new ways to protect our customers and the planet.
 

  • Location(s):  ID - Head Office - MT Haryono 
  • Remote working:
  • Schedule: Full Time
  • Recruiter name: Ayu Candra Sekar Rurisa
  • Closing date:


Region: Asia/Pacific
Country: Indonesia
