Data Engineer - Data Warehouse

Indonesia - Jakarta, Green Office Park 1

Traveloka

Explore the world & live life your way. Best prices for hotels, flights, & attractions. Plan your own perfect trip.



It's fun to work in a company where people truly BELIEVE in what they're doing!


Job Description

The Data Team at Traveloka is a diverse group of Data Analysts, Scientists, and Engineers working together as critical partners to the business. We proactively anticipate customer needs and market trends, and from our vantage point we have a comprehensive view of our customers and serve their needs in a responsible, privacy-conscious way. From AI/ML products to analytics and dashboards, reproducibility and full version control are the default in our systems, so you can easily learn from your teammates and build on foundations that have already been developed. Our collaborative team will ensure that you're more productive than you've ever been.

The data warehouse engineering team plays an important role in the data team. We build, govern and maintain data warehouse platforms and environments as the foundation and source for every data product created by data analysts and scientists.

  • Define data model conventions and governance

  • Design, develop, and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)

  • Design, develop, and maintain the data pipeline framework (a combination of open-source and internal software used to build and govern data pipelines)

  • Create and manage data pipeline infrastructure

  • Continuously seek ways to make existing data processing more cost- and time-efficient

  • Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse


Requirements

  • Fluent in Python and advanced SQL

  • Preferably familiar with data warehouse environments (e.g., Google BigQuery, AWS Redshift, Snowflake)

  • Preferably familiar with data transformation or processing frameworks (e.g., dbt, Dataform, Spark, Hive)

  • Preferably familiar with data processing technologies (e.g., Google Dataflow, Google Dataproc)

  • Preferably familiar with orchestration tools (e.g., Airflow, Argo, Azkaban)

  • Understand data warehousing concepts (e.g., Kimball, Inmon, Data Vault) and have experience in data modeling and in measuring and improving data quality

  • Preferably understand basic containerization and microservice concepts (e.g., Docker, Kubernetes)

  • Knowledge of machine learning, building robust APIs, and web development is an advantage

  • Minimum of 2 to 3 years of experience working in data.


If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

