Data Engineer - Data Warehouse
Indonesia - Jakarta, Green Office Park 1
Traveloka
Explore the world & live life your way. Best prices for hotels, flights, & attractions. Plan your own perfect trip. It's fun to work in a company where people truly BELIEVE in what they're doing!
Job Description
The Data Team at Traveloka consists of a diverse group of Data Analysts, Scientists, and Engineers working together as critical partners to the business. We proactively anticipate the needs of our customers and the business, as well as market trends. From our vantage point, we have a comprehensive view of our customers and serve their needs in a responsible, privacy-conscious way. From AI/ML products to analytics and dashboards, reproducibility and full version control are the default in our systems, so you can easily learn from your teammates and build on foundations that have already been developed. Our collaborative team will ensure that you're more productive than you've ever been.
Define data model conventions and governance
Design, develop, and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)
Continuously seek ways to optimize existing data processing to be cost and time efficient
Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse
The data warehouse engineering team plays an important role within the Data Team. We build, govern, and maintain the data warehouse platforms and environments that serve as the foundation and source for every data product created by data analysts and scientists.
Requirements
A degree in Computer Science or equivalent from a reputable university
Fluent in Python and advanced SQL
Preferably familiar with data warehouse environments (e.g., Google BigQuery, AWS Redshift, Snowflake)
Preferably familiar with data transformation or processing frameworks (e.g., dbt, Dataform, Spark, Hive)
Preferably familiar with data processing technologies (e.g., Google Dataflow, Google Dataproc)
Preferably familiar with orchestration tools (e.g., Airflow, Argo, Azkaban)
Understand data warehousing concepts (e.g., Kimball, Inmon, Data Vault), with experience in data modeling and in measuring and improving data quality
Preferably understand basic containerization and microservice concepts (e.g., Docker, Kubernetes)
Knowledge of machine learning, building robust APIs, or web development is an advantage
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Perks/benefits: Career development