Data Engineer - Data Warehouse
Indonesia - Jakarta, Green Office Park 1
Traveloka
It's fun to work in a company where people truly BELIEVE in what they're doing!
Job Description
The Data Team at Traveloka consists of a diverse team of Data Analysts, Scientists and Engineers, working together as critical partners to the business. We proactively anticipate customer needs and market trends. From our vantage point, we have a comprehensive view of our customers and serve their needs in a responsible, privacy-conscious way. From AI/ML products to analytics to dashboards, reproducibility and full version control are defaults in our systems, so you can easily learn from your teammates and build on foundations that have already been developed. Our collaborative team will ensure that you're more productive than you've ever been.
The data warehouse engineering team plays an important role in the data team. We build, govern and maintain the data warehouse platforms and environments that serve as the foundation and source for every data product created by data analysts and scientists.
- Has 3 to 4 years of working experience in a similar or relevant area
- Define data model conventions and governance
- Design, develop and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)
- Design, develop and maintain the data pipeline framework (a combination of open-source and internal software used to build and govern data pipelines)
- Create and manage data pipeline infrastructure
- Continuously seek ways to make existing data processing more cost- and time-efficient
- Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse
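To give a flavor of the data-quality monitoring mentioned above, here is a minimal, hypothetical sketch of a null-rate check. All table, column, and threshold names are illustrative; in practice a check like this would run against a warehouse such as BigQuery, while this sketch uses an in-memory SQLite table purely to stay self-contained.

```python
import sqlite3

def null_rate(conn, table, column):
    """Fraction of rows where `column` is NULL (0.0 for an empty table)."""
    cur = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) "
        f"FROM {table}"
    )
    total, nulls = cur.fetchone()
    return (nulls or 0) / total if total else 0.0

def check_quality(conn, table, column, max_null_rate=0.05):
    """Return (passed, observed_rate); a monitoring job would alert when passed is False."""
    rate = null_rate(conn, table, column)
    return rate <= max_null_rate, rate

# Demo: an in-memory table standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, hotel_id INTEGER)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?)",
    [(1, 10), (2, None), (3, 11), (4, 12)],
)
passed, rate = check_quality(conn, "bookings", "hotel_id")
print(passed, rate)  # one NULL out of four rows -> rate 0.25, check fails
```

In a production setup, checks like this are typically expressed declaratively (e.g. as dbt tests) and scheduled by an orchestrator such as Airflow, with alerting wired to the failure path.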
Requirements
- Fluent in Python and advanced SQL
- Preferably familiar with data warehouse environments (e.g. Google BigQuery, AWS Redshift, Snowflake)
- Preferably familiar with data transformation or processing frameworks (e.g. dbt, Dataform, Spark, Hive)
- Preferably familiar with data processing technologies (e.g. Google Dataflow, Google Dataproc)
- Preferably familiar with orchestration tools (e.g. Airflow, Argo, Azkaban)
- Understands data warehousing concepts (e.g. Kimball, Inmon, data vault), with experience in data modeling and in measuring and improving data quality
- Preferably understands basic containerization and microservice concepts (e.g. Docker, Kubernetes)
- Knowledge of machine learning, building robust APIs, and web development is an advantage
- Able to build and maintain good relationships with stakeholders
- Able to translate business requirements into data warehouse modeling specifications
- Able to demonstrate creative problem-solving skills
- A team player who loves to collaborate with others and can work independently when needed
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Perks/benefits: Career development