DWH Developer - Data Lakehouse Section, Analytics Data Engineering Department

Rakuten Crimson House

Rakuten




Job Description:

Business Overview

Rakuten Group has a customer base of almost 100 million in Japan and roughly 1 billion globally, and provides more than 70 services across a wide variety of fields, including e-commerce, payments, financial services, telecommunications, media, and sports.

Position: 

Why We Hire 

We are looking for an end-to-end data engineer with experience in the development of data pipeline platforms and in the modelling and querying of data for Business Intelligence purposes.

The technical architecture comprises a MinIO and Hadoop platform, with data ingested through Python/Linux batch processes as well as Kafka.

This is a DevOps role in which you would be responsible for supporting existing production data pipelines and ad hoc BI enhancement requests, while expanding our new MinIO, Hadoop, and Google BigQuery-based data platform.

  

Position Details 

- Develop, enhance and maintain data pipeline applications and data models on a rotational on-call basis in a 24x7x365 environment. 

- Troubleshoot the causes of ad hoc daily production failures and provide effective, documented solutions.

- Drive continuous improvement initiatives in data ingestion performance, ingestion models, data integrity, and data availability.

- Work with the business to analyze and document new functionality requests, and manage their implementation within an Agile ownership model.

- Convert BI-related business requirements into mapping documents; design and model new or existing data marts, applying suitable modelling methodologies.

  

Mandatory Qualifications: 

- B.S. in Computer Science or a related field.

- More than 3 years' experience with BI and data-driven development.

- Expert SQL capability in querying big data / large data sets (MinIO, Hadoop, etc.) to extract BI insights.

- Programming languages such as Python, Scala, PL/SQL, or Java.

- Application development using workflow engines and job schedulers such as Airflow.

- Development and operation of data pipelines leveraging big data technologies such as Spark (including SQL development), MapReduce, Hive, Kafka, Sqoop, and NoSQL databases, as well as traditional database and file-based data integration solutions.

- Database development (e.g. Oracle, MySQL, SQL Server, DB2).

- Shell-scripting languages such as Bash. 

- Formal analysis and documentation of BI solutions. 

- Distributed version control systems such as Git.

- Initiative and the ability to work both independently and in a team; we are an Agile environment.

  

Desired Qualifications: 

- BI reporting tools, including administration, modeling, and report/dashboard development. 

- BI modelling of data marts using hybrid ER, Kimball, or Data Vault methodologies.

- Experience with Google BigQuery.

- Experience with AtScale and Presto.

- Operational experience in developing and supporting high-availability applications and systems.

- Ability to self-manage and to manage small projects.

#engineer #datascientist #technologyservicediv





Region: Asia/Pacific
Country: Japan
