Automation Data Engineer

Jakarta Selatan, DKI Jakarta, Indonesia

We are currently seeking a passionate Data Engineer to join our Intelligent Automation Team, help us find the best ways to operate on our vast amounts of data, and help us make smarter decisions to deliver even better products.

Your primary focus will be to design, implement, and forecast the cost of the data pipelines and infrastructure the team uses for ETL, analysis, reporting, visualization, and dashboarding of large amounts of data. You will also help the team produce the best-quality data for process mining to improve and support Robotic Process Automation. With your help, we want to make process discovery, monitoring, optimization, and modelling seamless yet reliable. This role relies heavily on technical skills, creative solutions, thorough documentation, and timely delivery.


Responsibilities: 

  1. Support Data Scientists in deploying models into production.
  2. Lead a team of Data Engineers working with the wider team.
  3. Design and manage data architecture and instance migrations to support the team's operations.
  4. Design and create ETL pipelines that turn multiple event logs into data ready for Process Mining analysis.
  5. Support Data Scientists, Data Analysts, and other teams in preparing data for visualization and modelling, and suggest the most efficient way to perform these tasks.
  6. Collaborate with Data Scientists, Data Analysts, RPA Engineers, product management, and engineering departments to understand the company's needs and devise possible solutions.
  7. Enhance data collection procedures to include information relevant to building our analytical systems.
  8. Manage, process, cleanse, and verify the integrity of data used for analysis, storage, and processing.
  9. Handle ad-hoc tasks to pull, modify, and store data in the platforms used by the team.


Requirements:

  1. Minimum of 2 years of experience as a Data Engineer.
  2. Strong command of Python is a must (other programming languages are a plus).
  3. Advanced Docker skills for machine learning deployment (APIs).
  4. Basic skills in Kubernetes, Airflow, Docker Swarm, or similar.
  5. Experience in cloud computing; knowing how to design cloud architecture is a must.
  6. Strong experience with code versioning: Git, Bitbucket (command line).
  7. Strong knowledge of data storage and processing platforms: Elasticsearch/OpenSearch, SQL platforms, Spark, Kafka.
  8. Able to write shell scripts (Bash).
  9. Able to write SQL.
  10. Proficient in English.
  11. Knowledge of MLOps (MLflow) is a plus.

Category: Engineering Jobs

Tags: Airflow APIs Architecture Bitbucket Data pipelines Docker Elasticsearch Engineering ETL Git Kafka Kubernetes Machine Learning MLflow MLOps OpenSearch Pipelines Python Robotics RPA Spark SQL

Region: Asia/Pacific
Country: Indonesia
