Junior Data Engineer

Singapore


We are seeking a skilled and motivated Data Engineer with expertise in Hadoop, Spark, OpenShift Container Platform (OCP), and DevOps practices. As a Data Engineer, you will be responsible for designing, developing, and maintaining efficient data pipelines and processing large-scale datasets. Your expertise in Hadoop, Spark, OCP, and DevOps will be crucial in ensuring the availability, scalability, and reliability of our ML solutions.

Your main responsibilities will include:

  • Implement data transformation, aggregation, and enrichment processes to support various data analytics and machine learning initiatives
  • Collaborate with cross-functional teams to understand data requirements and translate them into effective data engineering solutions
  • Ensure data quality and integrity throughout the data processing lifecycle
  • Design and deploy data engineering solutions on OpenShift Container Platform (OCP) using containerization and orchestration techniques
  • Optimize data engineering workflows for containerized deployment and efficient resource utilization
  • Collaborate with DevOps teams to streamline deployment processes, implement CI/CD pipelines, and ensure platform stability
  • Implement data governance practices, data lineage, and metadata management to ensure data accuracy, traceability, and compliance
  • Monitor and optimize data pipeline performance, troubleshoot issues, and implement necessary enhancements
  • Implement monitoring and logging mechanisms to ensure the health, availability, and performance of the data infrastructure
  • Document data engineering processes, workflows, and infrastructure configurations for knowledge sharing and reference
  • Stay updated with emerging technologies, industry trends, and best practices in data engineering and DevOps
  • Provide technical leadership, mentorship, and guidance to junior team members, fostering a culture of continuous learning and innovation and contributing to the continuous improvement of the analytics capabilities within the bank

Requirements

  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • Proven experience as a Data Engineer, working with Hadoop, Spark, and data processing technologies in large-scale environments
  • Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Pig, etc.)
  • Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes
  • Proficiency in languages and frameworks commonly used in data engineering, such as Spark, Python, Scala, or Java
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket)
  • Experience with Grafana, Prometheus, and Splunk will be an added benefit
  • Strong problem-solving and troubleshooting skills with a proactive approach to resolving technical challenges
  • Ability to manage multiple priorities, meet deadlines, and deliver high-quality results in a fast-paced environment

Category: Engineering Jobs

Tags: Ansible Bitbucket CI/CD Computer Science Data Analytics Data governance Data pipelines Data quality DevOps Docker Engineering Hadoop HDFS Java Jenkins Kubernetes Machine Learning Pipelines Python Scala Spark Splunk

Region: Asia/Pacific
Country: Singapore
