Senior / DataOps Engineer

StarHub Green

StarHub


JOB PURPOSE

  • The Data Platform Team is responsible for designing, implementing, and managing a modern data platform that embraces the principles of data mesh, empowering teams to create and manage their own data products. Our mission is to deliver high-quality, scalable data solutions that drive business value across the organization.
  • As a key member of this team, you will play a critical role in ensuring the reliability, scalability, and efficiency of our data infrastructure. Your focus will be on enabling seamless data flow and dependable analytics engines, and on supporting the creation and management of data products that align with business needs.
  • In this role, you will work closely with data engineers and data stewards to design, automate, and optimize data operations, with a strong emphasis on cloud infrastructure and containerization. You will keep data services reliable and scalable by implementing robust pipelines, leveraging cloud platforms, and managing containerized environments, and you will drive efficiency through automation and continuous improvement, integrating the latest tools and practices to support analytics and GenAI use cases.

 

KEY RESPONSIBILITIES

  • Collaborate with solution architects and the infrastructure team to design and implement cloud-based and on-prem architectures, managing infrastructure across AWS and OpenShift Container Platform (OCP).
  • Deploy and manage containerized applications, utilizing Red Hat OpenShift on both AWS and on-prem environments.
  • Design and maintain scalable data pipelines and services, along with the frameworks and workflows for data ingestion and ETL processes. Implement data orchestration to ensure seamless data flow across the platform (a minimal ETL sketch follows this list).
  • Automate infrastructure deployment using Ansible, Terraform, and CLI tools, streamlining the provisioning and management of resources to support efficient data platform operations and data product delivery (see the Terraform wrapper sketch after this list).
  • Implement and manage security measures, including IAM roles, policies, Security Groups, and Network ACLs, to protect data and infrastructure across cloud and on-prem platforms (see the Security Group sketch after this list).
  • Set up and maintain monitoring systems to optimize performance and ensure reliability in both cloud and on-prem setups (see the Prometheus metrics sketch after this list).
  • Establish and manage CI/CD pipelines for continuous integration, testing, and deployment of data platform components across hybrid environments.
  • Maintain comprehensive documentation of data infrastructure and processes, ensuring all procedures are well-documented and accessible.
  • Offer training and support to internal and external team members on DataOps practices, tools, and processes to ensure consistent and effective use of the data platform.
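
As a minimal illustration of the ingestion/ETL responsibility above, the sketch below wires an extract-transform-load run together in Python against S3-compatible object storage. The bucket names, object keys, and the `customer_id` clean-up are hypothetical placeholders, not StarHub systems.

```python
"""Minimal ETL sketch: extract a raw CSV from object storage, apply a
transform, and load the curated result back. All names are hypothetical."""
import csv
import io

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

RAW_BUCKET = "raw-zone"          # hypothetical source bucket
CURATED_BUCKET = "curated-zone"  # hypothetical target bucket


def extract(key: str) -> list[dict]:
    """Pull a raw CSV object and parse it into row dicts."""
    body = s3.get_object(Bucket=RAW_BUCKET, Key=key)["Body"].read()
    return list(csv.DictReader(io.StringIO(body.decode("utf-8"))))


def transform(rows: list[dict]) -> list[dict]:
    """Example transform: drop rows with no customer id, normalise casing."""
    return [
        {**row, "customer_id": row["customer_id"].strip().upper()}
        for row in rows
        if row.get("customer_id")
    ]


def load(rows: list[dict], key: str) -> None:
    """Serialise the curated rows back to CSV in the target bucket."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=CURATED_BUCKET, Key=key, Body=buf.getvalue())


if __name__ == "__main__":
    load(transform(extract("orders/2024-01-01.csv")), "orders/2024-01-01.csv")
```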
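
For the automation responsibility, one common pattern is a thin Python wrapper around the Terraform CLI so provisioning runs the same way locally and in pipelines. A minimal sketch, assuming Terraform is on PATH; the stack directory is a hypothetical example.

```python
"""Sketch: drive a Terraform init/plan/apply cycle from Python."""
import subprocess


def run(args: list[str], cwd: str) -> None:
    """Run a terraform subcommand in the stack directory, raising on failure."""
    subprocess.run(["terraform", *args], cwd=cwd, check=True)


def provision(stack_dir: str) -> None:
    run(["init", "-input=false"], stack_dir)                  # fetch providers/modules
    run(["plan", "-out=tfplan", "-input=false"], stack_dir)   # produce a reviewable plan
    run(["apply", "tfplan"], stack_dir)                       # saved plans apply without a prompt


if __name__ == "__main__":
    provision("./stacks/data-platform")  # hypothetical stack directory
```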
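
For the security responsibility, Security Group rules can be codified with boto3 rather than configured by hand. A minimal sketch; the group name, VPC id, and CIDR range are illustrative assumptions.

```python
"""Sketch: codify a Security Group and its ingress rules with boto3."""
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a (hypothetical) platform VPC.
group = ec2.create_security_group(
    GroupName="data-platform-ingest",
    Description="Ingress rules for data ingestion services",
    VpcId="vpc-0123456789abcdef0",
)

# Allow HTTPS in from the internal range only; everything else stays closed.
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "internal only"}],
        }
    ],
)
```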
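
For the monitoring responsibility, a pipeline process can expose its own health metrics for Prometheus to scrape (and Grafana to chart). A minimal sketch using the prometheus_client library; metric names, the port, and the simulated batch size are illustrative.

```python
"""Sketch: expose pipeline health metrics on an HTTP endpoint for Prometheus."""
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total",
                         "Rows handled by the ingestion pipeline")
LAST_RUN_SECONDS = Gauge("pipeline_last_run_duration_seconds",
                         "Wall-clock duration of the most recent run")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        start = time.monotonic()
        batch = random.randint(100, 1000)  # stand-in for real pipeline work
        ROWS_PROCESSED.inc(batch)
        LAST_RUN_SECONDS.set(time.monotonic() - start)
        time.sleep(30)
```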

 

QUALIFICATIONS

Requirements:

  • Degree in IT, Computer Science, Data Analytics, or a related field.
  • 2 to 4 years of experience in Data Engineering, DevOps, or related fields.
  • Proven experience working in a mature, DevOps-enabled environment with well-established cloud practices, demonstrating the ability to operate in a high-performing, agile team.
  • Familiarity with cloud platforms (AWS, AliCloud, GCP) and experience managing infrastructure across public cloud and on-prem environments, particularly with OpenShift Container Platform (OCP).
  • Practical experience using automation tools such as Ansible, Terraform, and CLI tools for provisioning, configuring, and managing infrastructure across hybrid environments.
  • Competence in designing and implementing data ingestion, ETL frameworks, dashboard ecosystems, and data orchestration to ensure seamless data flow and integration for large-scale datasets.
  • Hands-on experience with Linux systems, Object Storage, Spark, and Presto query engines, and the ability to translate functional specifications into design specifications (a Spark read-from-object-storage sketch follows this list).
  • Working knowledge of CI/CD best practices, with experience in setting up and managing CI/CD pipelines for continuous integration, testing, and deployment.
  • Experience in implementing security measures, including IAM roles, policies, Security Groups, and Network ACLs, to protect data across cloud and on-prem platforms.
  • Familiarity with monitoring and optimization tools such as AWS CloudWatch, Prometheus, and Grafana to help ensure performance and reliability in data systems.
  • Good problem-solving and communication skills, especially in explaining technical concepts to non-technical data users, and the ability to collaborate effectively with data engineers, data stewards, and other stakeholders.
  • Ability to maintain clear documentation of data infrastructure and processes, with some experience in providing training on DataOps practices to team members.
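
To make the Spark-plus-Object-Storage requirement above concrete, the sketch below reads Parquet from an S3-compatible bucket and writes a daily rollup. Paths and column names are hypothetical, and it assumes the Hadoop s3a connector and credentials are already configured on the cluster.

```python
"""Sketch: read Parquet from object storage with Spark and aggregate by day."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-rollup").getOrCreate()

# s3a:// works against S3-compatible object stores given the right connector/creds.
events = spark.read.parquet("s3a://curated-zone/events/")

daily = (
    events
    .groupBy(F.to_date("event_ts").alias("event_date"))  # hypothetical timestamp column
    .agg(F.count("*").alias("events"))
)

daily.write.mode("overwrite").parquet("s3a://curated-zone/rollups/daily/")
spark.stop()
```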

Preferred:

  • Certifications in cloud technology platforms (such as cloud architecture, container platforms, systems, and/or network virtualization).
  • Knowledge of telecom networks, including mobile and fixed networks, will be an added advantage.
  • Familiarity with data fabric and data mesh concepts, including their implementation and benefits in distributed data environments, is a bonus.
     