Data Engineer
StarHub Green
StarHub
About the Team:
The Customer Lifecycle Management (CLM) team at StarHub is dedicated to understanding, enhancing, and optimizing the customer journey. From acquisition to retention, the CLM team employs data-driven strategies to provide unparalleled customer experiences. Through a combination of data science, business intelligence, customer insights, NPS, and digital analytics, the CLM team ensures that StarHub's offerings are aligned with customer needs, leading to increased loyalty, satisfaction, and growth.
Job Description
The Data Engineer plays a crucial role in the CLM team by designing, implementing, and maintaining the data infrastructure that supports the team's analytics and data science initiatives. This position is responsible for developing and optimizing data pipelines, ensuring data quality and accessibility, and collaborating with data scientists and analysts to enable efficient data-driven decision-making. The Data Engineer will work on integrating data from various sources, implementing data governance practices, and creating scalable solutions that support the CLM team's objectives in enhancing customer experiences and driving business growth.
Key Responsibilities:
- Data Pipeline Development: Design, implement, and maintain efficient ETL (Extract, Transform, Load) processes to integrate data from various sources. Optimize existing data pipelines for improved performance and scalability.
- Data Warehouse Management: Develop and maintain the data warehouse architecture, ensuring it meets the needs of the CLM team. Implement data modeling techniques to optimize data storage and retrieval.
- Data Quality Assurance: Implement data quality checks and monitoring systems to ensure the accuracy and reliability of data used in analytics and reporting. Develop and maintain data documentation and metadata.
- Big Data Technologies: Utilize big data technologies (e.g., Hadoop, Spark) to process and analyze large volumes of customer data efficiently. Implement solutions for real-time data processing when required.
- Data Governance: Collaborate with relevant stakeholders to implement data governance policies and procedures. Ensure compliance with data privacy regulations and internal data management standards.
- Infrastructure Optimization: Continuously assess and optimize the data infrastructure to improve performance, reduce costs, and enhance scalability. Implement automation solutions to streamline data processes.
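The pipeline and data-quality responsibilities above can be illustrated with a minimal, self-contained Python sketch. This is an illustrative example only, not StarHub code: the field names (`customer_id`, `plan`, `monthly_spend`) and the extract/load stubs are hypothetical stand-ins for real source systems and warehouse tables.

```python
# Minimal ETL sketch: extract raw records, transform them with basic
# data-quality checks, and "load" the clean rows. All field names and
# sample data here are hypothetical, for illustration only.

def extract():
    # In practice this would read from a source system (API, DB, files).
    return [
        {"customer_id": "C001", "plan": "broadband", "monthly_spend": "45.90"},
        {"customer_id": "C002", "plan": "mobile", "monthly_spend": "not-a-number"},
        {"customer_id": None, "plan": "mobile", "monthly_spend": "22.00"},
    ]

def transform(records):
    """Cast types and separate records that fail quality checks."""
    clean, rejected = [], []
    for rec in records:
        try:
            if not rec["customer_id"]:
                raise ValueError("missing customer_id")
            clean.append({
                "customer_id": rec["customer_id"],
                "plan": rec["plan"],
                "monthly_spend": float(rec["monthly_spend"]),
            })
        except (ValueError, TypeError) as exc:
            # Rejected rows are kept with a reason, so data-quality
            # monitoring can report on them rather than drop them silently.
            rejected.append((rec, str(exc)))
    return clean, rejected

def load(records):
    # In practice this would write to a warehouse table.
    return len(records)

if __name__ == "__main__":
    clean, rejected = transform(extract())
    print(f"loaded={load(clean)} rejected={len(rejected)}")
```

In a production setting the same extract/transform/load shape would typically be expressed in an orchestrated framework (e.g. Spark jobs scheduled by a workflow tool), with the quality checks feeding a monitoring dashboard instead of a simple rejected-rows list.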
Qualifications
Education Level:
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field. Master's degree in a relevant field is preferred.
Required Experience and Knowledge:
- 3-5 years of experience in data engineering or a related field.
- Strong knowledge of data warehouse concepts, ETL processes, and data modeling techniques.
- Experience with cloud-based data platforms (e.g., AWS, Snowflake).
- Proficiency in SQL and experience with NoSQL databases.
- Experience with big data technologies such as Hadoop, Spark, or Kafka.
- Knowledge of data governance principles and data privacy regulations.
Job-Specific Technical Skills:
- Proficiency in Python or Scala for data processing and automation.
- Experience with ETL tools (e.g., Apache NiFi, Talend, Informatica).
- Knowledge of data visualization tools (e.g., Tableau, Power BI) to support data quality checks and pipeline monitoring.
- Familiarity with version control systems (e.g., Git) and CI/CD practices.
- Experience with container technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Understanding of data security best practices and implementation.
Behavioural Skills:
- Strong problem-solving and analytical skills.
- Excellent communication abilities to collaborate with technical and non-technical team members.
- Proactive approach to identifying and resolving data-related issues.
- Ability to manage multiple projects and priorities effectively.
- Detail-oriented with a focus on data quality and system reliability.
- Adaptability to work with evolving technologies and changing business requirements.
- Strong teamwork skills and ability to work in a collaborative environment.