Data Engineer

PK-Islamabad-345

Telenor

Do you need a mobile phone, a mobile subscription, TV, or faster broadband? Check prices on phones, subscriptions, and services from Telenor.



Job Description 1: Data Engineer

Application deadline: 27th October 2024

Position Title: Data Engineer

Reporting to: Amena Shakil

Job Group: 2/3

Location: Pakistan

Division: TSS Pakistan

Department/Unit: Automations/Integration Hub

About the Position

A Data Engineer is responsible for designing, constructing, and maintaining scalable data pipelines, catalogues, and architectures that support data-driven decision-making within the organization. The role involves transforming raw data into a format suitable for analysis and integrating it into data storage solutions.

A Data Engineer specializing in Python builds and manages data pipelines, turns data into actionable insights, and ensures data integrity across platforms. This requires a deep understanding of data architecture and ETL processes, and the ability to work with large datasets.

Goals

  • Develop and maintain robust data pipelines to collect, process, and analyze large volumes of data from various sources.
  • Design and implement data storage solutions, including data fabrics, data lakes, and data warehouses, ensuring optimal performance and security.
  • Collaborate with data analysts to understand data needs and provide support for data access and analysis.
  • Monitor data pipelines for performance and reliability, implementing necessary optimizations and troubleshooting issues as they arise.
  • Ensure data quality and integrity by implementing data validation and cleansing processes.
  • Document data architecture, pipeline processes, and data lineage to ensure clarity and compliance with best practices.
  • Stay updated with industry trends and emerging technologies to continuously improve data engineering practices.
  • Design and implement scalable data pipelines in Python to extract, transform, and load data from multiple sources (a minimal sketch follows this list).
  • Monitor and troubleshoot pipeline performance, implementing improvements as needed.
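
To illustrate the kind of pipeline work these goals describe, here is a minimal extract-transform-load sketch in plain Python. It is not taken from the posting: the source file, table name, and field names are hypothetical placeholders, and a production pipeline would use the team's actual tooling.

```python
import csv
import sqlite3
from datetime import datetime, timezone

SOURCE_CSV = "events.csv"   # hypothetical source extract
DB_PATH = "warehouse.db"    # hypothetical target store

def extract(path):
    """Read raw rows from a CSV source."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Cleanse and validate: drop rows missing a user_id, stamp load time."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    for row in rows:
        if not row.get("user_id"):
            continue  # basic data-quality gate
        yield (row["user_id"], row.get("event", "unknown"), loaded_at)

def load(records, db_path):
    """Append cleansed records to a staging table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS staging_events "
            "(user_id TEXT, event TEXT, loaded_at TEXT)"
        )
        conn.executemany("INSERT INTO staging_events VALUES (?, ?, ?)", records)

if __name__ == "__main__":
    load(transform(extract(SOURCE_CSV)), DB_PATH)
```

Because each stage is a generator, rows stream through the pipeline without being held in memory all at once, which is the same property larger frameworks such as Spark provide at scale.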

Eligibility Criteria

1. Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

Experience / Skills

  • 5-7 years of experience in data engineering, data architecture, or a related field.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Strong experience with data storage technologies such as SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra).
  • Familiarity with ETL tools and data integration frameworks (e.g., Apache Spark, Apache NiFi, Airflow, Microsoft Fabric); a minimal Airflow example follows this list.
  • Knowledge of cloud platforms (AWS, Azure, GCP) and data-related services (e.g., AWS Redshift, Azure Data Lake).
  • Understanding of data modeling, data warehousing concepts, and data governance.
  • Excellent problem-solving skills and the ability to work independently and collaboratively.
  • Strong communication skills to effectively collaborate with cross-functional teams.
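
As a hedged illustration of the orchestration tools named above, the sketch below defines a minimal Airflow DAG using the TaskFlow API (assuming Airflow 2.4+ is installed). The DAG id, schedule, and task bodies are invented placeholders, not details from this posting.

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract():
        # placeholder: pull raw records from a source system
        return [{"user_id": "u1", "event": "login"}]

    @task
    def transform(rows):
        # placeholder: basic cleansing/validation
        return [r for r in rows if r.get("user_id")]

    @task
    def load(rows):
        # placeholder: write to a warehouse or lake table
        print(f"loading {len(rows)} rows")

    load(transform(extract()))

example_etl()
```

Data passed between tasks here travels through Airflow's XCom mechanism, so in practice large datasets are handed off by reference (e.g., a table or file path) rather than by value.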
