Data Engineer

Bugolobi Branch, Uganda

Absa Group

Absa Group offers personal, business, and wealth banking services across Africa, helping customers manage their finances securely and achieve their goals with trusted solutions.



Empowering Africa’s tomorrow, together…one story at a time.

With over 100 years of rich history and strongly positioned as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.

My Career Development Portal: Wherever you are in your career, we are here for you. Design your future. Discover leading-edge guidance, tools and support to unlock your potential. You are Absa. You are possibility.

Job Summary

Responsible for designing and maintaining secure, scalable ETL pipelines that integrate data from various banking systems, and for managing data warehouses and lakes to ensure efficient storage, backup, and replication. The role holder will support regulatory compliance through automated reporting and real-time processing for fraud detection, and will collaborate with analysts and data scientists to deliver clean, high-quality data. The role is grounded in strong data governance and architecture principles, ensuring that all systems are aligned, reliable, and optimized for performance and compliance.

Job Description

Accountability: Data Pipeline & Integration – 30%

  • Design and implement automated ETL (Extract, Transform, Load) pipelines to collect data from core banking systems, mobile apps, ATMs, and third-party APIs (a minimal sketch follows this list).

  • Standardize and transform raw data into consistent formats for downstream systems.

  • Ensure secure, encrypted data transfer and enforce access controls to protect sensitive financial information.

  • Contribute to the data architecture by defining how data flows across systems, ensuring scalability, modularity, and maintainability.
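
To make the ETL bullet above concrete, here is a minimal, illustrative sketch in Python. The endpoint, table, and field names are hypothetical placeholders, not Absa systems, and a production pipeline would add authentication, retries, and orchestration.

```
"""Minimal ETL sketch: extract transactions from a (hypothetical) REST API,
standardize them, and load them into a local warehouse table.
All endpoint, table, and field names are illustrative placeholders."""

import sqlite3
from datetime import datetime, timezone

import requests

API_URL = "https://example.internal/api/v1/transactions"  # placeholder endpoint


def extract() -> list[dict]:
    """Pull raw transaction records over HTTPS (transport encryption)."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()["transactions"]


def transform(raw: list[dict]) -> list[tuple]:
    """Standardize raw records into a consistent (id, ts, amount_ugx) shape."""
    rows = []
    for rec in raw:
        ts = datetime.fromisoformat(rec["timestamp"]).astimezone(timezone.utc)
        amount = round(float(rec["amount"]), 2)  # normalize to 2 decimal places
        rows.append((rec["id"], ts.isoformat(), amount))
    return rows


def load(rows: list[tuple]) -> None:
    """Idempotently upsert standardized rows into the target table."""
    with sqlite3.connect("warehouse.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS transactions "
            "(id TEXT PRIMARY KEY, ts TEXT, amount_ugx REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO transactions VALUES (?, ?, ?)", rows
        )


if __name__ == "__main__":
    load(transform(extract()))
```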

Accountability: Data Warehousing & Management – 25%

  • Build and manage data warehouses and data lakes to store structured and unstructured data efficiently.

  • Apply data modeling techniques and optimize storage using indexing, partitioning, and compression (see the storage sketch after this list).

  • Implement data lifecycle management, including retention, archival, and deletion policies.

  • Set up data backup and replication strategies to ensure high availability, disaster recovery, and business continuity.

  • Align storage solutions with the bank’s enterprise data architecture, ensuring compatibility with analytics, reporting, and compliance systems.
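
As an illustration of the partitioning-and-compression bullet above, the following sketch writes a small dataset as date-partitioned, Snappy-compressed Parquet using pandas and PyArrow; the columns, values, and output path are hypothetical.

```
"""Sketch: store transactions as date-partitioned, compressed Parquet,
a common layout for data-lake storage. Column names are illustrative."""

import pandas as pd

# Hypothetical transaction data; in practice this would come from the ETL layer.
df = pd.DataFrame(
    {
        "txn_id": ["t1", "t2", "t3"],
        "txn_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
        "amount_ugx": [150_000.0, 72_500.0, 9_900.0],
    }
)

# Partitioning by date prunes irrelevant files at query time;
# Snappy compression reduces storage while staying fast to decode.
df.to_parquet(
    "lake/transactions",
    engine="pyarrow",
    partition_cols=["txn_date"],
    compression="snappy",
)
```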

Accountability: Compliance & Real-Time Processing – 25%

  • Automate data preparation for regulatory reporting (e.g., KYC, AML, Basel III) using governed ETL workflows.

  • Build real-time data processing systems using tools like Apache Kafka or Spark Streaming for fraud detection and transaction monitoring (a consumer sketch follows this list).

  • Ensure data lineage, auditability, and traceability to support compliance audits and internal controls.

  • Design real-time processing components as part of the broader data architecture, ensuring they integrate seamlessly with batch systems and reporting tools.
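
The streaming bullet above can be pictured with this hedged sketch, which uses the kafka-python client to watch a hypothetical transactions topic and flag unusually large transfers with a naive threshold rule; a production fraud system would use far richer detection logic.

```
"""Sketch: real-time transaction monitoring with the kafka-python client.
The topic, broker address, field names, and threshold are placeholders."""

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

SUSPICIOUS_UGX = 50_000_000  # illustrative single-transfer threshold

for message in consumer:
    txn = message.value
    # Flag unusually large transfers for downstream review/alerting.
    if txn.get("amount_ugx", 0) > SUSPICIOUS_UGX:
        print(f"ALERT: transaction {txn.get('id')} exceeds threshold")
```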

Accountability: Collaboration, Data Quality & Governance – 20%

  • Work with data scientists and analysts to deliver clean, reliable datasets for modeling and reporting.

  • Apply validation rules, anomaly detection, and monitoring to maintain high data quality across ETL pipelines (a validation sketch follows this list).

  • Maintain metadata catalogs, data dictionaries, and lineage tracking to support transparency and governance.

  • Collaborate with data stewards and architects to enforce data governance policies and ensure alignment with the bank’s overall data strategy.
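
The data-quality bullet above might look like the following in practice: simple null checks plus a z-score outlier flag with pandas. Column names, sample values, and the z threshold are illustrative only.

```
"""Sketch: basic validation rules and anomaly flagging for pipeline output,
using pandas. All names, values, and thresholds are placeholders."""

import pandas as pd

# Hypothetical pipeline output; in practice this would be a staged dataset.
df = pd.DataFrame(
    {
        "txn_id": ["t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8"],
        "amount_ugx": [100.0, 105.0, 98.0, 102.0, 101.0, 99.0, None, 5_000_000.0],
    }
)

# Rule 1: required fields must not be null.
null_violations = df[df["amount_ugx"].isna()]

# Rule 2: flag statistical outliers with a z-score check
# (a threshold of 2.0 is illustrative; tune per dataset).
Z_THRESHOLD = 2.0
amounts = df["amount_ugx"].dropna()
z_scores = (amounts - amounts.mean()) / amounts.std()
outliers = df.loc[z_scores[z_scores.abs() > Z_THRESHOLD].index]

print(f"{len(null_violations)} null violation(s), {len(outliers)} outlier(s)")
```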

Role/person specification:

Preferred Education

  • Bachelor’s degree in Computer Science, Software Engineering, Information Technology, Data Science, Computer Engineering, Mathematics, Statistics, or a related field. (A Master’s degree is an added advantage.)

  • Relevant professional certifications in data engineering, such as Google Cloud Professional Data Engineer, Azure Data Engineer (DP-203), AWS Data Analytics Specialty, Databricks Data Engineer, or Snowflake, as well as credentials in Kafka, Kubernetes, analytics, machine learning, artificial intelligence, and cloud platforms (GCP, AWS, Azure), are considered added advantages.

Preferred Experience

  • At least 3–5 years’ experience building data pipelines, working with big data and cloud platforms, managing real-time and warehouse data systems, and collaborating with cross-functional teams.

  • Financial domain knowledge is an added advantage

Knowledge and Skills

  • Technical Proficiency: Skilled in data modeling, ETL/ELT, big data tools, programming (Python, R, SQL), data visualization, and cloud platforms.

  • Analytical & Problem-Solving: Able to manage complex datasets, optimize pipelines, and ensure data quality.

  • Communication & Collaboration: Effective in documenting workflows and working with cross-functional teams.

Education

Bachelor's Degree: Information Technology (Required)


Category: Engineering Jobs

Tags: APIs Architecture AWS Azure Banking Big Data Computer Science Data Analytics Databricks Data governance Data pipelines Data quality Data strategy Data visualization Data Warehousing ELT Engineering ETL GCP Google Cloud Kafka Kubernetes Machine Learning Mathematics Pipelines Python R Snowflake Spark SQL Statistics Streaming Unstructured data

Perks/benefits: Career development Transparency

Region: Africa
Country: Uganda
