Data Engineer (Flink, Kafka)
Bengaluru, Karnataka, India - Remote
FairMoney
FairMoney is a pioneering mobile banking institution specializing in extending credit to emerging markets. Established in 2017, the company currently operates primarily in Nigeria and has secured nearly €50 million in funding from renowned global investors, including Tiger Global, DST, and Flourish Ventures. FairMoney maintains a strong international presence, with offices in several countries, including France, Nigeria, Germany, Latvia, the UK, Türkiye, and India.
In alignment with its vision, FairMoney is actively constructing the foremost mobile banking platform and point-of-sale (POS) solution tailored for emerging markets. The journey began with the introduction of a digital microcredit application exclusively available on Android and iOS devices. Today, FairMoney has significantly expanded its range of services, encompassing a comprehensive suite of financial products, such as current accounts, savings accounts, debit cards, and state-of-the-art POS solutions designed to meet the needs of both merchants and agents.
We are building Engineering centres of excellence across multiple regions and are looking for smart, talented, driven engineers. This is a unique opportunity to be part of the core engineering team of a fast-growing fintech poised for more rapid growth in the coming years.
To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch this informative video.
Role and responsibilities
We are seeking a motivated and detail-oriented Junior Data Engineer to join our dynamic team. The ideal candidate will have 2-5 years of experience in the IT industry, with a strong foundation in Python programming and hands-on experience processing real-time data streams. Familiarity with technologies such as Kafka, Apache Flink, and Apache Spark is essential.
Key Responsibilities:
- Develop and maintain data pipelines to process real-time data streams.
- Write efficient and reusable Python code for data processing tasks.
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
- Implement and manage data ingestion processes using Kafka (a brief illustrative sketch follows this list).
- Utilize Apache Flink or Apache Spark for data processing and transformation tasks.
- Monitor and optimize the performance of data pipelines.
- Ensure data quality and integrity throughout the processing lifecycle.
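As a rough, hedged illustration of the kind of pipeline work described above, the minimal Python sketch below consumes messages from a Kafka topic with the kafka-python client and keeps a running per-key count. The topic name, broker address, consumer group, and "user_id" field are assumptions for illustration only, not details from this posting.

```python
# Hypothetical sketch: consume JSON events from a Kafka topic and keep a
# simple per-user running count. Topic, broker address, group id, and the
# "user_id" field are illustrative assumptions, not taken from this posting.
import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "events",                              # assumed topic name
    bootstrap_servers="localhost:9092",    # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="demo-pipeline",              # assumed consumer group
)

counts = Counter()
for message in consumer:
    event = message.value                  # dict, already deserialized above
    user = event.get("user_id", "unknown")
    counts[user] += 1
    print(f"user={user} total_events={counts[user]}")
```

In practice, logic like this would typically run inside a managed stream processor such as Apache Flink or Spark Structured Streaming rather than a hand-rolled consumer loop, which is why those frameworks appear in the requirements below.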
Requirements
- Experience: 2-5 years in the IT industry, with a focus on data engineering.
- Programming Skills: Python proficiency; experience writing clean, efficient, and maintainable code.
- Data Processing: Hands-on experience with Kafka for handling real-time data streams.
- Frameworks: Familiarity with Apache Flink or Apache Spark; knowledge of Flink CDC is a plus.
- Cloud Technologies: Exposure to AWS services, particularly S3 for data storage.
- Database Knowledge: Experience with BigQuery is advantageous.
Benefits
- Training & Development
- Family Leave (Maternity, Paternity)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Remote Work
Recruitment Process
- A screening interview with a member of the Talent Acquisition team (~30 minutes)
- A take-home assignment
- A technical interview (~60 minutes)
- A final interview with the Head of Data Engineering (~60 minutes)