Lead Data Engineer

Bangalore, India

iLink Digital




About The Company:


iLink Digital is a Global Software Solution Provider and Systems Integrator delivering next-generation technology solutions that help clients solve complex business challenges, improve organizational effectiveness, increase business productivity, realize sustainable enterprise value, and transform their businesses inside-out. iLink integrates software systems and develops custom applications, components, and frameworks on the latest platforms for IT departments, commercial accounts, application service providers (ASPs), and independent software vendors (ISVs). iLink solutions are used in a broad range of industries and functions, including healthcare, telecom, government, oil and gas, education, and life sciences. iLink's expertise includes Cloud Computing & Application Modernization, Data Management & Analytics, Enterprise Mobility, Portals, Collaboration & Social Employee Engagement, Embedded Systems, and User Experience Design.

 

What makes iLink's offerings unique is our use of pre-built frameworks designed to accelerate software development and the implementation of business processes for our clients. iLink has over 60 frameworks (solution accelerators), both industry-specific and horizontal, that can be easily customized and enhanced to address current business challenges.



Requirements

We are seeking a highly experienced Lead Data Engineer to design, develop, optimize, and maintain robust data pipelines for large-scale data processing. The ideal candidate should have deep expertise in Python, Scala, Spark, Kafka, Kubernetes, SQL administration, and database management (Postgres, Redshift). This role requires hands-on experience in CI/CD pipeline creation and maintenance, along with strong problem-solving skills and the ability to lead data engineering initiatives.

Key Responsibilities:

  • Develop & Optimize Data Pipelines – Architect, build, and enhance scalable data pipelines for high-performance processing.
  • Troubleshoot & Maintain – Identify, diagnose, and resolve data pipeline issues to ensure operational efficiency.
  • Data Architecture & Storage – Design efficient data storage and retrieval strategies using Postgres, Redshift, and other databases.
  • CI/CD Pipeline Management – Implement and maintain continuous integration and deployment strategies for smooth workflow automation.
  • Scalability & Performance Tuning – Ensure the robustness of data solutions while optimizing performance at scale.
  • Collaboration & Leadership – Work closely with cross-functional teams to ensure seamless data flow and lead engineering best practices.
  • Security & Reliability – Establish governance protocols and ensure data integrity across all pipelines.

Technical Skills Required:

  • Programming: Expert in Python and Scala
  • Big Data Technologies: Proficient in Spark, Kafka
  • DevOps & Cloud Infrastructure: Strong understanding of Kubernetes
  • SQL & Database Management: Skilled in SQL administration, Postgres, Redshift
  • CI/CD Implementation: Experience in automating deployment processes for efficient workflows


Benefits

  • Competitive salaries
  • Medical Insurance
  • Employee Referral Bonuses
  • Performance Based Bonuses
  • Flexible Work Options & Fun Culture
  • Robust Learning & Development Programs
  • In-House Technology Training





Region: Asia/Pacific
Country: India
