Technical Lead - Data Fabric
Gurgaon
Srijan Technologies
Your trusted Drupal partner Srijan (now Material) continues to help brands drive digital transformation through data, AI, cloud and platform engineering.
About us
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems through its deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.
Role Overview
As a Technical Lead, you will be responsible for modernizing and unifying data pipelines across multiple supply chain assets (DPO, IM Pro, Control Tower, VMX) onto a single platform powered by Microsoft Fabric. You will work in cross-functional agile PODs to develop scalable data workflows, integrate real-time data sources, and ensure analytics-ready datasets for advanced reporting and ML use cases.
Key Responsibilities
- Develop and maintain robust ETL/ELT pipelines using PySpark and Fabric Lakehouse (OneLake); a sketch of this pattern follows the list
- Collaborate with cross-asset PODs to unify data models and workflows
- Enable ingestion from PostgreSQL, APIs, event streams, and third-party sources
- Collaborate with frontend (React, Power BI) and backend (Python) developers to align data layer with UI and API needs
- Implement data observability, validation, and lineage tracking
- Apply Fabric optimization techniques for performance and cost-efficiency
- Contribute to DataOps processes including CI/CD, automated testing (PyTest), and pipeline orchestration
- Partner with Power BI developers and ML engineers to deliver integrated insights
- Maintain high data quality, schema consistency, and governance through Microsoft Purview
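For the pipeline work described in the first responsibility, the following is a minimal PySpark sketch of an ETL step that cleanses a source table and lands it as a Delta table in a Lakehouse. The table names, columns and cleansing rules are illustrative assumptions, not references to the actual DPO, IM Pro, Control Tower or VMX assets, and the sketch assumes an environment with Delta Lake available (as in Fabric notebooks).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes a Spark environment with Delta Lake support (e.g. a Fabric notebook).
spark = SparkSession.builder.appName("shipments_etl").getOrCreate()

# Hypothetical raw source table registered in the Lakehouse catalog
raw = spark.read.table("raw.shipments")

# Basic cleansing: deduplicate on the key, normalize timestamps, drop invalid rows
cleaned = (
    raw.dropDuplicates(["shipment_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("quantity") > 0)
)

# Land the result as a Delta table for downstream Power BI models and ML features
(
    cleaned.write
           .format("delta")
           .mode("overwrite")
           .saveAsTable("gold.shipments_cleaned")
)
```

In practice a step like this would sit inside an orchestrated pipeline with parameterized source and target names rather than being run ad hoc.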
Skills & Qualifications
- 5–8 years of experience in Data Engineering, including 2+ years with Azure Synapse and 1+ years with Microsoft Fabric
- Strong coding skills in Python, PySpark, SQL
- Proficiency in working with Delta tables, Lakehouse architecture, and OneLake
- Experience integrating with Power BI datasets and models
- Familiarity with CI/CD (Azure DevOps/GitHub Actions) and testing frameworks such as PyTest; a test sketch follows this list
- Understanding of front-end and back-end development and testing
- Understanding of AI/ML model lifecycle is a plus
- Excellent communication and cross-team collaboration skills
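To illustrate the automated-testing expectation in the CI/CD bullet above, here is a minimal PyTest sketch of data-quality checks run against a small in-memory DataFrame. The fixture, schema and rules are hypothetical placeholders; real checks would run against pipeline outputs in CI.

```python
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Local Spark session for unit tests; a CI job would use the same pattern
    return SparkSession.builder.master("local[1]").appName("pipeline-tests").getOrCreate()


def _sample_output(spark):
    # Small in-memory frame standing in for a pipeline output (hypothetical schema)
    return spark.createDataFrame(
        [("S-1", 10), ("S-2", 5)],
        ["shipment_id", "quantity"],
    )


def test_no_null_keys(spark):
    df = _sample_output(spark)
    # Primary-key column must never contain nulls
    assert df.filter(df.shipment_id.isNull()).count() == 0


def test_quantities_positive(spark):
    df = _sample_output(spark)
    # Business rule carried over from the ETL step: quantities are strictly positive
    assert df.filter(df.quantity <= 0).count() == 0
```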
Preferred Certifications (at least one of the following):
- Databricks Certified Data Engineer Associate
- Microsoft Certified: Fabric Data Engineer Associate (DP-700)
- AWS Certified Data Engineer – Associate
- Microsoft Certified: Azure Data Engineer Associate (DP-203)
What We Offer
- Professional development and mentorship.
- Hybrid work mode in a remote-friendly workplace (Great Place To Work Certified six times in a row).
- Health and family insurance.
- 40+ leaves per year, along with maternity and paternity leave.
- Wellness, meditation and counselling sessions.