Technical Lead – AI & Data Warehouse (DWH)

Chennai, India

Pando is a global leader in supply chain technology, building the world's quickest time-to-value Fulfillment Cloud platform. Pando’s Fulfillment Cloud provides manufacturers, retailers, and 3PLs with a single pane of glass to streamline end-to-end purchase order fulfillment and customer order fulfillment to improve service levels, reduce carbon footprint, and bring down costs. As a partner of choice for Fortune 500 enterprises globally, with a presence across APAC, the Middle East, and the US, Pando is recognized as a Technology Pioneer by the World Economic Forum (WEF), and as one of the fastest growing technology companies by Deloitte.

Role
As the Technical Lead for AI and Data Warehouse at Pando, you will be responsible for building and scaling the data and AI services team. You will drive the design and implementation of highly scalable, modular, and reusable data pipelines, leveraging big data technologies and low-code implementations. This is a senior leadership position in which you will work closely with cross-functional teams to deliver solutions that power advanced analytics, dashboards, and AI-based insights.

Key Responsibilities
• Lead the development of scalable, high-performance data pipelines using PySpark or Big Data ETL pipeline technologies.
• Drive data modeling efforts for analytics, dashboards, and knowledge graphs.
• Oversee the implementation of parquet-based data lakes.
• Work on OLAP databases, ensuring optimal data structure for reporting and querying.
• Architect and optimize large-scale enterprise big data implementations with a focus on modular and reusable low-code libraries.
• Collaborate with stakeholders to design and deliver AI and DWH solutions that align with business needs.
• Mentor and lead a team of engineers, building out the data and AI services organization.



Requirements

• 8-10 years of experience in big data and AI technologies, with expertise in PySpark or similar Big Data ETL pipeline technologies.
• Strong proficiency in SQL and OLAP database technologies.
• Firsthand experience with data modeling for analytics, dashboards, and knowledge graphs.
• Proven experience with parquet-based data lake implementations.
• Expertise in building highly scalable, high-volume data pipelines.
• Experience with modular, reusable, low-code-based implementations.
• Involvement in large-scale enterprise big data implementations.
• Self-starter with strong motivation and the ability to lead a growing team.

Preferred
• Experience leading a team or building out a new department.
• Experience with cloud-based data platforms and AI services.
• Familiarity with supply chain technology or fulfillment platforms is a plus.
Join us at Pando and lead the transformation of our AI and data services, delivering innovative solutions for global enterprises!



