Data Warehouse Engineer
Chennai, India
Pando
Global manufacturers & retailers trust Pando to control costs, enhance customer experience, and reduce emissions. Pando's AI agents eliminate manual work, freeing up time for logistics teams to focus on strategic priorities. Pando is a global leader in
supply chain technology, building the world's quickest time-to-value
Fulfillment Cloud platform. Pando’s Fulfillment Cloud provides
manufacturers, retailers, and 3PLs with a single pane of glass to
streamline end-to-end purchase order and customer order fulfillment, improving service levels, reducing carbon footprint, and bringing down costs.
As a partner of choice for Fortune 500 enterprises globally, with a
presence across APAC, the Middle East, and the US, Pando is recognized as a
Technology Pioneer by the World Economic Forum (WEF), and as one of the
fastest-growing technology companies by Deloitte.
Role Overview
As a Junior Data Warehouse
Engineer at Pando, you’ll work within the Data & AI Services team to
support the design, development, and maintenance of data pipelines and
warehouse solutions. You'll collaborate with senior engineers and
cross-functional teams to help deliver high-quality analytics and reporting
solutions that power key business decisions. This is an excellent opportunity
to grow your career by learning from experienced professionals and gaining
hands-on experience with large-scale data systems and supply chain
technologies.
Key Responsibilities
- Assist in building and maintaining scalable data pipelines using tools like PySpark and SQL-based ETL processes (an illustrative sketch follows this list).
- Support the development and maintenance
of data models for dashboards, analytics, and reporting.
- Help manage parquet-based data lakes and
ensure data consistency and quality.
- Write optimized SQL queries for OLAP
database systems and support data integration efforts.
- Collaborate with team members to
understand business data requirements and translate them into technical
implementations.
- Document workflows, data schemas, and
data definitions for internal use.
- Participate in code reviews, team meetings, and training sessions to continuously improve your skills.
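To give a concrete flavour of the pipeline work described above, here is a minimal, purely illustrative PySpark sketch. The paths, column names, and schema are hypothetical and are not part of Pando's actual stack; the snippet simply shows the read, transform, and write-to-Parquet pattern this role supports.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order_fulfillment_etl").getOrCreate()

# Read raw order events (hypothetical source path and columns).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic cleanup: de-duplicate orders, standardise the status column,
# and drop rows missing an order date.
clean = (
    orders.dropDuplicates(["order_id"])
          .withColumn("status", F.upper(F.col("status")))
          .filter(F.col("order_date").isNotNull())
)

# Write to a parquet-based data lake, partitioned by date so downstream
# OLAP queries can prune partitions efficiently.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/"))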
Requirements
- 2–4 years of experience working with
data engineering or ETL tools (e.g., PySpark, SQL, Airflow).
- Solid understanding of SQL and basic
experience with OLAP or data warehouse systems.
- Exposure to data lakes, preferably using
Parquet format.
- Understanding of basic data modeling principles (e.g., star/snowflake schema); a minimal example follows this list.
- Good problem-solving skills and a
willingness to learn and adapt.
- Ability to work effectively in a
collaborative, fast-paced team environment.
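As a point of reference for the star-schema requirement, here is a small, hedged PySpark SQL sketch of a typical rollup over one fact table and two dimension tables. The table and column names (fact_shipments, dim_carrier, dim_date, freight_cost) are invented for illustration and do not describe Pando's actual warehouse.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_rollup").getOrCreate()

# Register curated parquet tables as views (hypothetical names and paths).
for name in ("fact_shipments", "dim_carrier", "dim_date"):
    spark.read.parquet(f"s3://example-bucket/curated/{name}/").createOrReplaceTempView(name)

# A typical star-schema query: join the fact table to its dimensions
# and aggregate freight cost per carrier per month.
monthly_cost = spark.sql("""
    SELECT d.year, d.month, c.carrier_name,
           SUM(f.freight_cost) AS total_freight_cost
    FROM fact_shipments f
    JOIN dim_carrier c ON f.carrier_key = c.carrier_key
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year, d.month, c.carrier_name
""")
monthly_cost.show()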
Preferred Qualifications
- Experience working with cloud platforms
(e.g., AWS, Azure, or GCP).
- Exposure to low-code data tools or
modular ETL frameworks.
- Interest or prior experience in the
supply chain or logistics domain.
- Familiarity with dashboarding tools like
Power BI, Looker, or Tableau.
Perks/benefits: Career development