ETL Developer - Data & Analytics
IND - Pune, India
CANPACK Group
It’s our vision to push the boundaries of what’s possible with packaging. Whatever experience you want to create, we’re here to help you create that feeling. Giorgi Global Holdings, Inc. (“GGH”) is a privately held, diversified consumer products/packaging company with approximately 11,000 employees and operations in 20 countries. GGH consists of four US-based companies (The Giorgi Companies) and one global packaging company (CANPACK).
GGH has embarked on a transformation journey to become a digital, technology-enabled, customer-centric, data- and insights-driven organization. This transformation is evolving our business, strategy, core operations, and IT solutions.
As an ETL Developer, you will be an integral part of our Data and Analytics team, working closely with the ETL Architect and other developers to design, develop, and maintain efficient data integration and transformation solutions. We are looking for a highly skilled ETL Developer with a deep understanding of ETL processes and data warehousing. The ideal candidate is passionate about optimizing data extraction, transformation, and loading workflows, ensuring high performance, accuracy, and scalability to support business intelligence initiatives.
What you will do:
1. Design, develop, test, and maintain ETL processes and data pipelines to support data integration and transformation needs.
2. Continuously improve ETL performance and reliability through best practices and optimization techniques.
3. Develop and implement data validation and quality checks to ensure the integrity and consistency of data.
4. Collaborate with ETL Architect, Data Engineers, and Business Intelligence teams to understand business requirements and translate them into technical solutions.
5. Monitor, troubleshoot, and resolve ETL job failures, performance bottlenecks, and data discrepancies.
6. Proactively identify and resolve ETL-related issues, minimizing impact on business operations.
7. Contribute to documentation, training, and knowledge sharing to enhance team capabilities.
8. Communicate progress and challenges clearly to both technical and non-technical teams.
Essential Requirements:
Bachelor’s or master’s degree in Information Technology, Computer Science, or a related field.
3-5 years of relevant experience.
Power BI, Tabular Editor/DAX Studio, and ALM/GitHub/Azure DevOps skills.
Exposure to SAP systems/modules such as SD, MD, etc. to understand functional data.
Exposure to Microsoft Fabric and Azure Synapse Analytics.
Competencies needed:
- Hands-on experience with ETL development and data integration for large-scale systems
- Experience with platforms such as Azure Synapse Analytics, Azure Data Factory, Microsoft Fabric, Amazon Redshift, or Databricks
- A solid understanding of data warehousing and ETL processes
- Advanced SQL and PL/SQL skills, including query optimization, complex joins, and window functions
- Expertise in Python (PySpark) programming with a focus on data manipulation and analysis
- Experience with Azure DevOps and CI/CD processes
- Excellent problem-solving and analytical skills
- Experience in creating post-implementation documentation
- Strong team collaboration skills
- Attention to detail and a commitment to quality
- Strong interpersonal skills, including analytical thinking, creativity, organizational ability, high commitment, initiative in task execution, and the ability to learn IT concepts quickly
If you are a current CANPACK employee, please apply through your Workday account.
CANPACK Group is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, age, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by law or not related to job requirements, unless such distinction is required by law.