Data Engineer
India, Ahmedabad
ZURU
ZURU is on a quest to reimagine tomorrow, now spanning three core divisions — ZURU Toys, ZURU Edge and ZURU Tech.

📌 Develop and maintain complex data pipelines, ETL processes, data models, and standards for data integration and data warehousing projects, from sources to sinks, on SaaS/PaaS platforms such as Snowflake, Databricks, Azure Synapse, Redshift, and BigQuery.
📌 Develop scalable, secure, and optimized data transformation pipelines and integrate them with downstream sinks.
📌 Support technical solutions from a data flow design and architecture perspective, ensure the right direction, and propose resolutions to potential data pipeline problems.
📌 Participate in developing proofs of concept (PoCs) of key technology components for project stakeholders.
📌 Develop scalable and reusable frameworks for ingesting geospatial data sets.
📌 Develop connectors to extract data from sources and use event/streaming services to persist and process data into sinks.
📌 Collaborate with other project team members (architects, senior data engineers) to support the delivery of additional project components such as API interfaces, search, and visualization.
📌 Participate in evaluating and creating PoVs around the performance of data integration tools in the market against customer requirements.
📌 Work within an Agile delivery/DevOps methodology to deliver proofs of concept and production implementations in iterative sprints.

What are we Looking for?
✔ Must have 3+ years of experience as a Data Engineer on cloud transformation projects involving data management solutions such as AWS, Azure, GCP, Snowflake, or Databricks.
✔ Must have hands-on experience with at least one end-to-end implementation of a data lake/warehouse project using PaaS and SaaS platforms such as Snowflake, Databricks, Redshift, Synapse, or BigQuery, plus 6+ months of on-premise data warehouse/data lake implementation.
✔ Experience in data pipeline development, ETL/ELT, implementing complex stored procedures, and standard DWH and ETL concepts.
✔ Experience setting up resource monitors, RBAC controls, warehouse sizing, query performance tuning, IAM policies, and cloud networking (VPC, Virtual Network, etc.).
✔ Experience in data migration from on-premise RDBMS to cloud data warehouses.
✔ Good understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling).
✔ Hands-on experience in Python, PySpark, and programming for data integration projects.
✔ Good experience with cloud data storage (Blob, S3, object stores, MinIO), data pipeline services, data integration services, and data visualization.
✔ Ability to resolve an extensive range of complicated data pipeline problems, both proactively and as issues surface.
What do we Offer?
💰 Competitive compensation
💰 Annual performance bonus
⌛️ 5 working days with flexible working hours
🌎 Annual trips & team outings
🚑 Medical insurance for self & family
🚩 Training & skill development programs
🤘🏼 Work with the global team, make the most of the diverse knowledge
🍕 Several discussions over multiple pizza parties
A lot more! Come and discover us!