Senior Data Engineer (immediate role)

India - Remote

Zealogics

A digital transformation and engineering services company working with Fortune 500 customers across industries for a sustainable tomorrow.



Responsibilities
  • Able to participate in business discussions and assist in gathering data requirements. Good analytical and problem-solving skills to help address data challenges.
  • Proficiency in writing complex SQL queries for data extraction, transformation, and analysis. Knowledge of SQL functions, joins, subqueries, and performance tuning. Able to navigate source systems with minimal guidance to understand how data is related, and to use techniques like data profiling to gain a better understanding of the data. Hands-on experience with PySpark, Spark SQL, etc. (see the sketch after this list).
  • Hands-on experience in creating and managing data pipelines using Azure Data Factory. Understanding of data integration, transformation, and workflow orchestration in Azure environments.
  • Knowledge of data engineering workflows and best practices in Databricks. Able to understand existing templates and patterns for development. Hands-on experience with Unity Catalog and Databricks Workflows.
  • Proficiency in using Git for version control and collaboration in data projects.  Ability to work effectively in a team environment, especially in agile or collaborative settings. 
  • Clear and effective communication skills to articulate findings and recommendations to other team members. Ability to document processes, workflows, and data analysis results effectively.
  • Willingness to learn new tools, technologies, and techniques as the field of data analytics evolves.  Being adaptable to changing project requirements and priorities. 
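
To illustrate the kind of hands-on work described above, here is a minimal, non-authoritative PySpark sketch of a join, an aggregation, and a lightweight data-profiling check. The table names (sales.orders, sales.customers) and columns are hypothetical and used only for illustration.

    # Minimal sketch (hypothetical tables and columns): join, aggregate, profile.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-profiling").getOrCreate()

    orders = spark.table("sales.orders")        # hypothetical source table
    customers = spark.table("sales.customers")  # hypothetical source table

    # Join orders to customers and aggregate revenue per region.
    revenue_by_region = (
        orders.join(customers, on="customer_id", how="inner")
              .groupBy("region")
              .agg(F.sum("order_amount").alias("total_revenue"),
                   F.countDistinct("customer_id").alias("active_customers"))
    )

    # Lightweight profiling: row count and null rate to understand the data.
    profile = orders.select(
        F.count("*").alias("row_count"),
        F.sum(F.col("order_amount").isNull().cast("int")).alias("null_order_amount"),
    )
    profile.show()
    revenue_by_region.show()
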
Skills
  • 6+ years of overall experience with expertise in Azure technologies
  • Azure Databricks, Data Lakehouse architectures, and Azure Data Factory.
  • Expertise in optimizing data workflows and predictive modeling.
  • Designing and implementing data pipelines using Databricks and Spark.
  • Expertise in batch and streaming data solutions, automating workflows with CI/CD tools like Jenkins and Azure DevOps, and ensuring data governance with Delta Lake (a brief sketch follows this list).
  • Spark, PySpark, Delta Lake, Azure DevOps, Python.
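
As a rough illustration of the Delta Lake batch and streaming skills listed above, the following sketch writes a curated Delta table in batch and then reads it incrementally as a stream. Table names and the checkpoint path are hypothetical, and a Databricks-style environment with Spark 3.3+ is assumed.

    # Minimal sketch (hypothetical names/paths): batch and streaming with Delta Lake.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("delta-batch-and-stream").getOrCreate()

    # Batch: curate raw events and persist them as a Delta table.
    (spark.table("raw.events")                       # hypothetical source table
          .withColumn("event_date", F.to_date("event_ts"))
          .write.format("delta")
          .mode("overwrite")
          .saveAsTable("curated.events"))

    # Streaming: incrementally read the Delta table and append to a downstream table.
    (spark.readStream.table("curated.events")
          .writeStream
          .option("checkpointLocation", "/tmp/checkpoints/events")  # hypothetical path
          .trigger(availableNow=True)   # process available data, then stop
          .toTable("serving.events_latest"))
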


Category: Engineering Jobs

Tags: Agile Architecture Azure CI/CD Data analysis Data Analytics Databricks Data governance Data pipelines DevOps Engineering Git Jenkins Pipelines Predictive modeling PySpark Python Spark SQL Streaming

Regions: Remote/Anywhere Asia/Pacific
Country: India
