Azure Databricks Data Engineer | 6 to 8 Years | Gyansys | Hybrid
Bangalore, Karnataka, India
PradeepIT
PradeepIT, supported by Asia's largest tech professional network, is revolutionizing global talent acquisition. Discover the potential of hiring top Asian tech talent at ten times the speed, starting today!

Job Description: Data Engineer (Azure Databricks & PySpark)
Position: Data Engineer
Experience: 6 to 8 years
Primary Skills: Azure Databricks, PySpark, SQL (Mandatory)
Secondary Skills: ADF (Azure Data Factory) (Mandatory)
Project Exposure: Cloud migration (Mandatory)
Location: Bengaluru/Hyderabad
Mode of Work: Hybrid
Salary: INR 13 to 17 lakh
Notice Period: 15 to 30 Days (Mandatory)
Databricks Engineer Job Description
Responsibilities:
- Work as part of a globally distributed team to design and implement Hadoop-based big data solutions in alignment with business needs and project schedules.
- Bring 5+ years of experience in data warehousing/engineering and in software solution design and development.
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
- Work with other Big Data developers to ensure that all data solutions are consistent.
- Partner with the business community to understand requirements, determine training needs, and deliver user training sessions.
- Perform technology and product research to better define requirements, resolve important issues, and improve the overall capability of the analytics technology stack.
- Evaluate and provide feedback on future technologies and new releases/upgrades.
- Support Big Data and batch/real-time analytical solutions leveraging transformational technologies (a short PySpark streaming sketch follows this list).
- Work on multiple projects as a technical team member, or drive:
  - User requirement analysis and elaboration
  - Design and development of software applications
  - Testing and build automation tools
- Research and incubate new technologies and frameworks.
- Apply agile or other rapid application development methodologies, using tools such as Bitbucket, Jira, and Confluence.
- Build solutions on public cloud providers such as AWS, Azure, or GCP.
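As a rough illustration of the batch/real-time responsibility above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and writes them to a Delta table. The broker address, topic, schema, and paths are hypothetical placeholders, not details of this role; on Databricks, the Kafka connector and Delta Lake are available out of the box.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# On Databricks the runtime supplies `spark`; building a session here
# just keeps the sketch self-contained.
spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Hypothetical event schema -- placeholder fields only.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Continuously append parsed events to a Delta table, with a checkpoint
# for exactly-once recovery (both paths are placeholders).
(events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start("/tmp/delta/events"))
```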
Expertise Required:
- Hands-on experience with the Databricks stack
- Data engineering technologies (e.g., Spark, Hadoop, Kafka)
- Proficiency in Streaming technologies
- Hands-on experience in Python and SQL
- Expertise in implementing Data Warehousing solutions
- Expertise in any ETL tool (e.g., SSIS, Redwood)
- Good understanding of submitting jobs via Databricks Workflows, the REST API, and the CLI (a brief sketch follows)
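For the Workflows/API/CLI point above, a minimal sketch of triggering an existing Databricks job through the Jobs 2.1 REST API (POST /api/2.1/jobs/run-now). The workspace URL, token, and job ID are placeholders; the equivalent CLI command is `databricks jobs run-now`.

```python
import requests

# Trigger an existing Databricks job via the Jobs 2.1 REST API.
# HOST, TOKEN, and job_id are placeholders -- supply your own values.
HOST = "https://<workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 123},  # hypothetical job ID
)
resp.raise_for_status()
# run-now returns the ID of the run it just triggered.
print("Triggered run:", resp.json()["run_id"])
```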