Senior Data Engineer

Telangana, India

Chubb


Looking for candidates who can join quickly, within a 30-day timeframe.

 

Experience: 5-7 years

 

  • Design and implement end-to-end data processing pipelines using Azure Data Factory and Databricks. This involves extracting data from various sources, transforming it to meet specific business requirements, and loading it into target data repositories.
  • Integrate data from on-premises systems, third-party sources, and external APIs using Azure Data Factory. Develop efficient workflows and data transfer mechanisms to ensure high-quality, consistent, and reliable data ingestion.
  • Optimize data processing pipelines and SQL queries to improve performance and reduce latency. Utilize query optimization techniques and data partitioning strategies to accelerate data processing.
  • Implement and manage data repositories on Azure, such as Azure Data Lake Storage or Azure Synapse Analytics (formerly Azure SQL Data Warehouse). Define and enforce data governance policies and security measures to protect sensitive data.
  • Monitor data pipelines, jobs, and workflows to identify and resolve any issues or bottlenecks. Perform regular performance tuning, error handling, and troubleshooting to ensure smooth data processing.
  • Stay updated with the latest advancements in Azure Data Factory and Databricks technologies. Continuously identify areas for improvement, automate repetitive processes, and enhance data engineering practices to drive efficiency and deliver high-quality data solutions.
  • Collaborate with solution architects to design scalable and reliable data architectures in Azure, incorporating Azure Data Factory and Databricks components. Provide guidance on data modeling, data flow, and data integration patterns.
  • Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Document data pipelines, workflows, and technical specifications for future reference and knowledge sharing.
  • Implement data governance and security best practices to ensure compliance with regulatory requirements and industry standards. Establish data access controls, data encryption, and data classification mechanisms.
  • Stay up to date with industry trends to leverage advanced capabilities and optimize data engineering practices.
  • Knowledge of Snowflake is a plus.

