Big Data Engineer with Databricks

Remote, Romania

Nagarro

A digital product engineering leader, Nagarro drives technology-led business breakthroughs for industry leaders and challengers through agility and innovation.


Company Description

👋🏼 We're Nagarro.

We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 37 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

By this point in your career, it is not just about the tech you know or how well you can code. It is about what more you want to do with that knowledge. Can you help your teammates proceed in the right direction? Can you tackle the challenges our clients face while always looking to take our solutions one step further to succeed at an even higher level? Yes? You may be ready to join us.

Job Description

  • Develop a comprehensive technical plan for the migration, including data ingestion, transformation, storage, and access control within Azure Data Factory and Azure Data Lake.
  • Design and implement scalable and efficient data pipelines to ensure smooth data movement from multiple sources using Azure Databricks.
  • Develop scalable and reusable frameworks for data ingestion.
  • Ensure data quality and integrity throughout the data pipeline, implementing robust data validation and cleansing mechanisms.
  • Work with event-based and streaming technologies to ingest and process data (see the PySpark sketch after this list).
  • Provide technical guidance and support to the team, resolving technical challenges or issues during the migration and post-migration phases.
  • Stay current with advancements in cloud computing, data engineering, and analytics technologies, recommending best practices and industry standards for implementing data lake solutions.
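
For a concrete flavor of the pipeline work above, the following is a minimal PySpark sketch of streaming ingestion with a validation step on Azure Databricks. It is an illustration under stated assumptions, not Nagarro's implementation: the storage account, container paths, checkpoint locations, and column rules are hypothetical placeholders, and `spark` is the session a Databricks notebook provides.

```python
# Minimal sketch: stream raw JSON events from a hypothetical Azure Data Lake
# landing zone into a Delta table, separating valid rows from rejects.
# All paths, table names, and column names below are illustrative placeholders.
from pyspark.sql import functions as F

# Databricks Auto Loader ("cloudFiles") incrementally picks up new files
# landed in the container by an upstream event-based process.
raw = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/schemas/orders")
    .load("abfss://landing@examplelake.dfs.core.windows.net/orders/")
)

# A simple validation rule: a row must carry a key and a non-negative amount.
# coalesce() resolves NULL amounts so every row lands in exactly one branch.
is_valid = (
    F.col("order_id").isNotNull()
    & (F.coalesce(F.col("amount"), F.lit(-1)) >= 0)
)

# Valid rows append to a bronze Delta table; rejects go to a quarantine
# path for later inspection. Each stream keeps its own checkpoint.
(raw.filter(is_valid).writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/chk/orders_bronze")
    .outputMode("append")
    .start("abfss://bronze@examplelake.dfs.core.windows.net/orders/"))

(raw.filter(~is_valid).writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/chk/orders_quarantine")
    .outputMode("append")
    .start("abfss://quarantine@examplelake.dfs.core.windows.net/orders/"))
```

In practice the quarantined stream would feed a reconciliation or alerting job; the sketch only shows the ingest-validate-persist shape.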

Qualifications

  • 5 to 7 years of IT experience.
  • Minimum of 2 years of experience working with Azure Databricks.
  • Expertise in Data Modeling and Source System Analysis.
  • Proficiency with PySpark.
  • Mastery of SQL.
  • Knowledge of Azure components: Azure Data Factory, Azure Data Lake, Azure SQL DW, and Azure SQL.
  • Experience with Python programming for data engineering purposes.
  • Ability to conduct data profiling, cataloging, and mapping for the technical design and construction of data flows (a short profiling sketch follows this list).
  • Experience with data visualization and exploration tools.
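
As a small illustration of the profiling skill above, here is a hedged PySpark sketch that computes per-column null counts and distinct cardinalities for a source table. The table name is a hypothetical placeholder, and `spark` again comes from a Databricks notebook.

```python
# Quick column profile for a source table: row count, null count, and
# distinct cardinality per column. The table name is a placeholder.
from pyspark.sql import functions as F

df = spark.read.table("source_db.customers")  # hypothetical source table
total_rows = df.count()

for col_name in df.columns:
    nulls, distinct = df.agg(
        F.sum(F.col(col_name).isNull().cast("int")),  # how many NULLs
        F.countDistinct(col_name),                    # cardinality
    ).first()
    print(f"{col_name}: rows={total_rows}, nulls={nulls}, distinct={distinct}")
```

Output like this typically feeds the source-to-target mapping that drives the technical design of the data flows.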