Principal Engineer (Big Data)

Remote, India

Nagarro

A digital product engineering leader, Nagarro drives technology-led business breakthroughs for industry leaders and challengers through agility and innovation.

Company Description

👋🏼 We're Nagarro.

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:

  • Experience: 13+ Years
  • Proven experience in designing and implementing complex data solutions aligned with business objectives.
  • Expertise in Azure Databricks architecture, data modelling, integration, security, and governance.
  • Hands-on experience guiding virtual data model definition and defining data virtualisation architecture and deployment, with a focus on Azure, Databricks, and PySpark.
  • Prior experience establishing best practices for business optimisation.
  • Experience with relational and non-relational data stores (Hadoop, SQL, MongoDB), ETL/ELT tools (SSIS, Informatica, Matillion, dbt), DevOps, and Data Lake and Data Fabric concepts.
  • In-depth experience with data governance, data integration and related technologies.
  • Proficiency in a variety of database technologies, both relational and non-relational.
  • Knowledge of cloud-based data solutions (e.g. Azure).
  • Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks (see the ETL sketch after this list).
  • Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
  • Develop complex SQL queries to optimize database performance and troubleshoot issues in Hadoop ecosystem components such as Hive (see the Hive query sketch after this list).
  • Implement a real-time, event-driven architecture using Kafka for streaming data ingestion and integration with ADF (see the streaming sketch after this list).
  • Ensure high availability of the system by implementing monitoring tools such as Prometheus, Grafana, or Zabbix (see the metrics-exporter sketch after this list).
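
For illustration, a minimal PySpark sketch of the kind of pipeline described above: ADF lands raw files in storage, and a Databricks job cleanses them and persists a Delta table. The storage path, column names, and table name are hypothetical placeholders, and the sketch assumes a Databricks runtime where the delta format is available.

```python
# Minimal PySpark ETL sketch: read raw CSV files landed by an ADF copy
# activity, apply light cleansing, and persist a Delta table in Databricks.
# The abfss:// path, column names, and table name are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adf-landing-to-delta").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://landing@examplestorage.dfs.core.windows.net/orders/")
)

cleaned = (
    raw
    .dropDuplicates(["order_id"])                        # de-duplicate on the key
    .withColumn("order_date", F.to_date("order_date"))   # normalise the date column
    .filter(F.col("amount") > 0)                         # drop invalid rows
)

cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```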
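
A sketch of the Hive-oriented SQL tuning mentioned above: a common first check for a slow query is confirming that it actually prunes partitions rather than scanning the full table. The table and columns (sales.transactions, partitioned by event_date) are assumptions for illustration.

```python
# Hive-style tuning sketch: filtering on the partition column lets Spark/Hive
# skip whole partitions. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-partition-pruning")
    .enableHiveSupport()
    .getOrCreate()
)

daily_totals = spark.sql("""
    SELECT event_date,
           user_id,
           SUM(amount) AS total_amount
    FROM   sales.transactions              -- partitioned by event_date
    WHERE  event_date BETWEEN '2024-01-01' AND '2024-01-31'
    GROUP BY event_date, user_id
""")

daily_totals.explain()  # check that the physical plan shows the date predicate as a partition filter
```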
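
A minimal Structured Streaming sketch for the Kafka ingestion point above, assuming the spark-sql-kafka connector is available on the cluster; the broker address, topic, event schema, checkpoint path, and table name are all placeholders.

```python
# Structured Streaming sketch: consume JSON events from Kafka and append them
# to a Delta table. Broker, topic, schema, and paths are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders-events")
    .outputMode("append")
    .toTable("analytics.orders_stream")
)
query.awaitTermination()
```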
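
Finally, a tiny availability-probe sketch for the monitoring point above, using the prometheus_client Python library. The probed URL and ports are placeholders, and a production setup would usually scrape existing exporters rather than a hand-rolled probe; this only shows the shape of exposing a metric for Prometheus to scrape and Grafana to alert on.

```python
# Availability-probe sketch: expose a gauge that Prometheus can scrape.
# The health-check URL and ports below are illustrative placeholders.
import time
import urllib.request

from prometheus_client import Gauge, start_http_server

endpoint_up = Gauge("pipeline_endpoint_up", "1 if the pipeline health endpoint responds, else 0")

def probe(url: str) -> None:
    try:
        urllib.request.urlopen(url, timeout=5)
        endpoint_up.set(1)
    except Exception:
        endpoint_up.set(0)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        probe("http://localhost:8080/health")
        time.sleep(30)
```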

RESPONSIBILITIES:

  • Understanding the client’s business use cases and technical requirements, and converting them into a technical design that elegantly meets those requirements.
  • Mapping decisions to requirements and translating them for developers.
  • Identifying different solutions and being able to narrow down the best option that meets the client’s requirements.
  • Defining guidelines and benchmarks for NFR (non-functional requirement) considerations during project implementation.
  • Writing and reviewing design documents that explain the overall architecture, framework, and high-level design of the application for developers.
  • Reviewing architecture and design for extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
  • Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
  • Understanding and relating technology integration scenarios and applying these learnings in projects
  • Resolving issues raised during code review through exhaustive, systematic root-cause analysis, and being able to justify the decisions taken.
  • Carrying out POCs (proofs of concept) to make sure that the suggested design and technologies meet the requirements.

Qualifications

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
