Senior Associate Data Engineer (PySpark, Python), Data & Analytics Advisory, Bangalore

Bengaluru Millenia

Applications have closed

PwC

We are a community of solvers combining human ingenuity, experience and technology innovation to help organisations build trust and deliver sustained outcomes.

Line of Service

Advisory

Industry/Sector

Not Applicable

Specialism

Data, Analytics & AI

Management Level

Senior Associate

Job Description & Summary

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:

  • Deliver projects integrating data flows within and across technology systems.
  • Lead data modeling sessions with end user groups, project stakeholders, and technology teams to produce logical and physical data models.
  • Design end-to-end job flows that span systems, including quality checks and controls (a minimal PySpark sketch follows this list).
  • Create technology delivery plans to implement system changes.
  • Perform data analysis, data profiling, and data sourcing in relational and Big Data environments.
  • Convert functional requirements into logical and physical data models.
  • Assist in ETL development, testing, and troubleshooting ETL issues.
  • Troubleshoot data issues and work with data providers for resolution; provide L3 support when needed.
  • Design and develop ETL workflows using modern coding and testing standards.
  • Participate in agile ceremonies and actively drive towards team goals.
  • Collaborate with a global team of technologists.
  • Lead with ideas and innovation.
  • Manage communication and partner with end users to design solutions.
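
For orientation, the end-to-end job flow with quality checks described above might look like the following minimal PySpark sketch. It is illustrative only: the paths, column names, and the 1% null-key tolerance are assumptions, not part of the role description.

```python
# Minimal, hypothetical PySpark job flow: extract, transform, quality gate, load.
# All paths, columns, and thresholds below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: read raw data landed by an upstream system.
raw = spark.read.parquet("/landing/orders/")

# Transform: standardise types and derive business fields.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Quality check and control: fail the run if too many rows lack a key.
total = orders.count()
missing = orders.filter(F.col("order_id").isNull()).count()
if total == 0 or missing / total > 0.01:  # 1% tolerance is an assumption
    raise ValueError(f"Quality gate failed: {missing}/{total} rows with null keys")

# Load: write partitioned output for downstream consumers.
orders.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")
```

In a production pipeline the quality gate would typically be a reusable control with alerting rather than an inline check, but the shape of the flow is the same.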

Required Skills:

Must have: 4-10 years of total experience (minimum 5 years of relevant experience)

  • 5 years of project experience with Python/Shell scripting in data engineering, including building and optimizing data pipelines, architectures, and data sets at large data volumes.
  • 3+ years of experience in PySpark scripting, including a solid understanding of Spark's architecture.
  • 3-5 years of strong experience in database development (Snowflake/SQL Server/Oracle/Sybase/DB2): schema design, complex procedures, complex data scripts, query authoring (SQL), and performance optimization (illustrated in the sketch after this list).
  • Strong understanding of the Unix environment and batch scripting languages (Shell/Python).
  • Strong knowledge of Big Data/Hadoop platforms.
  • Strong engineering skills with the ability to understand existing system designs and enhance or migrate them.
  • Strong logical data modeling skills within the Financial Services domain.
  • Experience in data integration and data conversions.
  • Strong collaboration and communication skills.
  • Strong organizational and planning skills.
  • Strong analytical, profiling, and troubleshooting skills.
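
As a concrete, hypothetical illustration of the query-authoring and performance-optimization points above, the sketch below pairs a window-function query with a broadcast join. Table and column names are invented for the example, and it assumes `trades` and `accounts` are already registered as tables.

```python
# Hypothetical example of SQL query authoring plus one common Spark
# performance optimization; table and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("sql_examples").getOrCreate()

# Query authoring: latest trade per account via a window function.
latest_trades = spark.sql("""
    SELECT account_id, trade_id, trade_ts, amount
    FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id
                   ORDER BY trade_ts DESC, trade_id DESC
               ) AS rn
        FROM trades t
    ) ranked
    WHERE rn = 1
""")

# Performance optimization: broadcast a small dimension table so the join
# avoids shuffling the large fact table across the cluster.
accounts = spark.table("accounts")  # assumed to be a small reference table
enriched = latest_trades.join(broadcast(accounts), "account_id", "left")
enriched.explain()  # verify that the physical plan shows a broadcast join
```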

Good to Have:

  • Experience with ETL tools (e.g., Informatica, Azure Data Factory) and pipelines across disparate sources is a plus.
  • Experience working with Databricks is a plus.
  • Familiarity with standard Agile & DevOps methodology & tools (Jenkins, Sonar, Jira).
  • Good understanding of developing ETL processes using Informatica or other ETL tools.
  • Experience working with Source Code Management solutions (e.g., Git).
  • Knowledge of Investment Management Business.
  • Experience with job scheduling tools (e.g., Autosys).
  • Experience with data visualization software (e.g., Tableau).
  • Experience with data modeling tools (e.g., Power Designer).
  • Basic familiarity with using metadata stores (e.g., Collibra) to maintain a repository of Critical Data Elements.
  • Familiarity with XML or other markup languages (a short parsing example follows this list).
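
Since the list above closes with markup languages, here is a short example of basic XML handling using only the Python standard library; the positions feed below is invented for illustration.

```python
# Parse a small, invented XML positions feed with the standard library.
import xml.etree.ElementTree as ET

sample = """
<positions asOf="2024-01-01">
  <position account="A1" symbol="ABC" quantity="100"/>
  <position account="A2" symbol="XYZ" quantity="250"/>
</positions>
"""

root = ET.fromstring(sample.strip())
print(root.get("asOf"))  # attribute on the root element
for pos in root.findall("position"):
    print(pos.get("account"), pos.get("symbol"), int(pos.get("quantity")))
```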

Mandatory skill sets:

ETL, Python/Shell scripting, building pipelines, PySpark, databases, SQL

Preferred skill sets:

Informatica, Hadoop, Databricks, Collibra

Years of experience required:

4 to 10 years

Education qualification:

Graduate Engineer or Management Graduate

Education (if blank, degree and/or field of study not specified)

Degrees/Field of Study required: Bachelor of Business Administration, Bachelor of Commerce, Bachelor of Engineering

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills

Data Engineering, Python (Programming Language)

Optional Skills

Java

Desired Languages (If blank, desired languages not specified)

Travel Requirements

Available for Work Visa Sponsorship?

Government Clearance Required?

Job Posting End Date

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
