AVP, Principal Data Engineer (L11)

Remote, Central Region, India

Synchrony

Job Description:

Role Title: AVP, Principal Data Engineer (L11)

Company Overview:

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.

  • We have recently been ranked #2 among India's Best Companies to Work For by Great Place to Work. We were among the Top 50 of India's Best Workplaces in Building a Culture of Innovation for All by GPTW, and in the Top 25 of Best Workplaces in BFSI by GPTW. We have also been recognized in the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among the Top-Rated Financial Services Companies.

  • Synchrony's workforce is ~51% women and includes 105+ people with disabilities and ~50 veterans and veteran family members.

  • We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being.

  • We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization.

  • The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL).

  • The team owns and manages the tools and platforms that provide an environment for designing and building data solutions.

  • It collaborates with cross-functional teams to integrate new data sources and ensure data quality and consistency.

  • It builds and maintains data models that facilitate data access and analysis by Data Scientists and Analysts.

Role Summary/Purpose:

We are looking for a Principal Data Engineer to join Agile scrum teams and perform functional and system development for data warehouse and data lake applications supporting key business domains.

This role will be instrumental in transforming legacy systems into modern data platforms. It offers an exciting, fast-paced, constantly changing, and challenging work environment, and will play an important part in resolving and influencing high-level decisions across Synchrony.

Key Responsibilities:

  • Design, develop, and implement ETL/ELT for data warehouse and data lake applications (Cloudera Hadoop cluster/containers) using Ab Initio, Spark, Hive, Kafka, RDBMS (Oracle, MySQL), NoSQL databases (Cassandra), and public cloud solutions.

  • Participate in the agile development process, including backlog grooming, coding, code reviews, testing, and deployment.

  • Provide data analysis for data ingestion, standardization, and curation efforts, ensuring all data is understood in its business context.

  • Work with team members to achieve business results in a fast-paced and quickly changing environment.

  • Build batch and real-time data pipelines in a DevOps environment using Ab Initio and Spark.

  • Work closely with Product Owners, Product Managers, Program Managers, and Scrum Masters in a Scaled Agile framework.

  • Partner with architects to efficiently design data applications with scalability, resiliency, and speed.

  • Profile data to assist with defining data elements, propose business term mappings, and define data quality rules.

  • Work with the Data Office to ensure that data dictionaries for all ingested and created data sets are properly documented in Collibra and any other data dictionary repository

  • Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories.

  • Assist with the creation and implementation of data quality rules

  • Ensure the proper identification of sensitive data elements and critical data elements

  • Create source-to-target data mapping documents

  • Test current processes and identify deficiencies

  • Investigate program quality and make improvements to achieve better data accuracy.

  • Apply technical knowledge, industry experience, expertise, and insights to contribute to the development & execution of Engineering capabilities.

  • Stay up to date on the latest trends in data engineering, recommend best practices, and develop innovative frameworks that promote automation and avoid redundancy.

Required Skills/Knowledge:

  • Bachelor's degree in Computer Science or a similar technical field of study and a minimum of 6 years of work experience, or in lieu of a degree, 8+ years of work experience.

  • Minimum of 6 years of experience managing large-scale data platform environments (data warehouse/data lake/cloud).

  • Minimum of 6 years of programming experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Spark, HDFS, Hive, Kafka).

  • Hands-on experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Hive, Spark, Kafka).

  • Working knowledge of cloud data platforms and services such as AWS S3, Redshift, and Snowflake.

  • Familiarity with scheduling tools such as Stonebranch.

  • Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.

  • Hands-on experience writing shell scripts and complex SQL queries.

  • Familiarity with data management tools (e.g., Collibra).

  • Proficient with databases such as MySQL, Oracle, Teradata.

Desired Skills/Knowledge:

  • Demonstrated ability to work effectively in an agile team environment.

  • Experience with batch and real-time data pipelines in a DevOps environment.

  • Must be willing to work in a fast-paced environment with globally located Agile teams working in different shifts.

  • Ability to develop and maintain strong collaborative relationships at all levels across IT and Business Stakeholders.

  • Excellent written and oral communication skills. Adept at presenting complex topics, influencing, and executing with timely, actionable follow-through.

  • Experience in designing ETL pipelines to enable automated data load into AWS S3 & Redshift.

  • Prior work experience in a Credit Card/Banking/FinTech company.

  • Experience dealing with sensitive data in a highly regulated environment.

  • Demonstrated implementation of complex and innovative solutions.

  • AWS Solutions Architect/Data Engineer certification is nice to have.

Eligibility Criteria:

  • Bachelor's degree in Computer Science or a similar technical field of study and a minimum of 6 years of work experience, or in lieu of a degree, 8+ years of work experience.

  • Minimum of 6 years of experience managing large-scale data platform environments (data warehouse/data lake/cloud).

  • Minimum of 6 years of programming experience with ETL tools (Ab Initio or Informatica) and data lake technologies (Hadoop, Spark, HDFS, Hive, Kafka).

Work Timings: 3:00 PM to 12:00 AM IST

(WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)

For Internal Applicants:

  • Understand the criteria and mandatory skills required for the role before applying

  • Inform your manager and HRM before applying for any role on Workday

  • Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format)

  • Must not be on any corrective action plan (First Formal/Final Formal, PIP)

  • Only L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level: 11

Job Family Group:

Information Technology
