Data Science Tools Admin (L09)

Hyderabad, India

Synchrony


Job Description:

Role Title: Data Science Tools Admin (L09)

Company Overview:

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.

  • We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work (GPTW). We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and the Top 25 Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top-Rated Companies for Women, and named among Top-Rated Financial Services Companies.

  • Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members.

  • We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being.

  • We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview:

This role will be part of the Data Architecture & Analytics group within the CTO organization.

  • The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL).

  • The team owns and manages the tools and platforms that provide an environment for designing and building data solutions.

  • It collaborates with cross-functional teams to integrate new data sources and ensure data quality and consistency.

  • It builds and maintains data models to facilitate data access and analysis by Data Scientists and Analysts.

Role Summary/Purpose:

We are looking for a strong individual contributor: a Data Science Platform Administrator who will build and manage the Data Science platform, ensuring its smooth functioning, performance, security, and compliance with IT policies. This includes managing user access, performing maintenance and updates, monitoring system health, and troubleshooting technical issues related to data storage, processing, and analysis tools within the platform. The role requires strong technical skills in Python, cloud computing, and data science tools such as Anaconda and H2O.

Key Responsibilities:

  • Manage the Anaconda Enterprise, H2O, and Hadoop platforms, including security, migrations, and upgrades.

  • Manage end-to-end platform and infrastructure requests with minimal direction from Functional Managers.

  • Knowledge and experience to implement and operate data science infrastructure assets (e.g., Jupyter Notebooks, Python, R libraries, Spark, Livy, Anaconda, H2O, Driverless AI, Hive, and other Hadoop ecosystem components).

  • Knowledge and experience managing Kubernetes, Docker, and DevOps tooling.

  • Participate in capacity planning and performance testing to ensure Data Science Platform environments are adequately sized and configured to meet current and projected demand.

  • Oversee administration guidelines for server upgrades, backups, patching, performance tuning, security, and application administration.

  • Manage users and groups, with knowledge of integration with Active Directory and Okta.

  • Understand and propose improvements to underlying data models and infrastructure.

  • Clearly communicate solutions to both technical and non-technical teams.

  • Develop a set of best practices and share across user groups.

  • Stay ahead of new data science capabilities and deliver training to internal functional user groups as needed.


Desired Skills/Knowledge:

  • Good to have an understanding of Data Science tools like Anaconda Enterprise and H2O

  • Good to have knowledge of PySpark, data lakes, and Hadoop components (HDFS, YARN, Hive, Spark, Livy)

  • Good to have experience with Okta, IAM, and Active Directory

  • Good to have an understanding of Kubernetes and Docker

  • Good to have experience in Agile Methodologies

  • Good to have knowledge of Splunk and New Relic monitoring tools

Eligibility Criteria:

  • Bachelor’s degree with a minimum of 2 years of Information Technology experience, or, in lieu of a degree, 4+ years of Information Technology experience.

  • Minimum of 2 years of experience with Data Science tools such as Anaconda, H2O, Driverless AI, and Hadoop.

  • Minimum of 2 years of experience with Kubernetes and Docker.

  • Minimum of 2 years of hands-on experience maintaining, optimizing, and resolving issues with Hadoop clusters, supporting business users, EDL batch processes, and real-time services.

  • Minimum of 2 years of work experience with Agile Methodologies.

Work Timings: 3 PM to 12 AM IST

(This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust locally twice a year). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)

For Internal Applicants:

  • Understand the criteria or mandatory skills required for the role, before applying

  • Inform your manager and HRM before applying for any role on Workday

  • Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format)

  • Must not be on any corrective action plan (First Formal/Final Formal, PIP)

  • L4 to L7 employees are eligible only if they have completed 12 months in the organization and 12 months in their current role and level.

  • L8+ employees are eligible only if they have completed 18 months in the organization and 12 months in their current role and level.

  • L4+ employees can apply

Grade/Level: 09

Job Family Group:

Information Technology
