Hadoop & UNIX Shell Scripting Engineer (Data Management, Automation & Performance Tuning)

Bengaluru - GTP, India

Synechron

Synechron is an innovative global consulting firm delivering industry-leading digital solutions to transform and empower businesses.


Job Summary

Synechron is seeking a dedicated and technically skilled Hadoop & UNIX Shell Scripting Engineer to manage and optimize our Hadoop ecosystem. The role involves developing automation utilities, troubleshooting complex issues, and collaborating with vendors on platform enhancements. Your expertise will directly support enterprise data processing, performance tuning, and cloud migration initiatives, ensuring reliable and efficient data infrastructure that aligns with organizational goals.

Software Requirements

Required Skills:

  • Strong proficiency in UNIX Shell scripting with hands-on experience in developing automation utilities
  • In-depth understanding of Hadoop architecture and ecosystem components (HDFS, Hive, Spark)
  • Experience with SQL querying and database systems
  • Familiarity with Git and enterprise version control practices
  • Working knowledge of DevOps and CI/CD tools and processes

Preferred Skills:

  • Experience with Python scripting for automation and utility development
  • Knowledge of Java programming language
  • Familiarity with cloud platforms (AWS, Azure, GCP) related to Hadoop ecosystem support
  • Exposure to Hadoop vendor support and collaboration processes (e.g., Cloudera)

Overall Responsibilities

  • Develop, maintain, and enhance scripts and utilities to automate Hadoop cluster management and data processing tasks
  • Serve as the Level 3 point of contact for issues related to Hadoop and Spark platforms
  • Perform performance tuning and capacity planning to support enterprise data workloads
  • Conduct proof-of-concept tests for emerging technologies and evaluate their suitability for cloud migration projects
  • Collaborate with vendor support teams and internal stakeholders for issue resolution, feature requests, and platform improvements
  • Review and validate all changes going into production to ensure stability and performance
  • Continuously analyze process inefficiencies and develop new automation utilities to enhance productivity
  • Assist in capacity management and performance monitoring activities
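To give a flavor of the automation utilities described above, here is a hedged sketch of an HDFS capacity-check script. The threshold, report format, and sample data are illustrative assumptions, not part of the role description; a real version would parse live `hdfs dfsadmin -report` output.

```shell
#!/bin/sh
# Sketch of an HDFS capacity-check utility (illustrative only).
# A real version would read `hdfs dfsadmin -report`; a sample report
# fragment stands in here so the parsing logic can be demonstrated.

THRESHOLD=80  # hypothetical alert threshold, percent of DFS used

# Pull the integer part of the "DFS Used%" figure from report text on stdin.
used_pct() {
  awk '/^DFS Used%/ { gsub(/%/, "", $3); print int($3); exit }'
}

# Sample text in the shape of `hdfs dfsadmin -report` output (assumed format).
SAMPLE="Configured Capacity: 1099511627776 (1 TB)
DFS Used%: 86.42%
DFS Remaining%: 13.58%"

pct=$(printf '%s\n' "$SAMPLE" | used_pct)
if [ "$pct" -ge "$THRESHOLD" ]; then
  echo "ALERT: HDFS usage at ${pct}% (threshold ${THRESHOLD}%)"
else
  echo "OK: HDFS usage at ${pct}%"
fi
```

In practice a script like this would run from cron and feed an alerting channel rather than echoing to the terminal.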

Technical Skills (By Category)

Programming Languages:

  • Essential: UNIX Shell scripting
  • Preferred: Python, Java

Databases & Data Management:

  • Essential: Knowledge of SQL querying and database systems
  • Preferred: Experience with Hive, HDFS

Cloud Technologies:

  • Preferred: Basic familiarity with cloud platforms for Hadoop ecosystem support and migration

Frameworks & Libraries:

  • Not specifically applicable; focus on scripting and platform tools

Development Tools & Methodologies:

  • Essential: Git, version control, DevOps practices, CI/CD pipelines
  • Preferred: Automation frameworks, monitoring tools

Security Protocols:

  • Not explicitly specified, but familiarity with secure scripting practices and data access controls is advantageous

Experience Requirements

  • 5+ years of hands-on experience working with Hadoop clusters and UNIX shell scripting
  • Proven experience in managing enterprise Hadoop/Spark environments
  • Experience in performance tuning, capacity planning, and utility development
  • Exposure to cloud migrations or proof-of-concept evaluations is a plus
  • Background in data engineering or platform support roles preferred

Day-to-Day Activities

  • Develop and enhance UNIX shell scripts for Hadoop automation and utility management
  • Troubleshoot and resolve complex platform issues as the Level 3 point of contact
  • Work with application teams to optimize queries and data workflows
  • Engage with vendor support teams for platform issues and feature requests
  • Perform system performance reviews, capacity assessments, and tuning activities
  • Lead initiatives for process automation, efficiency improvement, and new technology evaluations
  • Document procedures, scripts, and platform configurations
  • Participate in team meetings, provide technical feedback, and collaborate across teams on platform health
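As a hedged illustration of the troubleshooting side of these activities, the sketch below counts log lines by severity and component, busiest first. The log layout (timestamp, level, component, message) and the sample entries are assumptions for demonstration; real Hadoop service logs would need their own parsing.

```shell
#!/bin/sh
# Sketch of a log-triage helper for platform support work (illustrative).
# Assumed line layout: date time LEVEL component message...

SAMPLE_LOG="2024-05-01 10:00:01 ERROR datanode Disk failure on /data3
2024-05-01 10:00:05 WARN namenode Under-replicated blocks: 12
2024-05-01 10:00:09 ERROR datanode Disk failure on /data3"

# Count occurrences of each (level, component) pair, most frequent first.
summarize() {
  awk '{ counts[$3 " " $4]++ } END { for (k in counts) print counts[k], k }' | sort -rn
}

printf '%s\n' "$SAMPLE_LOG" | summarize
```

A summary like this gives a Level 3 engineer a quick view of which component is generating the most noise before digging into individual stack traces.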

Qualifications

Educational Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • Or equivalent professional experience in data engineering, platform support, or Hadoop administration

Certifications (Preferred):

  • Certifications in the Hadoop ecosystem, Linux/UNIX shell scripting, or cloud platforms

Training & Professional Development:

  • Ongoing learning related to big data platforms, automation, and cloud migration

Professional Competencies

  • Strong analytical and troubleshooting skills
  • Excellent written and verbal communication skills
  • Proven ability to work independently with minimal supervision
  • Collaborative team player with a positive attitude
  • Ability to prioritize tasks effectively and resolve issues swiftly
  • Adaptability to evolving technologies and environments
  • Focus on quality, security, and process improvement

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality and diversity and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.

Category: Engineering Jobs

Tags: Architecture AWS Azure Big Data CI/CD Computer Science Data management DevOps Engineering GCP Git Hadoop HDFS Java Linux Pipelines Python Security Shell scripting Spark SQL

Perks/benefits: Flex hours

Region: Asia/Pacific
Country: India
