PySpark Developer (Cloudera Data Platform and ETL Pipelines)

Chennai, India

Synechron

Synechron is an innovative global consulting firm delivering industry-leading digital solutions to transform and empower businesses.

Software Requirements:

  • Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques
  • Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase
  • Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)
  • Familiarity with big data technologies such as Hadoop and Kafka
  • Experience with orchestration and scheduling frameworks like Apache Oozie or Airflow
  • Strong scripting skills in Linux

Overall Responsibilities:

  • Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform
  • Implement and manage data ingestion processes from various sources (e.g., relational databases, APIs, file systems)
  • Use PySpark to process, cleanse, and transform large datasets into meaningful formats
  • Conduct performance tuning of PySpark code and Cloudera components
  • Implement data quality checks, monitoring, and validation routines
  • Automate data workflows using tools like Apache Oozie or Airflow
  • Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on CDP
  • Collaborate with other data engineers, analysts, product managers, and stakeholders to understand data requirements
  • Maintain thorough documentation of data engineering processes, code, and pipeline configurations
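As a concrete illustration of the responsibilities above, the sketch below outlines a minimal PySpark ETL job: ingest raw data from HDFS, cleanse and transform it, apply a simple data-quality gate, and write a partitioned Hive table. All table names, paths, columns, and thresholds here are illustrative assumptions, not part of this posting.

```python
# Minimal ETL sketch for a CDP cluster; names, paths, and thresholds
# are hypothetical placeholders.

def null_rate(null_count: int, total: int) -> float:
    """Fraction of null values in a column; used as a simple quality gate."""
    return 0.0 if total == 0 else null_count / total

def run_orders_etl():
    # Imports kept inside the job function so the pure-Python quality
    # helper above is usable without a Spark installation.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("orders_etl")        # hypothetical job name
             .enableHiveSupport()          # lets Spark write Hive tables on CDP
             .getOrCreate())

    # Ingest: raw files landed on HDFS (hypothetical path).
    raw = spark.read.parquet("/data/raw/orders")

    # Cleanse and transform.
    cleansed = (raw
                .dropDuplicates(["order_id"])
                .withColumn("order_ts", F.to_timestamp("order_ts"))
                .filter(F.col("amount") > 0))

    # Data-quality check: fail fast if too many customer_ids are missing.
    total = cleansed.count()
    nulls = cleansed.filter(F.col("customer_id").isNull()).count()
    if null_rate(nulls, total) > 0.01:
        raise ValueError("customer_id null rate exceeds the 1% threshold")

    # Load: write a partitioned Hive table for downstream Hive/Impala queries.
    (cleansed.write.mode("overwrite")
     .partitionBy("order_date")
     .saveAsTable("analytics.orders_cleansed"))
```

In production, a job like this would typically be packaged and submitted with spark-submit, then scheduled and monitored through an orchestrator such as Oozie or Airflow.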

Technical Skills:

PySpark Development:

  • Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques

Cloudera Data Platform:

  • Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase

Data Warehousing:

  • Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)

Big Data Technologies:

  • Familiarity with Hadoop, Kafka, and other distributed computing tools

Orchestration and Scheduling:

  • Experience with Apache Oozie, Airflow, or similar orchestration frameworks
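For the Oozie option, workflows are defined in XML. The hypothetical fragment below sketches a single Spark action that submits a PySpark job to YARN and routes success and failure; all names, properties, and paths are illustrative, and the schema versions should be checked against the cluster's Oozie release.

```xml
<workflow-app name="orders_etl_wf" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-etl"/>

    <!-- Submit the PySpark job to YARN in cluster mode -->
    <action name="spark-etl">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn</master>
            <mode>cluster</mode>
            <name>orders_etl</name>
            <jar>${appPath}/etl_job.py</jar>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>orders_etl failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```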

Scripting and Automation:

  • Strong scripting skills in Linux

Experience:

  • 5+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform

Day-to-Day Activities:

  • Develop and maintain ETL pipelines using PySpark on the Cloudera Data Platform
  • Implement data ingestion processes from various sources
  • Process, cleanse, and transform large datasets using PySpark
  • Optimize PySpark code and Cloudera components for performance
  • Perform data quality checks, monitoring, and validation routines
  • Automate data workflows using orchestration tools
  • Monitor pipeline performance and troubleshoot issues
  • Collaborate with team members to understand data requirements and support data-driven initiatives
  • Maintain documentation of data engineering processes and configurations

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field

Soft Skills:

  • Strong analytical and problem-solving skills
  • Excellent verbal and written communication abilities
  • Ability to work independently and collaboratively in a team environment
  • Attention to detail and commitment to data quality

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps us build stronger, more successful businesses as a global company. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.

Perks/benefits: Equity / stock options, flex hours
