Senior PySpark Developer (Data Engineering)

Pune - Hinjewadi (Ascendas), India

Synechron

Synechron is an innovative global consulting firm delivering industry-leading digital solutions to transform and empower businesses.


Software Requirements:

  • Proficiency in PySpark and Spark development.
  • Experience with Unix and HDFS (Hadoop Distributed File System).
  • Familiarity with libraries such as PyArrow.
  • Working knowledge of SQL and databases.

Overall Responsibilities:

  • Develop, optimize, and maintain data processing workflows using PySpark.
  • Collaborate with cross-functional teams to understand data requirements and provide data engineering solutions.
  • Implement and support data pipelines for large-scale data processing.
  • Ensure the reliability, efficiency, and performance of data processing systems.
  • Participate in code reviews and ensure adherence to best practices and standards.

Technical Skills (by category):

PySpark and Spark Development:

  • Proficiency in PySpark and Spark for data processing.
  • Experience with Spark modules and optimization techniques.

Unix and HDFS:

  • Basic experience with Unix operating systems.
  • Knowledge of Hadoop Distributed File System (HDFS).

Libraries and Tools:

  • Familiarity with PyArrow and other related libraries.

Databases and SQL:

  • Working knowledge of SQL and database management.
  • Ability to write complex SQL queries for data extraction and manipulation.

Experience:

  • Minimum of 6 years of professional experience in data engineering.
  • Proven experience with PySpark and Spark development, particularly for data processing.
  • Experience working with Unix, HDFS, and associated libraries.
  • Demonstrated experience with SQL and database management.

Day-to-Day Activities:

  • Develop and maintain data processing workflows using PySpark.
  • Optimize Spark jobs for performance and efficiency.
  • Collaborate with data engineers, data scientists, and other stakeholders to understand data requirements.
  • Deploy and monitor data pipelines in a Hadoop and Spark environment.
  • Perform data extraction, transformation, and loading (ETL) tasks.
  • Troubleshoot and resolve issues related to data processing and pipeline failures.
  • Participate in code reviews and contribute to the improvement of coding standards and practices.

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Relevant certifications in data engineering or big data technologies are a plus.

Soft Skills:

  • Excellent communication and interpersonal skills.
  • Strong problem-solving and analytical skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Attention to detail and a commitment to delivering high-quality work.
  • Ability to prioritize tasks and manage time effectively.
  • Collaborative and team-oriented mindset.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, 'Same Difference', is committed to fostering an inclusive culture that promotes equality, diversity, and an environment respectful to all. We strongly believe that, as a global company, a diverse workforce helps us build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.

