PySpark Developer

Pune City, India


DATAECONOMY

Enabling businesses to monetize data at data speed with cutting-edge technology services and solutions: Big Data Management, Cloud Enablement, Data Science, and more.


Join DataEconomy and be part of a dynamic team driving data-driven solutions. We're seeking highly skilled PySpark developers with 4-6 years of experience for our teams in Hyderabad or Pune.

Responsibilities:

  • Design and implement robust metadata-driven data ingestion pipelines using PySpark (a brief sketch follows this list).
  • Collaborate with technical teams to develop innovative data solutions.
  • Work closely with business stakeholders to understand and translate requirements into technical specifications.
  • Conduct unit and system testing, and provide support during UAT.
  • Demonstrate strong analytical and problem-solving skills, as well as a commitment to excellence in software development.
  • Experience in the financial or banking domain is a plus.
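
To give a flavor of this work, the sketch below shows one way a metadata-driven ingestion loop might look in PySpark. It is a minimal illustration only: the metadata list, paths, formats, and table names are hypothetical placeholders, not DATAECONOMY's actual framework.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata_driven_ingestion").getOrCreate()

    # Hypothetical metadata: each entry describes one source and its target table.
    ingestion_metadata = [
        {"source_path": "s3://example-bucket/raw/customers/", "format": "parquet",
         "target_table": "staging.customers"},
        {"source_path": "s3://example-bucket/raw/orders/", "format": "csv",
         "target_table": "staging.orders", "options": {"header": "true"}},
    ]

    for entry in ingestion_metadata:
        # Build a reader from the metadata instead of hard-coding each source.
        reader = spark.read.format(entry["format"])
        for key, value in entry.get("options", {}).items():
            reader = reader.option(key, value)
        df = reader.load(entry["source_path"])
        # Write to the configured target table; overwrite mode is illustrative only.
        df.write.mode("overwrite").saveAsTable(entry["target_table"])

With this pattern, adding a new source becomes a metadata change rather than new pipeline code, which is the core appeal of the approach.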

Requirements

  • 4-8 years of experience in IT, with a minimum of 4 years of hands-on experience in Python and PySpark.
  • Solid understanding of data warehousing concepts and ETL processes.
  • Proficiency in Linux and Java is a plus.
  • Experience with code versioning tools like Git, AWS CodeCommit, and CI/CD pipelines (e.g., AWS CodePipeline).
  • Proven ability to build metadata-driven frameworks for data ingestion.
  • Familiarity with various design and architectural patterns.


Benefits

  • Opportunities for professional growth and development.
  • Be part of a dynamic and collaborative team.

