Senior Process Manager
Mumbai, Maharashtra, India
eClerx
eClerx is a global leader in productized services, enhancing business outcomes through technology, Artificial Intelligence, and deep domain expertise. The candidate must possess knowledge relevant to the functional area and act as a subject matter expert, providing advice in the area of expertise while driving continuous improvement for maximum efficiency. It is vital to maintain a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal and external customers by identifying and fulfilling their needs. He/she should be able to break down complex problems into logical, manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, going beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key: the ability to explain organizational objectives, assignments, and the big picture to the team, and to articulate the team's vision and clear objectives.
Senior Process Manager Roles and responsibilities:
We are seeking a talented and motivated Data Engineer to join our dynamic team. The ideal candidate will have a deep understanding of data integration processes and experience in developing and managing data pipelines using Python, SQL, and PySpark within Databricks. You will be responsible for designing robust backend solutions, implementing CI/CD processes, and ensuring data quality and consistency.
- Data Pipeline Development:
- Use Databricks features to explore raw datasets and understand their structure.
- Create and optimize Spark-based workflows.
- Create end-to-end data processing pipelines, including ingesting raw data, transforming it, and running analyses on the processed data (see the sketch after this list).
- Create and maintain data pipelines using Python and SQL.
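A minimal sketch of such an end-to-end batch pipeline in PySpark. The landing and curated paths and the column names (order_id, order_ts, amount) are illustrative assumptions, not details from this posting:

```python
# Minimal batch pipeline sketch: ingest -> transform -> analyze -> persist.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Ingest: read raw files from an assumed landing zone.
raw = spark.read.option("header", True).csv("/landing/orders/")

# Transform: type the columns, drop bad rows, deduplicate on the key.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .dropDuplicates(["order_id"])
)

# Analyze: daily revenue rollup on the processed data.
daily = (
    clean.groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Persist curated outputs for downstream consumers.
clean.write.mode("overwrite").parquet("/curated/orders/")
daily.write.mode("overwrite").parquet("/curated/orders_daily/")
```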
- Solution Design and Architecture:
- Design and architect backend solutions for data integration, ensuring they are robust, scalable, and aligned with business requirements.
- Implement data processing pipelines using various technologies, including cloud platforms, big data tools, and streaming frameworks (a streaming sketch follows this list).
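For the streaming side, a hedged sketch using Spark Structured Streaming with a Kafka source. The broker address, topic name, and storage paths are placeholders, and the job assumes the spark-sql-kafka connector is available on the cluster:

```python
# Streaming ingestion sketch; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "events")                     # placeholder topic
         .load()
         .select(F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"))
)

# Append the stream to storage, with checkpointing for fault tolerance.
query = (
    events.writeStream.format("parquet")
          .option("path", "/curated/events/")
          .option("checkpointLocation", "/checkpoints/events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()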
- Automation and Scheduling:
- Automate data integration processes and schedule jobs on servers to ensure seamless data flow (see the scheduling sketch below).
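One common pattern for server-side scheduling is a thin Python entrypoint invoked by cron; this is only an illustration, and run_pipeline is a hypothetical stand-in for the actual integration steps:

```python
# Illustrative entrypoint for a cron-scheduled integration job.
# Example crontab entry: 0 2 * * * /usr/bin/python3 /opt/jobs/run_pipeline.py
import logging
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_pipeline() -> None:
    # Hypothetical placeholder for the actual ingest/transform/load steps.
    logging.info("pipeline run completed")

if __name__ == "__main__":
    try:
        run_pipeline()
    except Exception:
        logging.exception("pipeline run failed")
        sys.exit(1)  # non-zero exit lets the scheduler or monitoring alert
```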
- Data Quality and Monitoring:
- Develop and implement data quality checks and monitoring systems to ensure data accuracy and consistency (a sketch follows).
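A hedged sketch of simple data quality checks on a curated table. The table path, the order_id key, and the checks themselves are assumptions for illustration:

```python
# Basic data quality gate: completeness, uniqueness, and volume checks.
# Path and key column are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/curated/orders/")  # illustrative path

failures = []

# Completeness: the business key must never be null.
null_keys = df.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    failures.append(f"{null_keys} rows with null order_id")

# Uniqueness: no duplicate business keys.
dupes = df.count() - df.dropDuplicates(["order_id"]).count()
if dupes > 0:
    failures.append(f"{dupes} duplicate order_id values")

# Volume: fail loudly if the load looks empty.
if df.count() == 0:
    failures.append("table is empty")

if failures:
    raise RuntimeError("Data quality checks failed: " + "; ".join(failures))
```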
- CI/CD Implementation:
- Use Jenkins and Bitbucket to create and maintain metadata and job files (see the validation sketch after this list).
- Implement continuous integration and continuous deployment (CI/CD) processes in both development and production environments to deploy data pipelines efficiently.
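A minimal sketch of a validation step a Jenkins pipeline could run on each Bitbucket push before deploying job files. The jobs/ directory layout and the required metadata fields are assumptions, not the team's actual convention:

```python
# CI validation sketch: check every job definition file before deployment.
# Directory layout and required fields are hypothetical.
import json
import pathlib
import sys

REQUIRED_FIELDS = {"name", "schedule", "entrypoint"}  # assumed metadata keys

def validate(path: pathlib.Path) -> list[str]:
    try:
        job = json.loads(path.read_text())
    except json.JSONDecodeError as exc:
        return [f"{path}: invalid JSON ({exc})"]
    if not isinstance(job, dict):
        return [f"{path}: expected a JSON object"]
    missing = REQUIRED_FIELDS - job.keys()
    return [f"{path}: missing fields {sorted(missing)}"] if missing else []

if __name__ == "__main__":
    errors = [e for p in pathlib.Path("jobs").glob("*.json") for e in validate(p)]
    for err in errors:
        print(err, file=sys.stderr)
    sys.exit(1 if errors else 0)  # non-zero exit fails the Jenkins build
```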
- Collaboration and Documentation:
- Work effectively with cross-functional teams, including software engineers, data scientists, and DevOps, to ensure successful project delivery.
- Document data pipelines and architecture to ensure knowledge transfer and maintainability.
- Participate in stakeholder interviews, workshops, and design reviews to define data models, pipelines, and workflows.
Technical and Functional Skills:
- Education and Experience:
- Bachelor’s Degree with 7+ years of experience, including at least 3 years of hands-on experience in SQL and Python.
- Technical Proficiency:
- Proficiency in writing and optimizing SQL queries in MySQL and SQL Server.
- Expertise in Python for writing reusable components and enhancing existing ETL scripts.
- Solid understanding of ETL concepts and data pipeline architecture, including CDC, incremental loads, and slowly changing dimensions (SCDs); see the SCD sketch after this list.
- Hands-on experience with PySpark.
- Knowledge of and experience with Databricks is a bonus.
- Familiarity with data warehousing solutions and ETL processes.
- Understanding of data architecture and backend solution design.
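A hedged sketch of a Type 2 SCD load in plain PySpark; on Databricks this would more typically be a Delta Lake MERGE. The table paths, the customer_id key, and the tracked address column are illustrative assumptions, the staging extract is assumed to share the dimension's business columns, and brand-new keys are omitted for brevity:

```python
# SCD Type 2 sketch: expire changed current rows, append new versions.
# Paths, key, and tracked column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2_load").getOrCreate()

dim = spark.read.parquet("/warehouse/dim_customer/")  # existing SCD2 table
incoming = spark.read.parquet("/staging/customers/")  # today's extract

# Keys whose tracked attribute changed versus the current dimension row.
changed_keys = (
    dim.filter("is_current").alias("d")
       .join(incoming.alias("s"), "customer_id")
       .filter(F.col("d.address") != F.col("s.address"))
       .select("customer_id")
)

# Expire the current rows for those keys; keep everything else untouched.
to_expire = dim.join(changed_keys, "customer_id", "left_semi").filter("is_current")
kept = dim.exceptAll(to_expire)
expired = (to_expire.withColumn("is_current", F.lit(False))
                    .withColumn("end_date", F.current_date()))

# Append the new current versions from the incoming extract.
new_rows = (
    incoming.join(changed_keys, "customer_id", "left_semi")
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True))
)

result = kept.unionByName(expired).unionByName(new_rows)
# Write to a new path: Spark cannot safely overwrite a path it is reading.
result.write.mode("overwrite").parquet("/warehouse/dim_customer_new/")
```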
- Cloud and CI/CD Experience:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with Jenkins and Bitbucket for CI/CD processes.
- Additional Skills:
- Ability to work independently and manage multiple projects simultaneously.
Perks/benefits: Career development