Senior Consultant - Tech Consulting - Big Data Engineer - Pune
Mumbai, MH, IN, 400028
EY
With our four integrated service lines of Assurance (audit and audit-related services), Tax, Consulting, and Strategy and Transactions, together with our sector knowledge, we support our clients in...
Requisition Id : 1569465
Job Description -
In this role, you will: (Principal Responsibilities)
As a key member of the technical team alongside Engineers, Data Scientists and Data Users, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:
- Software design, Scala & Spark development, automated testing of new and existing components in an Agile, DevOps and dynamic environment
- Promoting development standards, code reviews, mentoring, knowledge sharing
- Production support and troubleshooting
- Implementing tools and processes, handling performance, scale, availability, accuracy and monitoring
- Liaising with business analysts (BAs) to ensure that requirements are correctly interpreted and implemented
- Participating in regular planning and status meetings, contributing to the development process through involvement in Sprint reviews and retrospectives, and providing input into system architecture and design
- Peer code reviews
To be successful in this role, you should meet the following requirements: (Must have Requirements)
- Scala development and design using Scala 2.10+ or Java development and design using Java 1.8+.
- Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services.
- Sound working knowledge of the Unix/Linux platform.
- Hands-on experience building data pipelines using Hadoop components such as Hive, Spark and Spark SQL (a brief sketch follows this list).
- Experience with industry standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA.
- Understanding of big data modelling using both relational and non-relational techniques.
- Experience debugging code issues and communicating the highlighted differences to the development team/architects.
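As a purely illustrative sketch of the Hive/Spark SQL pipeline work referenced in this list, the Scala snippet below reads a Hive table, aggregates it with the DataFrame/Spark SQL API, and writes the result back to Hive. All database, table and column names (raw_db.trades, curated_db.daily_notional, and so on) are hypothetical and do not describe any actual EY or client codebase.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum}

object DailyNotionalPipeline {
  def main(args: Array[String]): Unit = {
    // Hive-enabled session; all database/table/column names are illustrative.
    val spark = SparkSession.builder()
      .appName("daily-notional-pipeline")
      .enableHiveSupport()
      .getOrCreate()

    // Read a raw Hive table, aggregate with the DataFrame/Spark SQL API,
    // and persist the result back to Hive.
    val trades = spark.table("raw_db.trades")
    val daily = trades
      .filter(col("status") === "SETTLED")
      .groupBy(col("trade_date"), col("desk"))
      .agg(sum(col("notional")).as("total_notional"))

    daily.write
      .mode("overwrite")
      .saveAsTable("curated_db.daily_notional")

    spark.stop()
  }
}
```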
The successful candidate will also meet the following requirements: (Good to have Requirements)
- Experience with time-series/analytics databases such as Elasticsearch.
- Experience with scheduling tools such as Airflow, Control-M.
- Understanding of, or experience with, cloud design patterns.
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban.
- Experience developing HiveQL and UDFs for analysing semi-structured/structured datasets (see the sketch after this list).
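As a rough sketch of the UDF work mentioned in the last point, the snippet below registers a simple Spark UDF in Scala and applies it to semi-structured JSON input. The payload layout and the normalise_country name are invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json, udf}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object CountryCodeUdfExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("udf-example")
      .master("local[*]") // local master for illustration only
      .getOrCreate()
    import spark.implicits._

    // Hypothetical semi-structured input: one JSON payload per row.
    val events = Seq(
      """{"user":"a1","country":"in"}""",
      """{"user":"b2","country":"IN"}"""
    ).toDF("payload")

    // A simple UDF that normalises a country code, also registered for use from SQL.
    val normaliseCountry = udf((c: String) => Option(c).map(_.trim.toUpperCase).orNull)
    spark.udf.register("normalise_country", normaliseCountry)

    // Parse the JSON payload with an explicit schema, then apply the UDF.
    val schema = StructType(Seq(
      StructField("user", StringType),
      StructField("country", StringType)
    ))

    events
      .select(from_json(col("payload"), schema).as("e"))
      .select(col("e.user"), normaliseCountry(col("e.country")).as("country"))
      .show(truncate = false)

    spark.stop()
  }
}
```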
Location - Pune
Notice Period - 0-30 Days