Senior Big Data/Spark Engineer

Gera Commerzone SEZ, Pune, India

Barclays

Barclays is a British universal bank. Our businesses include consumer banking, as well as a top-tier, global corporate and investment bank.



Job Description

Purpose of the role

To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities

  • Building and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.

  • Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.

  • Development of processing and analysis algorithms fit for the intended data complexity and volumes.

  • Collaboration with data scientists to build and deploy machine learning models.

Analyst Expectations

  • To perform prescribed activities in a timely manner and to a high standard consistently driving continuous improvement.

  • Requires in-depth technical knowledge and experience in their assigned area of expertise.

  • Thorough understanding of the underlying principles and concepts within the area of expertise.

  • They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.

  • If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.

  • OR, for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.

  • Will have an impact on the work of related teams within the area.

  • Partner with other functions and business areas.

  • Takes responsibility for the end results of a team’s operational processing and activities.

  • Escalate breaches of policies/procedures appropriately.

  • Take responsibility for embedding new policies/procedures adopted due to risk mitigation.

  • Advise and influence decision making within own area of expertise.

  • Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct.

  • Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation’s products, services and processes within the function.

  • Demonstrate an understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation’s sub-function.

  • Make evaluative judgements based on the analysis of factual information, paying attention to detail.

  • Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.

  • Guide and persuade team members and communicate complex / sensitive information.

  • Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Join us as a Senior Big Data/Spark Engineer at Barclays, where you will be responsible for supporting the successful delivery of location strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Senior Big Data/Spark Engineer, you should have experience with:

  • Advanced Big Data & Apache Spark Expertise: Demonstrated experience developing, optimizing, and troubleshooting data processing applications using Apache Spark. Proficiency in writing efficient SQL queries and implementing data transformation pipelines at scale. Must be able to analyze performance bottlenecks and implement optimization strategies.

  • Scala Programming Proficiency: Strong command of Scala with emphasis on functional programming paradigms. Experience implementing type-safe, immutable, and composable code patterns for data processing applications. Ability to leverage Scala's advanced features for building robust Spark applications.

  • Cloud & DevOps Competency: Hands-on experience with AWS data services including S3, EC2, EMR, Glue, and related technologies. Proficient with modern software engineering practices including version control (Git), CI/CD pipelines, infrastructure as code, and automated testing frameworks.

  • Problem-Solving & Analytical Skills: Exceptional ability to diagnose complex issues, perform root cause analysis, and implement effective solutions. Experience with performance tuning, data quality validation, and systematic debugging of distributed data applications.
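As a purely illustrative sketch of the functional, immutable, composable style the Scala bullet describes (all names here are hypothetical and not part of the role; a real Spark job would apply the same idea to Datasets or DataFrames rather than a List), a minimal example of a composable transformation pipeline might look like:

```scala
// Minimal sketch: each pipeline step is a pure function, so steps compose
// with andThen and the input data is never mutated. The Trade type and the
// steps below are invented for illustration only.

final case class Trade(id: String, amountPence: Long, currency: String)

object Pipeline {
  // A step maps an immutable collection to a new immutable collection.
  type Step = List[Trade] => List[Trade]

  // Filter out records with a zero amount.
  val dropZeroAmounts: Step = _.filter(_.amountPence != 0)

  // Normalise the currency code; copy builds a new record, leaving the old one intact.
  val normaliseCurrency: Step = _.map(t => t.copy(currency = t.currency.toUpperCase))

  // Compose the steps left-to-right; adding a step never changes existing ones.
  val run: Step = dropZeroAmounts.andThen(normaliseCurrency)
}

val cleaned = Pipeline.run(List(Trade("a", 0, "gbp"), Trade("b", 100, "gbp")))
// cleaned == List(Trade("b", 100, "GBP"))
```

Because every step is a plain `Function1`, each can be unit-tested in isolation, which is one reason the type-safe functional style is valued for Spark applications.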

Some other highly valued skills may include:

  • Quantexa Certification: Certified experience with the Quantexa platform and its data analytics capabilities will be highly regarded.

  • Front-End Development Experience: Familiarity with Node.js and modern JavaScript frameworks for developing data visualization interfaces or dashboards.

  • Communication Excellence: Exceptional verbal and written communication skills with the ability to translate technical concepts for diverse audiences. Experience collaborating with business stakeholders, product teams, and technical specialists.

  • Data Architecture Knowledge: Understanding of data modeling principles, schema design, and data governance practices in distributed environments.

  • Containerization & Orchestration: Experience with Docker, Kubernetes, or similar technologies for deploying and managing data applications.

You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills.

This role is based out of Pune.





Perks/benefits: Career development Team events

Region: Asia/Pacific
Country: India
