Data Engineer III - GBS IND

Hyderabad, India

Bank of America

What would you like the power to do? At Bank of America, our purpose is to help make financial lives better through the power of every connection.

Job Description:

About Us

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day.

One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being.

Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization.

Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services

Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations.

Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation.

In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

The Data Analytics Strategy Platform and Decision Tool team is responsible for the data strategy of the entire CSWT organization and for developing the platforms that support it. The Data Science Platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*

We're seeking a highly skilled AI/ML Platform Engineer to architect and build a modern, scalable, and secure Data Science and Analytical Platform. This pivotal role will drive end-to-end (E2E) model lifecycle management, establish robust platform governance, and create the foundational infrastructure for developing, deploying, and managing Machine Learning models across both on-premise and hybrid cloud environments.

Responsibilities*

  • Lead the architecture and design for building scalable, resilient, and secure distributed applications ensuring compliance with organizational technology guidelines, security standards, and industry best practices like 12-factor principles and well-architected framework guidelines.
  • Actively contribute to hands-on coding, building core components, APIs and microservices while ensuring high code quality, maintainability, and performance.
  • Ensure adherence to engineering excellence standards and compliance with key organizational metrics such as code quality, test coverage and defect rates.
  • Integrate secure development practices, including data encryption, secure authentication, and vulnerability management into the application lifecycle.
  • Work on adopting and aligning development practices with CI/CD best practices to enable efficient build and deployment of the application on target platforms such as VMs and/or container orchestration platforms like Kubernetes and OpenShift.
  • Collaborate with stakeholders to align technical solutions with business requirements, driving informed decision-making and effective communication across teams.
  • Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes.
  • Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams.
  • Propose and build a variety of efficient data pipelines to support ML model building and deployment.
  • Propose and build automated deployment pipelines that enable a self-service continuous deployment process for the Data Science teams.
  • Analyze, understand, execute, and resolve issues in user scripts, models, and code.
  • Perform release and upgrade activities as required.
  • Stay well versed in open-source technology and keep abreast of emerging third-party technologies and tools in the AI/ML space.
  • Firefight production incidents, propose fixes, and guide the team through day-to-day issues in production.
  • Train partner Data Science teams on frameworks and the platform.
  • Be flexible with time and shifts to support project requirements; this does not include any night shifts.
  • This position does not include any L1 or L2 (first-line support) responsibility.

Requirements*

Education*

  • Graduation / Post Graduation: BE/B.Tech/MCA/M.Tech

Certifications, if any: Full-Stack Big Data

Experience Range*

  • 11+ Years

Foundational Skills*

  • Microservices & API Development: Strong proficiency in Python, building performant microservices and REST APIs using frameworks like FastAPI and Flask.
  • API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar, e.g., Kong, Envoy) for managing and securing API traffic, including JWT/OAuth2 based authentication.
  • Observability & Monitoring: Proven ability to monitor, log, and troubleshoot model APIs and platform services using tools such as Prometheus, Grafana, or the ELK/EFK stack.
  • Policy & Governance: Proficiency with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing governance policies.
  • MLOps Expertise: Solid understanding of MLOps capabilities, including ML model versioning, registry, and lifecycle automation using tools like MLflow, Kubeflow, or custom metadata solutions.
  • Multi-Tenancy: Experience designing and implementing multi-tenant architectures for shared model and data infrastructure.
  • Containerization & Orchestration: Strong knowledge of Docker and Kubernetes for containerization and orchestration.
  • CI/CD & GitOps: Familiarity with CI/CD tools and GitOps practices for automated deployments and infrastructure management.
  • Hybrid Cloud Deployments: Understanding of hybrid deployment strategies across on-premise virtual machines and public cloud platforms (AWS, Azure, GCP).
  • Data science workbench understanding: Basic understanding of the requirements for data science workloads (Distributed training frameworks like Apache Spark, Dash, and IDE’s like Jupyter notebooks abd VScode)
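By way of illustration for the microservices and API security bullets, here is a minimal Python sketch of a JWT-protected scoring endpoint using FastAPI and PyJWT. The secret, algorithm, route, and response fields are illustrative assumptions, not details of this role or of Bank of America's stack.

    import jwt  # PyJWT
    from fastapi import Depends, FastAPI, HTTPException
    from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

    SECRET_KEY = "replace-me"  # hypothetical; in practice fetched from a vault
    ALGORITHM = "HS256"

    app = FastAPI(title="model-api")
    bearer = HTTPBearer()

    def verify_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
        # Reject requests whose bearer token is missing, expired, or forged.
        try:
            return jwt.decode(creds.credentials, SECRET_KEY, algorithms=[ALGORITHM])
        except jwt.PyJWTError:
            raise HTTPException(status_code=401, detail="invalid or expired token")

    @app.post("/v1/score")
    def score(payload: dict, claims: dict = Depends(verify_token)) -> dict:
        # Placeholder scoring logic; a real service would invoke a model here.
        return {"caller": claims.get("sub"), "n_features": len(payload), "score": 0.5}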
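For the Policy & Governance bullet, a hedged sketch of enforcing a decision through OPA's REST Data API from Python; the OPA address and the policy path ("platform/authz/allow") are assumptions for illustration only.

    import requests

    OPA_DECISION_URL = "http://localhost:8181/v1/data/platform/authz/allow"

    def is_allowed(user: str, action: str, resource: str) -> bool:
        # POST the request context to OPA's Data API and read the boolean
        # decision produced by the Rego policy behind the path above.
        resp = requests.post(
            OPA_DECISION_URL,
            json={"input": {"user": user, "action": action, "resource": resource}},
            timeout=5,
        )
        resp.raise_for_status()
        # OPA omits "result" when the rule is undefined for this input.
        return resp.json().get("result", False)

    if is_allowed("alice", "deploy", "models/fraud-scorer"):
        print("deployment permitted by policy")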
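For the MLOps bullet, a hedged sketch of model versioning and registry promotion with MLflow; the model name, the logged parameters and metrics, and a registry-enabled tracking backend (e.g., sqlite or a tracking server) are assumptions.

    import mlflow
    from mlflow.tracking import MlflowClient

    # Trivial pyfunc model so the example is self-contained.
    class EchoModel(mlflow.pyfunc.PythonModel):
        def predict(self, context, model_input):
            return model_input

    with mlflow.start_run() as run:
        mlflow.log_param("max_depth", 6)   # illustrative hyperparameter
        mlflow.log_metric("auc", 0.91)     # illustrative metric
        mlflow.pyfunc.log_model(artifact_path="model", python_model=EchoModel())

    # Register the logged artifact as a new version of a named model entry.
    mv = mlflow.register_model(f"runs:/{run.info.run_id}/model", "fraud-scorer")

    # Point the "champion" alias at the new version (MLflow 2.x alias API).
    MlflowClient().set_registered_model_alias("fraud-scorer", "champion", mv.version)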

Desired Skills*

  • Security Architecture: Understanding of zero-trust security architecture and secure API design patterns.
  • Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server.
  • Vector Databases: Familiarity with Vector databases (e.g., Redis, Qdrant) and embedding stores.
  • Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata
  • Codes solutions and unit test to deliver a requirement/story per the defined acceptance criteria and compliance requirements.
  • Utilizes multiple architectural components (across data, application, business) in design and development of client requirements.
  • Performs Continuous Integration and Continuous Development (CI-CD) activities.
  • Contributes to story refinement and definition of requirements.
  • Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle.
  • Extensive hands on supporting platforms to allow modelling and analysts go through the complete model lifecycle management (data munging, model develop/train, governance, deployment)
  • Experience with model deployment, scoring and monitoring for batch and real-time on various different technologies and platforms.
  • Experience in Hadoop cluster and integration includes ETL, streaming and API styles of integration.
  • Experience in automation for deployment using Ansible Playbooks, scripting.
  • Experience with developing and building RESTful API services in an efficient and scalable manner.
  • Design and build and deploy streaming and batch data pipelines capable of processing and storing large datasets quickly and reliably using Kafka, Spark and YARN for large volumes of data (TBs)
  • Experience designing and building full stack solutions utilizing distributed computing or multi-node architecture for large datasets (terabytes to petabyte scale)
  • Experience with processing and deployment technologies such YARN, Kubernetes /Containers and Serverless Compute for model development and training
  • Hands on experience working in a Cloud Platform (AWS/Azure/GCP) to support the Data Science
  • Effective communication, Strong stakeholder engagement skills, Proven ability in leading and mentoring a team of software engineers in a dynamic environment.
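As a flavor of the streaming-pipeline bullet above, here is a hedged PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them in Parquet with checkpointing; the brokers, topic, schema, and paths are hypothetical placeholders, not values from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("txn-stream").getOrCreate()

    # Schema of the hypothetical JSON events on the topic.
    schema = StructType([
        StructField("account_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical brokers
        .option("subscribe", "transactions")                # hypothetical topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Durable sink with checkpointing for reliable file output.
    (events.writeStream.format("parquet")
        .option("path", "/data/streams/transactions")       # hypothetical paths
        .option("checkpointLocation", "/data/streams/_chk")
        .start()
        .awaitTermination())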

Work Timings*

  • 11:30 AM to 8:30 PM IST

Job Location*

Hyderabad
