Associate Manager, Scientific Data Cloud Engineering

IND - Telangana - Hyderabad (HITEC City), India


MSD

At MSD, we're following the science to tackle some of the world's greatest health threats. Get a glimpse of how we work to improve lives.



Job Description

Associate Manager, Scientific Data Cloud Engineering

The Opportunity

  • Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare.
  • Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products.
  • Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy.

A focused group of leaders in each Tech Center helps ensure we can manage and improve each location: investing in the growth, success, and well-being of our people, making sure colleagues from every IT division feel a sense of belonging, and managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview

Our Engineering team builds core components used by our Research Labs' data analytics, visualization, and management workflows. The analysis tools and data-processing pipelines our team builds in partnership with our scientists aim to accelerate research and the discovery of new therapies for our patients.

We collect, annotate, and analyze petabytes of scientific data (multi-omics, chemistry, imaging, safety) used in biomarker research, drug safety/efficacy studies, drug target discovery, and companion diagnostic development. We help our scientists process and analyze scientific data at scale by developing highly parallelized analytical workflows run on HPC infrastructure (on-prem and cloud). We also help them manage, explore, and visualize various scientific data modalities by developing bespoke data models, bioinformatics ETL processes, and data retrieval and visualization services built on a distributed microservice architecture, FAIR data principles, SPA-style dashboards, and industry-specific, regulatory-compliant data integrity, auditing, and security access controls.
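As a rough, hypothetical sketch of the kind of parallelized per-sample analysis step described above (function and field names are illustrative, not from our codebase; real workflows run on HPC or cloud schedulers rather than a local worker pool):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_sample(sample):
    # Stand-in for a compute-heavy per-sample analysis step;
    # production workflows would dispatch this to HPC or AWS Batch.
    return {"id": sample["id"], "score": sum(sample["values"]) / len(sample["values"])}

def run_batch(samples, workers=4):
    # Fan a batch of samples out across a worker pool, preserving order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_sample, samples))

batch = [{"id": i, "values": [i, i + 1, i + 2]} for i in range(8)]
results = run_batch(batch)
print(len(results))  # one result per sample
```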

We are a creative and disciplined software engineering team, using agile practices and established technology stacks to design and develop large-scale data analytics, visualization, and management software solutions for local and cloud-hosted HPC datacenters, and to integrate third-party analytical platforms with internal data workflows to address the pressing engineering and data science challenges of life science.

We are looking for Software Engineers (SE) who can break down and solve complex problems with a strong motivation to get things done and a boots-on-the-ground, pragmatic mindset. Our engineers own their products end to end and influence how our products and technology are deployed across most aspects of drug discovery, impacting hundreds of thousands of patients around the world. We are looking for engineers who can creatively handle complex dependencies, ambiguous requirements, and competing business priorities while producing fit-for-purpose, optimal solutions.

We hope you are passionate about collaborating across the interface between hard-core software development and research and discovery data analysis.

What will you do in this role

  • Design and implement engineering tools, applications and solutions that facilitate research processes and scientific discovery in several areas of our drug discovery process.
  • Help drive the design and architecture of adopted engineering solutions with a detail-oriented mindset
  • Promote and help with the adoption of development, design, architecture, and DevOps best practices, with a particular focus on an agile delivery mindset
  • Lead and mentor small teams of developers (squads) to ensure timely, quality delivery of multiple product iterations
  • Drive product discovery and requirements clarification for ambiguous and/or undefined problems framed with uncertainty.
  • Manage technical and business dependencies and bottlenecks; balance technical constraints with business requirements; and deliver maximum business impact with solid customer experience
  • Help stakeholders with go/no-go decisions on software and infrastructure by assessing gaps in existing software solutions (internal/external), by vetting technologies/platforms and vendor products
  • Collaborate and stay organized within cross-functional teams; communicate effectively with technical and non-technical audiences; work closely with scientists, peers, and business leaders in different geographical locations to define and deliver complex engineering features.

What should you have

Education:

  • BS or MS in Computer Science/Bioinformatics

Basic Qualifications

MUST-HAVE (proficient, with 2+ years hands-on experience)

  • with at least one language: Java (preferred), Python, or C#
  • building CI/CD workflows with Jenkins or equivalent
  • using IaC frameworks (CloudFormation, Ansible, Terraform)
  • building microservice-architecture solutions with a focus on scientific data analysis and management
  • integrating AWS services (EC2/RDS/S3/Batch/KMS/ECS, etc.) into production workflows
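As a loose illustration of the IaC skills listed above, a minimal CloudFormation sketch might provision a single encrypted S3 bucket for workflow outputs (all resource names here are hypothetical, not project specifics):

```yaml
# Minimal CloudFormation sketch: one KMS-encrypted S3 bucket for pipeline outputs.
# Names are illustrative only.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example bucket for scientific workflow outputs
Resources:
  WorkflowOutputBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
```

Equivalent templates can be written in Terraform or provisioned via Ansible, per the frameworks named above.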

SHOULD-HAVE (proven hands-on experience, at least 3 years)

  • with scripting languages: Python, Bash
  • building production workflows using Java/Python
  • with the Linux command line
  • driving API-driven (REST, GraphQL, etc.), modular development of production workflows and integration with third-party vendor platforms
  • with relational data models and ETL processing pipelines using PostgreSQL (preferred), Oracle, SQL Server, or MySQL
  • building, executing, maintaining, and debugging workflow pipelines (Airflow, Nextflow, AWS Batch)
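For illustration only, a toy ETL step of the kind listed above, with sqlite3 standing in for PostgreSQL (table and column names are hypothetical):

```python
import sqlite3

def etl_load(rows, db=":memory:"):
    # Extract/transform/load sketch: drop incomplete rows, coerce types, load.
    conn = sqlite3.connect(db)
    conn.execute("CREATE TABLE IF NOT EXISTS assay (sample_id TEXT, value REAL)")
    clean = [(r["sample_id"], float(r["value"]))
             for r in rows if r.get("value") is not None]
    conn.executemany("INSERT INTO assay VALUES (?, ?)", clean)
    conn.commit()
    return conn

conn = etl_load([{"sample_id": "s1", "value": "3.5"},
                 {"sample_id": "s2", "value": None}])
print(conn.execute("SELECT COUNT(*) FROM assay").fetchone()[0])  # 1 row survives
```

In production such a step would typically run as a task inside an Airflow or Nextflow pipeline rather than as a standalone script.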

SOFT SKILLS (MUST)

  • Strong collaborator and communicator
  • Strong problem-solving skills
  • Experienced with building technical partnerships with research and business teams
NICE-TO-HAVE (prior experience)

  • with non-relational databases (Elasticsearch, etc.)
  • with building resource-intensive HPC analysis modules and/or data-processing tasks
  • with developing and deploying containerized applications (e.g., Docker, Singularity)
  • with containerization platforms: Kubernetes/Helm
  • with end-to-end testing frameworks: Robot Framework/Selenium

What we look for

Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

#HYDIT2025

Current Employees apply HERE

Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully 
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company.  No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 

Employee Status:

Regular

Relocation:

VISA Sponsorship:

Travel Requirements:

Flexible Work Arrangements:

Hybrid

Shift:

Valid Driving License:

Hazardous Material(s):


Required Skills:

Availability Management, Capacity Management, Change Controls, Design Applications, High Performance Computing (HPC), Incident Management, Information Management, Information Technology (IT) Infrastructure, IT Service Management (ITSM), Release Management, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Administration, System Designs


Preferred Skills:

Job Posting End Date:

08/15/2025

*A job posting is effective until 11:59:59PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.




