Cloud, DevOps, & Data Platform Engineer
London, England, United Kingdom - Remote
uMed
Building consented patient cohorts that accelerate research and generate insights to improve outcomes for patients.

About us
uMed is a cutting-edge healthtech and data platform revolutionising clinical research. uMed combines real-world evidence (RWE) with the power of patient-generated data to address the evidence gaps in life science research.
By leveraging uMed’s ACCESS Research Platform, embedded across a global network of healthcare institutions, researchers can rapidly access and engage with patients to generate insights derived from the decentralized collection of electronic health records, clinical outcomes, patient-reported data, and biosamples.
Who we are looking for
We are looking for an experienced and skilled Cloud, DevOps, & Data Platform Engineer to join our dynamic team: someone who can take ownership of our cloud environment, manage database infrastructure and access, and contribute to backend engineering efforts when needed.
This is a hybrid role designed for someone who thrives in a startup environment, enjoys working across infrastructure and data, and is comfortable jumping into hands-on backend work. You will be responsible for managing cloud operations, ensuring platform resilience and cost efficiency, and enabling data workflows. You'll also handle AWS administration and access management, and help maintain a secure and compliant infrastructure (e.g., aligned with ISO 27001).
This role reports to the CPTO, with dotted-line reporting to the Head of Engineering.
Data Platform Enablement (Part of This Role)
In addition to core infrastructure responsibilities, this role includes supporting our data science and analytics workflows. You'll help deploy and manage tools like JupyterHub (a sketch follows below), provision secure access to cloud-based data storage (e.g., S3, RDS, Redshift), and collaborate with data stakeholders to ensure they have a smooth, reliable environment for analysis and experimentation. While this is not a full-time data platform engineering role, we're looking for someone excited to bridge infrastructure and data and enable high-impact, data-driven work across the organisation.
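For a flavour of what managing JupyterHub involves, here is a minimal sketch of a `jupyterhub_config.py` (JupyterHub's configuration file is itself Python). The spawner, image, and usernames are illustrative assumptions, not uMed's actual setup:

```python
# jupyterhub_config.py -- a minimal sketch, not uMed's actual configuration.
# Assumes JupyterHub and dockerspawner are installed; JupyterHub injects
# get_config() into this file when it loads it.
c = get_config()  # noqa: F821

# Spawn each user's notebook server in its own Docker container.
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/datascience-notebook:latest"  # illustrative image

# Restrict the hub to a named set of analysts (hypothetical usernames).
c.Authenticator.allowed_users = {"analyst.one", "analyst.two"}
c.Authenticator.admin_users = {"platform-admin"}
```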
Responsibilities:
Cloud Infrastructure & DevOps
- Use AWS Copilot CLI to deploy and manage containerized applications (ECS/Fargate).
- Configure and maintain environments, networking (VPC), IAM roles, and load balancers via Copilot-generated infrastructure.
- Monitor application and infrastructure health via CloudWatch, CloudTrail, and DataDog.
- Participate in incident response, vulnerability reporting, and platform hardening efforts.
- Support cost optimization by tracking usage trends, identifying underutilized resources, and removing unused infrastructure (see the sketch after this list).
- Define and maintain disaster recovery procedures, and help improve system resilience and recoverability.
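As a loose illustration of the cost-optimization work above, here is a minimal sketch that flags EC2 instances with low average CPU as candidates for review. It assumes boto3 is installed with AWS credentials configured; the 7-day window and 10% threshold are arbitrary illustrative choices, not uMed policy:

```python
"""Flag underutilized EC2 instances -- a minimal sketch, not uMed tooling."""
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Walk all running instances and pull their average CPU over the window.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,  # hourly datapoints
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 10.0:  # arbitrary threshold for "underutilized"
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over 7 days -- review")
```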
Database Administration
- Administer cloud-hosted databases on AWS.
- Manage user access provisioning, role management, credential rotation, and backup/restore (see the sketch after this list).
- Monitor database performance and optimize queries or schema as needed.
- Enforce audit logging, encryption, and ISO 27001-aligned security practices.
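By way of example, a minimal sketch of the credential-rotation side of this work, assuming database credentials live in AWS Secrets Manager with a rotation Lambda already attached; the secret name below is hypothetical:

```python
"""Trigger and verify rotation for a database secret -- a minimal sketch.

Assumes the secret already has a rotation Lambda configured in
AWS Secrets Manager; the secret name is a hypothetical example.
"""
import boto3

secretsmanager = boto3.client("secretsmanager")

SECRET_ID = "prod/analytics-db/credentials"  # hypothetical name

# Kick off an immediate rotation using the secret's attached Lambda.
response = secretsmanager.rotate_secret(SecretId=SECRET_ID)
print("Rotation started, new version:", response["VersionId"])

# Confirm scheduled rotation is enabled and report its rules.
meta = secretsmanager.describe_secret(SecretId=SECRET_ID)
if not meta.get("RotationEnabled"):
    raise SystemExit(f"{SECRET_ID}: rotation is not enabled")
print("Rotation rules:", meta.get("RotationRules"))
```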
TPP Hub
- Install and manage Java applications (experience with OpenAS2 is a plus).
- Troubleshoot issues on remote computers.
Data Platform Support
- Deploy and maintain JupyterHub or similar environments for data scientists and analysts.
- Provision secure access to S3 buckets, RDS, Redshift, or other analytical infrastructure (see the sketch after this list).
- Support and troubleshoot infrastructure-related issues that affect data workflows.
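As a sketch of what that provisioning can look like in practice, here is a minimal, least-privilege grant of read-only S3 access. The role, bucket, and prefix names are hypothetical, and a real policy would go through security review:

```python
"""Grant an analyst role read-only access to one S3 prefix -- a sketch.

Role, bucket, and prefix names are hypothetical examples.
"""
import json

import boto3

iam = boto3.client("iam")

ROLE_NAME = "analyst-readonly"   # hypothetical role
BUCKET = "example-analytics"     # hypothetical bucket
PREFIX = "curated/"              # hypothetical prefix

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/{PREFIX}*"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
            "Condition": {"StringLike": {"s3:prefix": [f"{PREFIX}*"]}},
        },
    ],
}

# Attach the scoped policy inline on the analyst role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="s3-curated-readonly",
    PolicyDocument=json.dumps(policy),
)
print(f"Attached read-only policy for s3://{BUCKET}/{PREFIX} to {ROLE_NAME}")
```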
Backend Development (Optional)
- Support development of backend services using Python and Django (a small sketch follows this list).
- Collaborate with the engineering team on new features, bug fixes, and technical design.
- Contribute to testing, CI/CD pipeline maintenance, and performance tuning.
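For illustration only, a minimal Django view of the kind a backend contribution might touch; the function name and endpoint are hypothetical, and a real implementation would live inside an existing Django project:

```python
# A minimal sketch of a Django health-check view -- a small backend
# contribution of the sort this role might make. Names are hypothetical.
from django.db import connection
from django.http import JsonResponse


def healthz(request):
    """Report liveness, checking that the database answers a trivial query."""
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
    return JsonResponse({"status": "ok"})
```

Wired into a load balancer target group, an endpoint like this also feeds the CloudWatch/DataDog monitoring mentioned above.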
Requirements
- 3+ years of experience designing, deploying, and maintaining AWS infrastructure, with a focus on automation and Infrastructure as Code.
- Experience with AWS Copilot CLI or equivalent container deployment tooling.
- Hands-on with EC2, RDS, S3, Lambda, ECS/Fargate, and IAM/Identity Center.
- Familiarity with CloudFormation for managing or reviewing AWS stack templates.
- Experience using Ansible for automation and configuration management.
- Proficiency with Docker for containerizing applications and services.
- Experience with CI/CD pipelines, ideally using GitHub Actions, GitLab CI/CD, or CircleCI.
- Familiarity with monitoring/logging using DataDog, CloudWatch, and alerting best practices.
- Experience managing access, roles, and credentials in alignment with security best practices.
- Familiarity with ISO 27001 or similar security compliance frameworks.
- Comfortable working in Ubuntu/Linux environments.
- Proficient in Python, especially for scripting, automation, or backend development. Experience with Django is a plus.
- Comfortable writing and debugging Bash scripts for operational tasks.
- Solid understanding of VPC networking, routing, subnets, and firewall/security group configurations (see the sketch after this list).
- Desktop/IT support experience, especially troubleshooting issues on remote computers. This is required for managing the TPP Hub.
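As one concrete example of the security-group work above, a minimal audit sketch that flags ingress rules open to the world. It assumes configured boto3 credentials; flagging 0.0.0.0/0 ingress is a common hygiene check, not a stated uMed procedure:

```python
"""Flag security groups with ingress open to the world -- a minimal sketch."""
import boto3

ec2 = boto3.client("ec2")

# Paginate through every security group in the region.
paginator = ec2.get_paginator("describe_security_groups")
for page in paginator.paginate():
    for group in page["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            open_ranges = [
                r for r in rule.get("IpRanges", [])
                if r.get("CidrIp") == "0.0.0.0/0"
            ]
            if open_ranges:
                port = rule.get("FromPort", "all")
                print(
                    f"{group['GroupId']} ({group['GroupName']}): "
                    f"port {port} open to 0.0.0.0/0 -- review"
                )
```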
Perks/benefits: Startup environment