DevOps Engineer

United Kingdom / Hybrid

Actica Consulting

A Business and Technical Consultancy



Department: Consultancy

Employment Type: Full Time

Location: United Kingdom / Hybrid


Description


As a DevOps Engineer at Actica, you will play a crucial role in modernising and maintaining mission-critical infrastructure for high-profile UK public sector organisations. You'll be responsible for implementing and managing CI/CD pipelines, cloud infrastructure, and automated deployment processes, ensuring the reliable delivery of services that impact people's everyday lives.

Locations: London, Guildford, Bristol, M4 corridor, Hybrid

Roles & Responsibilities

Working in secure, classified environments, you will design and deliver scalable, secure, and automated solutions that enable the rapid, reliable delivery of mission-critical systems. You will support innovative projects involving multi-cloud platforms, zero-trust security frameworks, and robust monitoring solutions, ensuring the reliability of nationally significant services.
Our deep expertise in public sector digital transformation and established presence across defence and government organisations offer unique opportunities to work on high-impact projects while maintaining the highest security standards. If you're passionate about modern DevOps practices and want to apply your skills to transform government infrastructure while working with cutting-edge technologies in secure environments, we should discuss how your expertise could strengthen our growing practice.
As a DevOps Engineer at Actica, you will be responsible for designing, implementing, and maintaining cloud-native infrastructure and deployment pipelines across AWS, Azure, and GCP platforms, with a focus on supporting data engineering workloads and ensuring secure, automated operations for UK public sector organisations. You will work with infrastructure as code, containerisation, and modern CI/CD practices while collaborating with our data engineering teams to optimise infrastructure for their specific needs.
Cloud Platform and Infrastructure Engineering:
  • Design, implement, and maintain multi-cloud infrastructure using Infrastructure as Code (IaC)
  • Build and maintain cloud-native data platforms using AWS services (EMR, Redshift, Glue, Lake Formation)
  • Implement Azure data solutions (Synapse Analytics, Data Factory, Databricks)
  • Deploy and manage GCP data services (BigQuery, Dataflow, Dataproc)
  • Manage and optimise Kubernetes clusters for data workloads
  • Implement and maintain service mesh architectures
  • Design and implement scalable microservices and data processing architectures
  • Create and maintain disaster recovery and backup solutions for data platforms
  • Implement and manage monitoring and observability solutions across cloud platforms
  • Ensure high availability and performance of production systems and data pipelines
CI/CD and Automation:
  • Design and implement CI/CD pipelines
  • Automate build, test, and deployment processes
  • Implement automated security scanning and compliance checks
  • Develop and maintain infrastructure automation scripts
  • Create and maintain documentation for automated processes
  • Implement GitOps practices and workflows
  • Manage source control and branching strategies 
Security and Compliance:
  • Implement security best practices and compliance requirements
  • Manage access control and identity management
  • Configure and maintain security monitoring tools
  • Implement secure networking configurations
  • Ensure compliance with government security standards
  • Manage security incident response procedures
  • Conduct security audits and assessments
Project Responsibilities:
  • Tackle diverse assignments spanning modern cloud and data infrastructure, from modernising legacy systems into cloud-native architectures to building and maintaining robust data platforms across multiple cloud providers. 
  • Implement secure data processing environments for classified information, create automated deployment pipelines for data applications and ML models, and design comprehensive multi-cloud solutions with robust monitoring systems. 
  • Lead technical transformation initiatives, working closely with data engineering teams to support their infrastructure needs, implement DataOps practices, and collaborate with data scientists to optimise infrastructure for their workloads, ensuring smooth operations and high performance across all platforms. 

Skills, Knowledge and Expertise

Technical Skills Required:
  • Strong experience with cloud platforms and their data services: 
    • AWS: EMR, Redshift, Glue, Lake Formation, ECS, EKS
    • Azure: Synapse Analytics, Data Factory, Databricks, AKS, Container Apps
    • GCP: BigQuery, Dataflow, Dataproc, GKE, Cloud Run
  • Expertise in Infrastructure as Code (Terraform, CloudFormation, Azure ARM)
  • Experience with data pipeline orchestration tools (Airflow, AWS Step Functions, Azure Data Factory)
  • Proficiency in containerisation (Docker) and orchestration (Kubernetes) for data workloads
  • Experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions) for data applications
  • Strong scripting skills (Python, Bash, PowerShell) for automation and data processing
  • Knowledge of monitoring tools (Prometheus, Grafana, ELK Stack) for infrastructure and data pipelines
  • Experience with configuration management (Ansible, Chef, Puppet)
Additional Technical Skills Desired:
  • Knowledge of security tools and practices
  • Familiarity with GitOps workflows
  • Understanding of networking principles and SDN
  • Knowledge of database administration
Additional Requirements:
  • Must be eligible and willing to obtain UK Government Security Clearance.
Key Attributes for Success:
  • Ability to engage effectively with stakeholders, including resolving issues and identifying new opportunities.
  • Strong interpersonal and influencing skills.
  • Adaptability to a fast-paced, ever-changing environment.
Working Arrangements:
  • Hybrid working model, with an office base in Guildford, Surrey and access to our other offices in London, Swindon and Cheltenham.
  • Typical working week might involve 2-3 days working at clients’ premises or other locations and the remainder at home or at one of our offices.
  • Some projects may require up to 5 days per week on-site with colleagues.
  • The practicalities of some project work mean that individuals may need to stay away from home during the working week.

Career Development

A Mentor will be on hand to provide support and guidance throughout your journey with Actica. You will also work with a Performance and Development Manager, often outside your project line of control, who will conduct regular reviews based on project feedback to set career objectives and identify training courses that are both relevant to your current project work and aligned with your planned career progression.

Our Commitment to Diversity
 
Actica aims to nurture a diverse workforce through inclusive working practices, promoting equality in our recruitment activities, and by employing candidates on the basis of merit. Discrimination against individuals on the grounds of protected characteristics is not permitted and we take steps to ensure that our staff are made aware of their legal responsibilities when making hiring decisions.

We offer a competitive suite of benefits.