Secure Managed Amazon Web Services (AWS)

Datapipe delivers managed services for Amazon Web Services (AWS), providing public sector organisations with on-demand, scalable public cloud compute, storage and network resources to deliver transformational projects and digital solutions. Datapipe is a Premier AWS partner, offering the optimal blend of enterprise-grade, production service delivery and deep public sector experience.


  • Amazon Web Services (AWS) with enterprise-grade Datapipe service management
  • Highly secure managed public cloud environments customised to your requirements
  • AWS services available across 2 secure UK availability zones
  • AWS services managed and delivered by UK based accredited staff
  • Suitable for OFFICIAL workloads
  • Complex workload delivery supported by dynamic auto-scaling
  • Custom onboarding services to get your organisation ‘run-ready’ included
  • Expert workload migration services including engineering, provisioning & configuration management
  • AWS Enterprise Support included in the managed service fee
  • Vast ecosystem of AWS services to access as you develop


  • AWS managed on your terms, integrated into your organisation
  • Straightforward scale that directly aligns usage to expenditure
  • Build and configure instances within minutes via secure self-service portal
  • Rapid infrastructure provisioning and deployment for new projects and ventures
  • Quickly on-board and off-board instances as your requirements change
  • Deliver appropriate security levels on a per-application/workload basis
  • Integrate with other cloud & physical environments rapidly & securely
  • Immediate access to AWS Certified and Accredited professionals
  • Migrate away from legacy IT environments for specific applications/workloads
  • Insight into AWS spend, single-vendor billing and enhanced account governance


£0.01 per instance per hour

  • Free trial available

Service documents

G-Cloud 9



George Earp

07788 721 069

Service scope

Service constraints Management up to and including the OS is mandatory. Management up to the hypervisor only is not permitted.
Not all AWS Services are available in all regions.
Customers must prove compliance with the access requirements of private networks.
System requirements
  • Typically Windows or Linux (various) virtual machines
  • Standard connectivity via site-to-site IPSec VPN or the internet

User support

Email or online ticketing support Email or online ticketing
Support response times Severity One incidents are responded to within 10 minutes of the incident being logged, 24 hours a day, 7 days a week. Incidents are logged by phone, by email or via the automated monitoring of infrastructure and applications.

Full details of the service response targets for incidents, changes and requests can be found in the terms and conditions.

The managed service includes AWS Enterprise Support, so customers receive this escalation path and its associated benefits.
User can manage status and priority of support tickets Yes
Online ticketing support accessibility None or don’t know
Phone support Yes
Phone support availability 24 hours a day, 7 days a week
Web chat support No
Onsite support Yes, at extra cost
Support levels Datapipe's support model is all-inclusive and untiered. We offer the same level of service to every Datapipe customer. Our core customer engagement principle is to be ‘Easy to Work With’. This culture is most visible in our Operations Centre, where specialist teams work closely together with a shared understanding of our customers' drivers and their required outcomes.

This is achieved by the following alignment structure:
> Account Team (Lead): Our Planners and Thinkers
• This team is responsible for understanding and communicating the required customer outcomes to the rest of the Datapipe business and is accountable for maintaining the partnership between the customer and Datapipe.
> Service: Our Deliverers and Analysts
• This team is responsible for managing the delivery of customer outcomes that have been set during the discovery, analysis and design phases. The service team is responsible for ensuring the customer's service experience meets expectations throughout live service.
> Operations: Our Engineers and Explorers
• This team is responsible for maintaining and accelerating the delivery of our customer outcomes through deep technical specialisms combined with a thorough understanding of the customer's business.
Support available to third parties Yes

Onboarding and offboarding

Getting started Getting Started: personal support from your assigned Service Delivery Manager, full user documentation and end user portal enrolment.

Datapipe has years of experience on-boarding customers into our virtual and cloud infrastructure environments. We will walk you through all considerations (typically including network connectivity and migration options) as your requirements develop, ensuring we balance risk vs cost vs timescales in the right way for your organisation.

Datapipe’s proven, expert service management delivers a single point of contact for your teams. Our Service Delivery Managers (SDM) are responsible for the successful onboarding and running of your services and create custom engagement schedules for review and discussion. Your SDM will also collaborate with you to create a custom runbook, which clearly lays out all information, contacts and processes relating to the daily management of your environments.

Your SDM will also provide one-on-one training to ensure a high level of comfort and familiarity with our interfaces and portals. This can be delivered via WebEx for large, distributed groups of end users, or at your premises, depending on your preference.
Service documentation Yes
Documentation formats PDF
End-of-contract data extraction Users can extract their data across the network via VPN or another secure network protocol, or via Direct Connect if the customer has this in place. Snapshots of virtual machine images can be provided if required, which can then be transferred across a secure link.

In the event you require a live migration of virtual machines or database data, replication services may be configured, subject to analysis by Datapipe, which may incur additional costs.

Design and service documentation is located on the Datapipe portal and can be downloaded to provide a permanent record. Other documentation, where available or feasible to produce, can be provided on request.

Depending on your target end state and specific schedule, there may be additional professional services charges applicable to help ensure that the migration and cutover of services to the new provider are aligned precisely with requirements.
End-of-contract process If you feel the need to switch providers, we will work with you to expedite the off-boarding of your AWS services or to remove the Datapipe managed service from your running environment. Datapipe’s solutions are all based on standardised infrastructure and software, with robust migration processes and consistent documentation that make knowledge transfer straightforward and complete.

As standard, if you wish to move workloads, Datapipe will provide secure access to third parties to extract your data and application configurations to help you get applications up and running in the target environment. If you want to keep the workloads running, but require the Datapipe managed service to be terminated, the tools and software can be removed, leaving the running workloads.

Depending on your target end state and specific schedule, there may be additional professional services charges applicable to help ensure that the migration and cutover of services to the new provider are aligned precisely to your requirements.

Using the service

Web browser interface Yes
Using the web interface Users can create and manage incidents, changes and requests through the Datapipe portal.

Customer documentation is stored on the portal, allowing customers to view service reports, design documentation and invoices.

Customers can create and remove users of the portal for their organisation and adjust the type of user account they have.

The following is also available through the portal:
• View current monitoring configuration per server
• Submit and/or view open/closed incidents, changes, and tickets
• View device information by individual server or by application group, including uptime, CPU, memory and virtual memory and storage
• Review the latest backup status
• Submit and/or view escalations, alerts and notifications
• Update contact information
• Utilise as a repository of all assets
• Monitor, filter, and view events and event history for devices
• Historical record of events, incidents, tickets and inventory
• Run custom reporting on performance statistics and workflow management
• Basic self service and resource utilisation analytics are available
Web interface accessibility standard None or don’t know
How the web interface is accessible The web interface is accessible through a variety of browsers and is built using HTML standards. All standard operations and input methods are supported. Data is presented in a meaningful sequence and we avoid conventions such as colour coding, so as not to limit the experience of visually impaired users. Web pages do not have timing limits and page titling is straightforward, making the site easier to navigate.
Web interface accessibility testing No specific web interface technology testing has been undertaken with assistive technology users, however good practice development methods have been used to optimise the end user experience.
What users can and can't do using the API All AWS functionality is available through the underlying AWS API.
API automation tools
  • Ansible
  • Chef
  • SaltStack
  • Terraform
  • Puppet
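As an illustration, any of the tools above can drive the AWS API directly. A minimal Terraform sketch follows; the region, AMI ID and resource names are placeholders for illustration only and are not part of this service definition:

```terraform
# Illustrative sketch only: region and AMI ID are placeholders.
provider "aws" {
  region = "eu-west-2" # AWS London region
}

resource "aws_instance" "example" {
  ami           = "ami-00000000000000000" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-managed-instance"
  }
}
```

In practice, such templates would be agreed with the managed service provider so that provisioned resources fall within the managed scope.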
API documentation Yes
API documentation formats
  • HTML
  • PDF
  • Other
Command line interface Yes
Command line interface compatibility
  • Linux or Unix
  • Windows
  • MacOS
  • Other
Using the command line interface Full access to AWS functionality is available through the AWS CLI.
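For illustration, typical read-only AWS CLI invocations are sketched below. These assume the AWS CLI is installed and configured with valid credentials; the region is a placeholder example:

```shell
# Illustrative only: assumes the AWS CLI is installed and configured
# with valid credentials for the target account.
aws ec2 describe-instances --region eu-west-2     # list EC2 instances
aws s3 ls                                         # list S3 buckets
aws cloudwatch list-metrics --namespace AWS/EC2   # inspect available metrics
```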


Scaling available Yes
Scaling type
  • Automatic
  • Manual
Independence of resources Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them.

AWS services which provide virtualised operational environments to customers (e.g. EC2) ensure that customers are segregated via security management processes/controls at the network and hypervisor level.

AWS continuously monitors the usage of its services, which underpin the Datapipe service, to project the infrastructure needed to support its availability commitments and requirements.
Usage notifications Yes
Usage reporting
  • API
  • Email
  • SMS
  • Other


Infrastructure or application metrics Yes
Metrics types
  • CPU
  • Disk
  • HTTP request and response status
  • Memory
  • Network
  • Number of active instances
  • Other
Other metrics
  • Database connections
  • Database memory
  • Standard Service monitoring (started/stopped)
  • Standard process monitoring
  • Custom infrastructure metrics (where feasible)
  • Custom application metrics (where feasible)
  • Standard AWS Cloudwatch metrics
Reporting types
  • API access
  • Real-time dashboards
  • Regular reports
  • Reports on request


Supplier type Reseller providing extra features and support
Organisation whose services are being resold Amazon Web Services

Staff security

Staff security clearance Conforms to BS7858:2012
Government security clearance Up to Developed Vetting (DV)

Asset protection

Knowledge of data storage and processing locations Yes
Data storage and processing locations
  • United Kingdom
  • European Economic Area (EEA)
  • EU-US Privacy Shield agreement locations
  • Other locations
User control over data storage and processing locations Yes
Datacentre security standards Managed by a third party
Penetration testing frequency At least once a year
Penetration testing approach ‘IT Health Check’ performed by a Tigerscheme qualified provider or a CREST-approved service provider
Protecting data at rest Other
Other data at rest protection approach The AWS service adheres to independently validated privacy, data protection, security protections and control processes. (Listed under “certifications”).

AWS is responsible for the security of the cloud; customers are responsible for security in the cloud. AWS enables customers to control their content (where it will be stored, how it will be secured in transit or at rest, how access to their AWS environment will be managed).

Wherever appropriate, AWS offers customers options to add additional security layers to data at rest, via scalable and efficient encryption features. AWS offers flexible key management options and dedicated hardware-based cryptographic key storage.
Data sanitisation process Yes
Data sanitisation type
  • Explicit overwriting of storage before reallocation
  • Deleted data can’t be directly accessed
  • Hardware containing data is completely destroyed
Equipment disposal approach A third-party destruction service

Backup and recovery

Backup and recovery Yes
What’s backed up
  • AWS Instance (Virtual Machine) Images
  • AWS EBS Disk Snapshots
  • Database Snapshots (when using AWS RDS)
  • Infrastructure as Code templates and configuration
Backup controls Backup schedules and types are agreed with the customer at the point of contract, documented and implemented as part of the onboarding process. If the customer requirements change, a ticket can be logged to amend the schedule. The appropriate customer documentation will also be updated.

Backup success is reported on a regular basis in the Service Reports provided to the customer. Any backup failures are retried the next day and failure records are reported to the customer.
Datacentre setup Multiple datacentres with disaster recovery
Scheduling backups Users contact the support team to schedule backups
Backup recovery
  • Users can recover backups themselves, for example through a web interface
  • Users contact the support team

Data-in-transit protection

Data protection between buyer and supplier networks
  • Private network or public sector network
  • TLS (version 1.2 or above)
  • IPsec or TLS VPN gateway
  • Other
Other protection between networks Within the underpinning AWS service, network devices, including firewalls and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACL) and configurations to enforce the flow of information to specific information system services.

ACLs, or traffic flow policies, are established on each managed interface, which manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security.
Data protection within supplier network Other
Other protection within supplier network Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them. AWS provides customers with ownership and control over their content by design, through simple but powerful tools that allow customers to determine how their content will be secured in transit.
AWS enables customers to open a secure, encrypted channel to AWS services using TLS/SSL, and/or IPsec or TLS VPN (if applicable), or other means of protection the customer wishes to use.
API calls can be encrypted with TLS/SSL to maintain confidentiality; the AWS Console connection is encrypted with TLS.
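The TLS requirement above can be illustrated with a short Python sketch using only the standard library. This is an illustration of enforcing TLS 1.2+ with certificate validation on the client side, not part of the Datapipe or AWS tooling:

```python
import ssl

# Sketch: a client-side context that enforces TLS 1.2 or above with
# certificate validation, mirroring the data-in-transit controls above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables hostname checking and
# certificate verification:
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

A context configured this way refuses to negotiate TLS 1.0/1.1 and rejects servers presenting invalid certificates.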

Availability and resilience

Guaranteed availability Datapipe provides SLAs backed by those that AWS currently provides for several services. Due to the rapidly evolving nature of AWS's product offerings, AWS SLAs are best reviewed directly on the AWS website via the links below:

• Amazon EC2 SLA:
• Amazon S3 SLA:
• Amazon CloudFront SLA:
• Amazon Route 53 SLA:
• Amazon RDS SLA:
• AWS Shield Advanced SLA:

Well-architected solutions on AWS that leverage AWS service SLAs and unique AWS capabilities, such as multiple Availability Zones, can ease the burden of achieving specific SLA requirements.

Our service credit mechanism is governed by our responsiveness to incident management as per the terms and conditions. Service credits are applied as a percentage of the monthly fee.
Approach to resilience Details available on request.
Outage reporting The Datapipe portal details scheduled maintenance, outages and incidents affecting multiple customers, relating to the managed service.

In the event of an incident, nominated contacts for each customer, as documented in the operational run book, are notified and updated at least every 60 minutes on progress towards resolution of the issue.

Technical Escalation Managers (TEM) ensure that Service Levels are maintained around incidents, change requests and service requests, while also ensuring that customer notifications and interactions are consistent with the customer's Solution Escalation Action Plan (SEAP). Datapipe's internal processes are built on ITIL-based methodology.

Technical Escalation Managers are also deployed onto customer incidents, depending on severity; they take ownership of resolution outcomes and provide a central point of contact for all communications.

AWS provides service status pages on its portals and issues outage notifications for the core AWS services. A public dashboard is available; a personalised dashboard with API and events, plus alerting by email, SMS or messaging, can be configured as required.

Identity and authentication

User authentication
  • 2-factor authentication
  • Identity federation with existing provider (for example Google apps)
  • Dedicated link (for example VPN)
  • Username or password
Access restrictions in management interfaces and support channels Access is limited via a secure two-factor authentication method, using 'least privilege' access to systems. Customers can log tickets via email or telephone, and all initial interactions are security-validated against a list of known email addresses, persons, telephone numbers and security information. Datapipe performs all management through Secure Management Environments (SME), a walled-garden approach to customer identity management. An engineer must first provide a username and FIPS 140-2 compliant one-time password (OTP) combination, followed by a valid Active Directory password associated with the user's lowest-level account. All customers can use their own authentication source.
Access restriction testing frequency At least once a year
Management access authentication
  • 2-factor authentication
  • Dedicated link (for example VPN)
  • Username or password
Devices users manage the service through
  • Dedicated device on a segregated network (providers own provision)
  • Dedicated device over multiple services or networks
  • Any device but through a bastion host (a bastion host is a server that provides access to a private network from an external network such as the internet)

Audit information for users

Access to user activity audit information Users contact the support team to get audit information
How long user audit data is stored for At least 12 months
Access to supplier activity audit information Users contact the support team to get audit information
How long supplier audit data is stored for At least 12 months
How long system logs are stored for At least 12 months

Standards and certifications

ISO/IEC 27001 certification Yes
Who accredited the ISO/IEC 27001 SNR Certification, Certification No.: SNR 11399498/15/I
ISO/IEC 27001 accreditation date 20 October 2016, Renew Date: 05 October 2018
What the ISO/IEC 27001 doesn’t cover Anything above the hypervisor is not covered by the Datapipe ISMS. Datapipe uses a shared security model to ensure all parties are aware of their responsibilities and agree how to manage risk.
ISO 28000:2007 certification No
CSA STAR certification No
PCI certification Yes
Who accredited the PCI DSS certification NTT Security Ltd, Certificate ID: o4Anq6RuYfK2dN1
PCI DSS accreditation date 15 September 2016, Renew Date: 15 September 2017
What the PCI DSS doesn’t cover As per industry best practice, our PCI scope is restricted to specific platforms. Any platform that is not in the Datapipe PCI scope is not covered by this certification. For platforms in scope, anything above the hypervisor is not covered by the Datapipe PCI scope. Datapipe uses a shared security model to ensure all parties are aware of the scope of accreditations and their responsibilities, and agree how to manage risk.
Other security accreditations Yes
Any other security accreditations
  • Solution Architect Professional and Associate
  • PSN Code of Connection
  • Cyber Essentials Plus

Security governance

Named board-level person responsible for service security Yes
Security governance accreditation Yes
Security governance standards
  • ISO/IEC 27001
  • Other
Other security governance standards PCI DSS
Information security policies and processes In order to protect both ourselves and our customers, we have invested in maintaining core security certifications for ISO 9001, ISO 27001, Cyber Essentials and PCI DSS 3.2. The Datapipe Executive Team is committed to providing a robust framework that prioritises security across our business. The board has recognised that Information Security and Cyber Security are vital to protecting any organisation's key assets and to supporting the global digital economy. Security risks, requirements and controls are primarily designed around the CIA triad: Confidentiality, Integrity and Availability.
Managing security in this manner allows for a practical, applicable and cost-effective design that meets our business, regulatory and compliance requirements. As we are fully certified in both ISO 27001 and PCI DSS, we have robust, compliant policies that are regularly audited internally. Policy implementation is measured through metrics which are reported quarterly to the board; direction is then communicated to heads of department for rectification.

Operational security

Configuration and change management standard Conforms to a recognised standard, for example CSA CCM v3.0 or SSAE-16 / ISAE 3402
Configuration and change management approach Datapipe follows the ITIL definition of change management to provide a standardised method for the management of the risk and impact associated with amending live configuration items. The process covers both Datapipe and customer configuration items.

Changes are categorised as Standard, Normal or Emergency allowing for appropriate due diligence to be performed.

The Change Team ensures the necessary governance is in place at all stages of the process, is responsible for managing quality and adherence to the process, and provides final approval. There is a seven-point process: Logging, Assessment, Scheduling, Testing and Plans, Communications, Reporting and Governance.
Vulnerability management type Conforms to a recognised standard, for example CSA CCM v3.0 or SSAE-16 / ISAE 3402
Vulnerability management approach Datapipe Security regularly carries out vulnerability scans using authorised scanning vendors on external interfaces, as well as internal scans using market-leading products. Results are reviewed and remediation plans set by raising tasks within our management system for engineer completion. We closely monitor multiple vendor websites and receive vendor emails for patch releases, vulnerability notifications and vendor-specific warnings. We are also signed up to NCSC CiSP. Notifications of vulnerabilities are distributed to our relevant teams, who inform our customers. Datapipe follows standard patching timeframes of 30/60/90 days but, for government customers, aims to apply critical patches within 14 days.
Protective monitoring type Conforms to a recognised standard, for example CSA CCM v3.0 or SSAE-16 / ISAE 3402
Protective monitoring approach Datapipe utilises market leading unified security management tools for our protective monitoring solution on our platforms. These combine five essential security capabilities: Asset Discovery, Behavioural Monitoring, Vulnerability Assessment, SIEM and Intrusion Detection into a single management plane. Datapipe, through the software, has a complete view of our estate ensuring the complete integrity of our platform by identifying potentially compromised systems and suspicious behaviour, assessing vulnerabilities, correlating and analysing security event data.
Incident management type Conforms to a recognised standard, for example, CSA CCM v3.0 or ISO/IEC 27035:2011 or SSAE-16 / ISAE 3402
Incident management approach Where Datapipe has not acknowledged an issue through proactive monitoring, users can report incidents by phone or email, 24x7, to the service desk.

Datapipe follows the ITIL definition of Major Incident prioritisation:
Sev 1 Critical - Single Client Total Outage.
Sev 2 Major - Single Client Impairment.
The Major Incident Management Process is implemented by the Datapipe Operations team with the goal of managing unplanned service interruptions. This includes customer communications (by phone and email) to a defined schedule. The Operations group, specifically the Technical Escalation Manager (TEM), is responsible for initiating and managing the incident reporting process.

Secure development

Approach to secure software development best practice Independent review of processes (for example CESG CPA Build Standard, ISO/IEC 27034, ISO/IEC 27001 or CSA CCM v3.0)

Separation between users

Virtualisation technology used to keep applications and users sharing the same infrastructure apart Yes
Who implements virtualisation Supplier
Virtualisation technologies used Other
Other virtualisation technology used AWS proprietary
How shared infrastructure is kept separate Customer environments are logically segregated, preventing users and customers from accessing unassigned resources. Customers maintain full control over access to their data. Services which provide virtualised operational environments to customers ensure that each customer is segregated, preventing cross-tenant privilege escalation and information disclosure, via hypervisor and instance isolation.

Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. The Amazon EC2 firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets pass through this layer. The physical random-access memory (RAM) is separated using similar mechanisms.

Energy efficiency

Energy-efficient datacentres Yes


Price £0.01 per instance per hour
Discount for educational organisations No
Free trial available Yes
Description of free trial Datapipe will work with organisations to create a custom PoC based on mutually agreed criteria and AWS funding.

Typically this would be an MVP to 'prove before you use', which we would limit to 2-4 weeks with a clear scope.

  • Full resilience
  • Production applications/workloads
  • Large-scale data migrations
  • Limited network


Pricing document
Skills Framework for the Information Age rate card
Service definition document
Terms and conditions document