This opportunity is closed for applications

The deadline was Thursday 17 October 2019

Professional Services for the Transaction Monitoring Platform

35 Incomplete applications

29 SME, 6 large

23 Completed applications

13 SME, 10 large

Important dates

Thursday 3 October 2019
Deadline for asking questions: Thursday 10 October 2019 at 11:59pm GMT
Closing date for applications: Thursday 17 October 2019 at 11:59pm GMT


Summary of the work
Build and maintain the HMRC Transaction Monitoring and Customer Insight Platform (TxM) live service, operating a Continuous Integration/Continuous Delivery (CI/CD) capability in public cloud (currently AWS) using open-source technologies.

Run and maintain the live-service, high-availability TxM infrastructure (Splunk, Scala applications, Postgres, MongoDB, ELK, AWS S3, Kafka, RStudio, Jupyter Notebooks).
Latest start date
Wednesday 1 January 2020
Expected contract length
2 Years
Organisation the work is for
HMRC
Budget range
Our expectation is that there will be a budget of £6.3m per annum which will be allocated between up to 2 suppliers.

About the work

Why the work is being done
We are looking for up to two partners with expertise in building and operating a Continuous Integration/Continuous Delivery (CI/CD) capability in public cloud. TxM is a Platform-as-a-Service (PaaS), currently hosted in AWS (London Region), that monitors all of HMRC's modern, customer-facing digital services.

TxM went live in February 2014 and is continually evolving to meet changing user needs. The platform vision and roadmap set out the future direction, centred on continually improving TxM's usability, operability and security, simplifying the path to production, and widening exploitation of our customer insight platform to deliver machine-learning-based production features across HMRC's processes.
Problem to be solved
Maintaining and enhancing HMRC's TxM platform capability to serve audit content to HMRC's customers, as well as services to internal staff, while ensuring an optimal user experience. Without a performant, secure and highly available TxM platform, HMRC's customers may be unable to access MDTP's constituent services, may suffer a degraded user experience, or may be exposed to risk.
Who the users are and what they need to do
"As a user of services on
I need HMRC to protect my personal identifiable information, and the public revenue from unauthorised access and/or attack
So that I can safely conduct my tax obligations with confidence
So that I can accurately and simply provide authentication and attribute information.

As an Officer of HMRC
I need accurate, reliable and true information about the activity of customers using our customer facing services.
So that
I can detect, prevent, and pursue unauthorised and/or criminal activity
I can assist customers with their interactions with HMRC services
I can analyse customer activity and behaviours"
Early market engagement
Bristol supplier day: 12 July 2019
Suppliers may also request a copy of the presentation slides and the Q&A document that followed by emailing
Please note:
1. The Classic Services have been removed
2. MDTP is being procured as a separate DOS exercise
Any work that’s already been done
TxM went live in February 2014 and is now in its third major iteration:
2014 - 1 x SME cloud provider
2016 - 2 x SME cloud providers (active-active)
2017 - 1 x hyperscale cloud provider
The platform roadmap envisages TxM's continued evolution as a cutting edge PaaS, so there is plenty of transformative feature work in addition to the live running aspects.
Existing team
HMRC expect the Service Providers to work alongside other suppliers and internal staff, including apprentices, as part of blended teams.
There are 5 teams, typically comprising the following roles:
- infrastructure engineer
- software developer
- QA/tester
- Delivery Lead
- Product Owner
- Business Analyst
- Service Designer
- User Researcher
Current phase

Work setup

Address where the work will take place
Primary location: 10 South Colonnade, London E14. This office will move in late 2019/early 2020 to Stratford, East London.

Primary location: Accounts Office, Victoria Street, Shipley BD98 8AA. This office will move in late 2020 to 7 and 8 Wellington Place, Leeds.
Working arrangements
The supplier will be required to co-locate with the existing platform teams (mixed teams comprising both internal staff and other contractors) 5 days per week.

Occasional travel may be required to other HMRC Delivery Centres. Expenses will be paid as per agreed contract rates.

We expect the successful supplier to provide upskilling to permanent staff to increase internal capability.
Security clearance
All supplier contractors must hold or be able to achieve Security Clearance (SC). These posts are NOT reserved for UK nationals. To be suitable for security clearance, contractors MUST have been continuously resident in the UK without any significant breaks. Further details of eligibility for clearance can be found:

Additional information

Additional terms and conditions
All personnel engaged in the provision of this service must have a minimum of three years' experience in their designated role/skill specialism, with an expectation of a minimum of five years' experience for lead roles.

Additional HMRC-specific terms, in relation to tax compliance among other things, will be added to the call-off contract.

Skills and experience

Buyers will use the essential and nice-to-have skills and experience to help them evaluate suppliers’ technical competence.

Essential skills and experience
  • Providing additional and/or value added activities when acting in the role of a partner supplier to an organisation
  • Translating business problems and user needs into technical designs using agile methodology
  • Proficient at writing code (Scala, Python, Ruby) to solve problems, automating wherever it adds value and incorporating security best practices at all times
  • Hands-on experience of various AWS services such as CloudFormation, S3, ECS, EC2, RDS, Lambda, SQS, SNS, Stacks and IAM
  • Proponents of test-driven development (TDD) practices, writing top quality unit tests and code
  • Devising and implementing unit/component, integration, system and acceptance tests to meet functional and non-functional requirements
  • Deep understanding of distributed Source Control, preferably git/GitHub
  • Deep understanding of application deployment strategies and Continuous Integration
  • Deep understanding of inter-application communication protocols
  • Deep understanding of navigating and troubleshooting Linux servers
  • Developing machine-learning environments and models using RStudio, Jupyter Notebooks and AWS SageMaker.
  • Deep understanding of stream-based data ingestion pipelines, including a range of physical and software-based collection methods.
  • Deep understanding of complex multi-source audit event management, including normalisation, ontology development, summarisation, and criminal-justice-standard confidentiality and data integrity.
Nice-to-have skills and experience
  • Significant, similar contract with a public sector body
  • Experience of developing using Terraform, Kubernetes, ELK, Sensu, ClickHouse, MongoDB
  • Splunk cluster administration
  • Previous experience of operating a sensitive, protected environment at OFFICIAL-SENSITIVE or higher.

How suppliers will be evaluated

All suppliers will be asked to provide a written proposal.

How many suppliers to evaluate
Proposal criteria
  • Earliest start date
  • Suppliers will need to demonstrate that they will fit with our approaches to software delivery.
  • Collaborative working with other suppliers and in-house teams, including coaching, mentoring and knowledge sharing
  • Proponents of test-driven development (TDD) practices, writing top quality unit tests and code
  • Practising pair programming and understanding the value of peer review as part of maintaining focus on quality
  • Working in agile delivery teams within a product-centric environment, and particularly comfortable with kanban
  • A good communicator, with the soft skills to talk to the business as well as to technical staff
  • Teams will have demonstrable DevOps experience, showing ability to release at least every sprint and ideally much more frequently
  • Continuous cost based analysis to drive efficiencies and reduce total cost of ownership in every aspect of design and development
  • Drive innovation to transform legacy services into modern, customer-centric solutions which delight the customer, ideally demonstrating how they iteratively migrate legacy services in complex mixed-ecosystem environments.
  • Suppliers should demonstrate how they've identified risks and dependencies and offered approaches to manage them. This needs to focus both on: a. Information Security risk management b. Project risk management
  • Suppliers need to demonstrate a good understanding of Transaction Monitoring capabilities and how they function in one or more industry sectors.
  • Suppliers need to demonstrate a sound working knowledge of the General Data Protection regulation and any challenges this may provide in delivery of capabilities.
  • Demonstrate they will work alongside HMRC permanent staff and other contractors. Suppliers must demonstrate how they have delivered within a. Multidisciplinary teams b. Mixed ecosystem teams c. Multi-site teams
Cultural fit criteria
  • Operates a no-blame culture, encouraging people to learn from their mistakes
  • Able to start work immediately
  • Have excellent communication skills with staff at all levels of the organisation
  • Will take responsibility for their work while also pairing/peer reviewing by default
  • Willing to collaborate and partner, including with other suppliers and HMRC staff at all levels
  • Proactively share knowledge and experiences with members of team, especially with HMRC staff
  • Be innovative and promote ideas and suggestions as applicable
  • Focus on achieving value for money in all activities
Payment approach
Time and materials
Additional assessment methods
  • Case study
  • Presentation
Evaluation weighting

Technical competence


Cultural fit




Questions asked by suppliers

1. Is there an incumbent in place?
HMRC currently have other suppliers engaged in this activity but incumbents receive no preferential treatment or consideration. The successful supplier will be required to work within and alongside teams made up of HMRC and/or other suppliers, dependent upon the proposed solution to the problem.
2. Is there flexibility/choice about onsite location of the work? Will the successful supplier have a choice of working onsite at Leeds only? Or, will there be a requirement to have people across both sites – London and Leeds?
Suppliers must have capacity to deliver at both locations to be considered for contract award. Where possible HMRC may take a flexible approach to location when allocating individual statements of work.
3. If we're putting in a partnership bid, where would you like us to mention our partners – in the bid response at Stage 1 or should we mention that only at Stage 2?
A partnership bid does not need to be declared at stage one when responding to how you meet the requirement criteria. This must be declared in any Stage 2 proposal if you are successful in reaching that stage.
4. You mention ''Case study and Presentation'' as additional assessment methods. Normally at Stage 2, DOS requires us to submit a proposal followed by a presentation. Can you clarify / confirm what your assessment methods for Stage 2 / Assessment will be?
As per the DOS Framework HMRC will request a written proposal as part of the Stage 2 evaluation. HMRC reserves the right to request a case study and presentation from suppliers as additional evaluation methods as deemed necessary. This will be confirmed to suppliers who are successful in reaching Stage 2.
5. Do you have a preference for a multi-supplier bid?
HMRC require a single proposal to deliver all aspects of the requirement. We are not prescriptive, as per the terms of the framework this proposal could come from a consortium of suppliers who come together to deliver (with one lead supplier or a newly formed company, jointly owned by the members of the consortium) or by a single supplier supported by one or more subcontractors. We also welcome bids from suppliers who intend to fulfil 100% of the service through their own capability. Any consortium or subcontractor arrangements must be made clear in the bid information.
6. Please can you confirm how you will evaluate the price element at Stage 2? Your response 2 in Q&A indicates that pricing will be based on future individual SOWs. Will the Stage 2 evaluation therefore be based only upon rate-card, as that is unlikely to give a fair comparison of specific outcomes? Or, given that you are looking for specific approaches, will you be proposing various scenarios for suppliers to price up for evaluation?
Costing will be evaluated based on the pricing of the solution to the scenario in the written proposal. Suppliers will be asked to detail how they would go about delivering the solution, and details of the roles in the team that they would use making up the total price. A rate card will be issued to suppliers to populate and used to assess the breakdown of costs in the pricing solution.
7. Your response to Q&A 4 indicates that you will choose a supplier “dependent upon the proposed solution to the problem”. The usual Digital Market Place stage 2 proposal criteria would ask suppliers to describe such an approach in their proposal but this request appears to be missing. Please can you clarify where suppliers should be outlining their proposed solution at stage 2?
Suppliers at Stage 2 will be provided with details of a scenario on a possible piece of work the successful supplier(s) might be asked to complete as a statement of work under the contract as part of the written proposal. This is where they will be asked to propose a solution.
8. Are you happy for us to provide more than one project example per question as long as it’s within 100 words?
You can utilise the word count in whichever way you feel best evidences your ability with regard to that criterion. We are looking for clear evidence of when you have performed the skills or experience previously. Give as much detail as you can (within the restricted word count) about what you did and the impact. Ensure that every part of the criterion is covered in your response.
9. You are asking no evidential questions around suppliers' experience of taking over services from existing supplier(s). We assume the incumbent(s) will not be exiting on 1st January 2020 (the latest start date). Please can you confirm how long they will remain on-site to permit handover to the new supplier(s)? We assume that some form of handover will form the basis of the initial SOWs that you issue under this framework. Is that correct?
Handover: we expect a handover period of a maximum of 3 months. This period may vary at the discretion of HMRC but will not exceed 3 months.
10. For the question "Deep understanding of stream based data ingestion pipelines, including a range of physical and software based collection methods." could you give some examples of the physical based collection methods you have in mind?
HMRC has a mix of private and public cloud and physical network infrastructure. As a result event pipelines need to consider both software based solutions which can operate in software defined networks, and physical components more suitable in traditional physical environments. Examples of physical network stream capture might include physical span ports or network taps coupled with a physical network streaming of that traffic to a physical disc cache.
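Purely as an illustration of the software-based side of such a pipeline (the names and event shape below are hypothetical, not TxM's actual design), a minimal software collection stage might be sketched as:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Event:
    source: str    # hypothetical label, e.g. "app-gateway" or "span-port-3"
    payload: bytes

def software_collector(records: Iterable[bytes], source: str) -> Iterator[Event]:
    """Wrap raw records from a software-defined source (e.g. a message-queue
    consumer) as events. A physical collection method would instead rely on a
    span port or network tap streaming traffic to a disc cache before this stage."""
    for record in records:
        yield Event(source=source, payload=record)

def ingest(events: Iterable[Event]) -> List[Event]:
    """Stand-in for a downstream ingestion stage: drop empty payloads."""
    return [e for e in events if e.payload]

collected = ingest(software_collector([b"login", b"", b"submit"], "app-gateway"))
```

In practice the software path would consume from a broker such as Kafka or RabbitMQ rather than an in-memory list; the sketch only shows where the two collection methods converge.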
11. How many staff the supplier is expected to provide? Any breakdown of skills which you can advise at the moment?
Initially we are looking for approx. 30 staff covering the full range of scrum roles, including DevOps and data scientists. The numbers in each particular role will vary. In addition, TxM is expanding, so we expect demand in all roles to increase over the period of the contract.
12. You mention 'Significant, similar contract with a public sector body': are you looking for size/capacity or contract value?
We are looking for evidence of successful delivery of this type of contract within the public sector over extended periods. The scope, scale and value of that evidence may be considered as part of its weight.
13. Splunk cluster administration – could you elaborate on what you’re looking for in this question please?
Splunk cluster administration: TxM operates a significant Splunk cluster. Splunk is a COTS software product. We are looking for evidence of experience in the live-service operational and infrastructure management of a Splunk cluster.
14. Are you looking for experience in ‘Official’ levels and higher OR ‘Secret’ levels and higher?
The TxM platform is GPMS OFFICIAL-SENSITIVE. We are looking for evidence of experience of developing and maintaining systems at a minimum of OFFICIAL-SENSITIVE. Similar evidence of building and maintaining systems at higher classifications will also be considered.
15. What stage is the service in terms of the service standard: discovery-> alpha -> beta -> live?
16. What % of the service workloads are running on each cloud iteration? All on the new or a mix spread between 2018, 2016 and 2014 versions?
All TxM services operate on the latest cloud iteration available.
17. What is the link between transaction monitoring system and machine learning? Where is the data going to come from – is it from the transaction monitoring system?
The TxM data repository is used to collect and store customer activity data. We have built an ML environment into which TxM data can be sourced via persistent historical and live event-sourcing pipelines. We then use ML in a range of functions for fraud detection and categorisation and, in future, for Business Intelligence functions.
18. Is HMRC trying to create some predictive models or is there any additional data engineering required to do that?
Both. We are creating predictive models. However their scope is limited due to low levels of event normalisation and ontology. We are developing operating models to iteratively improve this picture at a strategic and operational level.
19. Is HMRC looking only at development environments, or looking to productionise the ML models that you are developing in RStudio, SageMaker and Jupyter Notebooks?
We already productionise some of our ML models. We are developing more persistent deployment pipelines for ML models and plan in the very near future to increase the number of live models we deploy and use.
20. Could you please describe your Scala Apps in terms of business function, application complexity and number of integrations.
We use Scala apps to deliver custom search and UI features. These are used to generate customer activity timelines and views for use by a wide range of business functions including compliance, fraud prevention, criminal investigation, cyber security, and business intelligence.
21. Could you please describe the number of Postgres and Mongo databases and their average dataset size.
We have multiple Postgres and Mongo DBs. They range in size from small summarisations to large (multi terabyte) solutions. We tend to make technology choices based on suitability to a given problem rather than limiting choice to a specific preferred product.
22. Could you please describe your ELK solution in terms of modules, customisations and average dataset size, and explain your deployment topology.
We use Elastic for persisting a number of smaller data sets, such as the UK address base dataset. These are typically supporting functions to the main activity dataset. We use logstash and kibana for service and performance monitoring.
23. How many Jupyter Notebooks do you have and what is their complexity?
The number of notebooks varies over time. At present we have between 6 and 10 notebooks in play at various stages; some of these are being developed by our Data Science user community on our platform. At present most of the notebooks are relatively trivial. We are actively developing a range of features using this capability which will enhance our ability to determine particular characteristics. This will increase as planned iteration of our ML platform delivers multi-tenant functionality and opens our platform to new user communities.
24. You gave a team composition, do you expect that selected supplier should provide a full team or you plan to mix your and supplier personnel within one team?
We operate a multi supplier model with blended teams. We expect two or more suppliers in play at any one time working alongside our own staff as part of blended delivery teams.
25. Can you provide statistics for incidents and change requests?
We experienced a number of incidents/bugs over the last year. These ranged in complexity from minor bugs to a full system outage (following a failed change regression) which lasted 36 hours and caused a one-month clean-up effort. We typically release features multiple times per day and ideally avoid major complex releases. We operate a continuous delivery methodology. We don't work to traditional change request models, except where major changes are commissioned by transformation programmes. We've had two such requests in the last year, both requiring growth in teams. We expect this growth is highly likely to continue as more demand is placed on resources and capabilities.
26. What is your strategy regarding Splunk and ELK usage? Do you plan to settle on one solution?
We expect to continue to use ELK in its current roles. We plan to migrate away from Splunk in 2020, in favour of S3 storage for our core event repository.
27. Could you please describe your Splunk solution in terms of modules, customisations and average dataset size, and explain your deployment topology.*

*Answered in two parts due to word limit (Part A)
We currently use Splunk as a core event repository. We stream events to Splunk using Splunk forwarders, Traefik, or RabbitMQ depending on the source. We operate multiple resilient indexes (currently 7). We use Splunk native search capability. We design and develop custom Splunk dashboards for use by business teams. We operate regular Splunk batch searches which scrape key events from Splunk, normalise and summarise them in Postgres, then build high-performance custom Scala UIs which are integrated into Splunk as custom apps.
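The scrape/normalise/summarise flow described above could be sketched roughly as follows; the field names, and the Counter standing in for a Postgres summary table, are assumptions for illustration rather than TxM's actual schema:

```python
from collections import Counter

def normalise(raw_event: dict) -> dict:
    """Map source-specific field names onto a common schema, the kind of
    normalisation applied to scraped events before summarisation."""
    return {
        "user": raw_event.get("uid") or raw_event.get("user_id", "unknown"),
        "action": (raw_event.get("evt") or raw_event.get("action", "")).lower(),
    }

def summarise(raw_events: list) -> Counter:
    """Count normalised (user, action) pairs; a real batch job would upsert
    these counts into a Postgres summary table backing the custom Scala UIs."""
    return Counter((e["user"], e["action"]) for e in map(normalise, raw_events))

summary = summarise([
    {"uid": "u1", "evt": "LOGIN"},
    {"user_id": "u1", "action": "login"},
    {"uid": "u2", "evt": "SUBMIT"},
])
```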
28. Could you please describe your Splunk solution in terms of modules, customisations and average dataset size, and explain your deployment topology.*

*Answered in two parts due to word limit (Part B)
We currently ingest between 750GB and 1.5TB per day, typically ingesting and indexing within 1 second of the event. Our total cluster covers 5 years of history and amounts to approx. 0.5PB of data. We do not use any additional Splunk modules.
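As a rough sanity check on these figures (assuming the midpoint of the stated daily range, 365-day years and decimal units), raw ingest over 5 years would come to roughly 2PB, so a 0.5PB cluster implies around a 4:1 overall reduction from compression and/or selective retention:

```python
# Rough arithmetic on the figures quoted above (assumptions: midpoint of the
# 750GB-1.5TB daily range, 365-day years, decimal units throughout).
avg_daily_tb = (0.75 + 1.5) / 2                        # 1.125 TB/day midpoint
raw_over_5_years_pb = avg_daily_tb * 365 * 5 / 1000    # ~2.05 PB of raw ingest
stored_pb = 0.5                                        # total cluster size quoted
effective_reduction = raw_over_5_years_pb / stored_pb  # ~4:1 overall
```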
29. The new Digital Outcomes and Specialists Framework contract (DOS4) removed the liability cap for Data Protection legislation breaches and as part of clarification, it was stated that it was a decision for buyers at the call-off stage as to whether to include a cap. This is potentially a major issue for a lot of suppliers. Will there be any access to personal data as part of this contract and if so, would you be open to agreeing to a reasonable cap on liability at the contract negotiation stage?
There will be a requirement to access high volumes of personally identifiable information held within our environments. As a result, all personnel with direct unsupervised access are required to hold or be able to achieve SC clearance. HMRC's standard position in respect of Personal Data or GDPR is to require the Supplier to indemnify HMRC, without limit, against all losses, fines and/or expenses arising in connection with any breach on the part of the Supplier (or Sub-Processor) of any relevant obligations. This encompasses both GDPR regulatory fines and other potential litigation and charges arising from breaches of personal data.