Just over one year ago we began working on a service for HMPPS to support a new policy aimed at reducing reoffending. My fellow researcher on this product, Gemma Hutley, wrote a piece on some of our early research inputs and challenges, with a focus on how to conduct user research when there are no current users.
In this follow-on piece, I want to shed some light on our service rollout activities and how user research has informed this process.
The prison estate in England and Wales consists of 120 separate entities. They share many characteristics, but there are also a surprising number of differences. This variation is driven by numerous factors, including the prison’s security level, its main function, the gender and age range it covers, and whether it is publicly or privately run. Beyond these fairly obvious differences, our site visits revealed further ones which we realised could adversely affect the suitability of the service we were building. Short of visiting every prison (we went to 24 in total), we decided to conduct a ‘remote’ estate analysis to identify any potential show-stoppers. This raised two challenges: how do you search for something without knowing exactly what you’re looking for, and how can you ever be certain of the absence of a problem? You may simply not have looked hard enough, or in the right place!
The way we approached this challenge was to list the key factors our service depended upon and to spell out the assumptions we had made. We then brainstormed how best to test these. This led us to contact a range of head office staff who had unique knowledge of specific aspects of the prison estate and to conduct targeted conversations with them. We also spoke to senior prison management and consulted detailed prison statistics.
Finally, we reached out to a handful of individual prisons to clarify their sometimes unique setup, processes and ways of working: for example, HMP Peterborough, the only prison holding both male and female inmates, and HMP Parc, one of the few prisons holding both under- and over-18-year-olds. By the end of this process, we were reasonably confident that we would not encounter significant issues related to prison variations.
The aspiration for our service was that end users should be able to use it without training. User testing showed this was achievable. However, the initial setup requires specific information to be added to one of the systems our service relies on. We decided to create an onboarding guide to ensure a smooth rollout with minimum support needs.
Following best practice, we user tested the onboarding guide to make sure it worked for users. We learned a lot during these sessions: how people read our materials, how they use their systems in their own ways, and how even IT-savvy people can sometimes struggle with quite simple things. User testing was an invaluable exercise. We were able to iterate on the content having seen what worked and what didn’t, as well as learn more about the shortcuts people use.
We used journey mapping extensively on our project, mainly to understand processes we were not familiar with or to define how something should work. As we were working through the details of onboarding, our service designer mapped the onboarding process. Creating a journey map for our own internal process was enlightening. It really helped us understand the dependencies and ensured we got the job done efficiently.
One of the benefits of having a service used in the real world is the user feedback that can be obtained. For the first time, we can hear from real users interacting with the service in their actual context.
We designed a feedback process and agreed it with the first two prisons that went live. However, we quickly ran into problems, with our survey link not being accessible to all prisons and urgent custody events taking precedence over our feedback requests. We have now scaled back the cadence of the feedback rounds and have also placed a feedback link within our service, allowing users to share their experience with us at any time. We are collecting, categorising and prioritising the feedback in a spreadsheet and discussing it as a team on a weekly basis, creating backlog tickets as appropriate.
We have staggered the rollout to the 120 prisons, with most of them onboarded prior to the policy live date of 1 October 2019. With the development team working flat out, the rest of the product team has been taking turns providing rollout support and tackling the next backlog items. Rollout support has turned out to be quite time-intensive. Most queries relate to setup issues, clarifications on rollout planning and requests for exceptions.
So, what’s next? Our ‘baby’ has come a long way, from initial concepts and lists of user stories to a live service. The rollout is nearly complete (internal issues have paused the rollout to a number of prisons) and bugs are being reported and resolved. We must not lose sight of the fact that we’ve ‘only’ launched an MVP: some key functionality is still outstanding, and some of our early user research insights have not yet been translated into designs. In addition, the live service user feedback is already generating further backlog items.
There is still lots to do; we may never be entirely finished. We have certainly learned a lot along the way, which we’ll take with us to tackle our next challenges. But for now, we hope to have contributed a little to helping offenders rehabilitate and to making prison staff more effective and efficient in some of what they do.