I work in MoJ’s User Centred Policy Design team. In spring 2018, the team and I started to look at the experience of people with legal issues and explore what support they needed to prevent their problems from escalating.
This blog post is about our journey on this project and what we achieved one year later.
From user research to strategy
To understand what people with legal problems do to solve their issues, and what blocks them from resolving their situation, we observed sessions at advice centres, ran in-depth interviews with advisors and surveyed people with legal problems.
As a result, we mapped the journey of those people and collected insights on the main areas to improve when delivering legal support.
We shared this user-centred evidence with decision-makers and it influenced the government’s Legal Support Action Plan.
From commitments to prototyping
One of the commitments in the action plan was to better coordinate and signpost legal support services. This became the next brief for our team. We set out to define and inform what an effective pilot for online signposting could look like.
We had high ambitions. We initially explored the possibility of testing a solution for any legal problem. We sketched some early ideas and used them to prompt conversations with subject matter experts and legal advisors. This helped us identify which concept had the most potential to help people with legal problems.
To develop the preferred concept, we focused on one legal issue: housing disrepair. This helped us gain a deeper understanding of how, where and in what format signposting could help. It also enabled us to build a robust model that could be scaled up to other areas of law if proven successful.
Once again, we went out to research and this time, we focused our efforts on understanding signposting in housing disrepair.
Based on research findings, we prototyped an online service and tested it with members of the public and expert advisors.
The service aimed to mitigate the risk of getting stuck when trying to resolve a legal problem by providing people with the right advice at the right time. We used nudge techniques to make people feel listened to, and we created components to provide the reassurance people need to diagnose a legal issue and feel empowered to take action. For example, we asked users if they had children, because we knew from research that parents want to disclose this information when receiving support, even though it is not relevant to the guidance given.
From prototyping to piloting
We documented the feedback from the first round of usability testing, prioritised the most pressing issues, and wrote up potential solutions that we brainstormed as a team.
Before we iterate further, we need to address questions about product ownership and resourcing, and take the solution through ministerial and other policy clearances.
In the meantime, we are discussing evaluation methods. In digital terms, a service is launched and then iterated as quickly as possible through alpha and beta phases, using feedback collected from users at each stage. Policy teams, however, tend to design pilot projects fully in advance, let them run for a year or more, and then conduct a robust statistical evaluation to determine how effective the chosen method was. The two approaches to testing and learning are very different, each with their own pros and cons.
One year after starting our journey into this policy area, these are our key achievements:
- planted another seed of user-centred policy design and supported its growth
- worked with Policy to develop a user-centred strategy
- helped Policy test potential solutions before committing to a particular approach
- used user needs to inform and design a policy pilot
- encouraged Policy to consider embedding quick feedback loops in policy pilots.
This work has followed a dream process of user-centred policy design: from user research to strategy, from commitments to prototyping, and now from prototyping to (hopefully) an agile pilot.