I’m overdue a book list. This is what I’ve been reading over the past few months (stretching back to the end of 2018). The rest of this list is books I’m planning to read over the next few months. Dare […]

Original source – Ben Holliday

A woman sitting at a table using a tablet device. The page shown is the ‘renew a tax disc’ service

In 2013, I created the Digital Inclusion scale – a 9-point scale measuring digital capability – as part of the then government’s Digital Transformation Programme and the Digital Inclusion Strategy.

The intention of the scale was to help service teams see the breadth of diversity in their users and appreciate the range of digital skills and capability they had. It was also intended to identify potential opportunities to help people use government services.

Six years on, the scale has become one of government’s most widely used tools for highlighting people’s digital capability.

The digital inclusion scale ranges from 1 to 9: 1 = never have, never will; 2 = was online, but no longer; 3 = willing and unable; 4 = reluctantly online; 5 = learning the ropes; 6 = task specific; 7 = basic digital skills; 8 = confident; 9 = expert

Why 9 points and not 10?

It was developed using evidence from GDS’s digital landscape research, data from the BBC and data from the Office for National Statistics (ONS).

The scale had 10 points when it was first drafted, but anything beyond point 9 felt meaningless and unrepresentative of the 20% of the UK population who lacked basic digital skills.

Together, the points on the scale cover the range of digital capability: from people who had consciously decided not to use the internet, and so had never been online and never would, to experts whose primary income came from using online services.

Basic digital skills, as defined by digital organisation GO On UK – which has since become doteveryone – were deemed to be the minimum capability that people needed in order to use the internet effectively.

How we developed the inclusion scale

In the early days, we did not really know who would use the scale or how it would be used, especially in the context of designing government services. It was a case of trial and error when testing out the feasibility of the scale.

At first, the idea was simply to use the scale as a way to assess the potential digital capability needed to complete certain government transactions. We focused on testing out the scale with teams working on the 25 ‘digital by default’ exemplar services.

We asked service owners to review their services, identifying the things people needed to do to achieve a successful outcome. We asked them to consider:

  • the levels of complexity involved
  • the potential time that would be needed to get something done
  • whether more than one person needed to be involved in completing the task in hand

It did not take long to realise 20% of the UK population would struggle to use some of the exemplar services because they lacked basic digital skills.

Back to the drawing board

After testing the scale with service managers, we quickly learned that others would benefit from using it if we provided some simple instructions on how to use it.

So, we asked user researchers working on the exemplar services to plot onto the scale the observed digital behaviour of the participants that they saw in their research sessions.

To avoid under- or over-estimating capability, we specifically asked them not to plot the digital skills participants said they had, but to focus on what the users actually did.

Being able to physically plot and visually highlight the gaps in user group representation raised interesting questions as to how the scale could be used to help teams be more inclusive in their recruitment activities.

An example of the scale used by the Department for Work and Pensions on Carer's Allowance. The scale shows the majority of users plotted between points 5 and 7, below the basic digital skill level

The Department for Work and Pensions used the scale to test the Carer’s Allowance service

What we learned about scaling the scale

As the usefulness of the scale became apparent and the appetite to use it increased, so did the need to iterate and provide firmer guidelines.

The benefits of the scale were easy to see. Teams could:

  • quickly see the breadth of digital capability people had, against the skills required to use the service
  • easily see gaps in their recruitment and use that to help refocus their recruitment efforts
  • talk confidently about research, recruitment and users in service assessments
  • have evidence-based conversations around potential options for assisted digital support

However, several factors have limited the ability to update and improve the scale.

For instance, with no defined axis criteria it lacks the ability to capture the nuances of attitudinal behaviour, accessibility, emotion, literacy and access to technology. And as the scale did not "belong" to a specific team, further iterations have been limited.

There are currently no plans to develop the scale further. Its simplicity has perhaps ensured its longevity, at least for now.

It’s encouraging to see it’s still being used in a variety of different ways and I’m very proud to have worked on it.

Have you used or adapted the scale? Do you have any insight to share? We’d love to hear from you!

Subscribe to this blog.

Original source – User research in government


What is content design? And would you like to work in it?

by Joanna Goodwin

Content design is one of the User-Centred Design (UCD) professions in government, alongside service design, graphic design and user research.

Content designers are responsible for creating, updating and reviewing content. They work around the end-to-end user journey and are comfortable using evidence, data and research to inform how content is structured. A content designer contributes to and uses the style guides and design patterns.

Read about the importance of content designers in Government on the GDS blog.

Content Design at the Office for National Statistics (ONS)

At the Office for National Statistics, we have two content designers and are recruiting for more!

Our content designers work in multidisciplinary teams to scope, write, edit, proof, clear and publish content that meets our diverse user needs. They work to ensure the content fits the user journeys and preferences, for a range of channels including the ONS website and online data collection points. They also research, develop and analyse audience insight to ensure all content is effective and constantly evaluated.

Kieran Forde, one of our content designers, works on the corporate website to continuously improve statistical outputs, including our stats bulletins. He has worked to make statistical bulletins shorter and clearer for the reader so that statistics can be better understood (quite important during an era of ‘fake news’ and uncertainty). Kieran has worked not only to reduce the amount of content on a page, but also to ensure there is clarity in what is being said. Keep up to date with Kieran’s work on our digital blog.

How does this fit with comms?

Many communications professionals working in digital communications love words and would thrive on reviewing and improving digital content that makes a real difference.

If you are a fellow ‘word nerd’, are passionate about clear written communications, or have a thirst for improving the standards of government content, why not think about a career in Content Design?

Interested in a job?

If you are interested in a content design role, you can find out more about what a content designer does in the DDaT job role profile.

We are currently hiring a content designer for the 2021 Census in ONS (Closing date: 27 Feb 2019). This truly unique work reaches every household and individual in England & Wales, and your work will help shape public policy and service provision for at least a decade. Apply for the content designer role on the Civil Service Jobs website.

Find out more

For more information on content design in government, view the GDS Blog.

Joanna Goodwin is delivery manager, business change, at the Office for National Statistics. You can connect with her at @joannagoodwin3

Image via NASA on The Commons

Original source – comms2point0 free online resource for creative comms people – comms2point0

GOV.UK Incident Report

We’ve posted before about what happens when things go wrong on GOV.UK, and how we classify and prioritise incidents.

Every incident teaches us something new about our technology or the way we communicate with each other. It also gives us the opportunity to improve our incident management process so we can minimise disruption to users in the future.

In May 2016 we committed to blogging about every severity 1 or severity 2 incident, as well as severity 3 incidents if they’re particularly interesting.

This post is a roundup of 2 incidents that affected GOV.UK’s analytics data in February 2018. These were severity 3 incidents – we’re blogging about them to show how we responded to cases affecting our analytics data.

12 February 2018 – error in Google Analytics page tracking

What happened

Between 12 February and 20 February 2018 the data on Google Analytics that identifies pages viewed on GOV.UK contained full URLs, including the host name. The data should just contain the short form URI. For example, instead of /vehicle-tax, the data contained https://www.gov.uk/vehicle-tax.

What users saw

External users of the site were not affected.

Internal government users of Google Analytics data saw the different version of the ‘page’ dimension in their reports. This would have been confusing. Some users might have seen a drop in views for the pages they were reporting on if their report had been configured in a way which excluded the long version of the page address.

What caused this

Work to clean the page information being sent to Google Analytics (by processing the URI to remove irrelevant extra data) had accidentally resulted in the processed data including the host name at the start.

How we responded

The issue only caused an obvious problem in certain configurations of analytics reports. Because of this, it was 3 days before someone spotted the issue. The problem was fixed the next day. Internal and departmental users were notified.

Steps taken to prevent this from happening again

The process for checking changes to tracking is being improved, including liaison with the analysts.

We are making changes to the way we process data from our integration and staging servers to make it easier to monitor changes.

We are developing automated processes for checking the quality of our analytics data.

20 February 2018 – error in Google Analytics metadata tracking

What happened

Between 20 February and 8 March 2018, the values for a subset of the metadata about page views and events sent to Google Analytics and stored as ‘custom dimensions’ were set as [object Object] instead of the correct value. Custom dimensions are attributes of page visits that have been defined by the Google Analytics user, who is often a data analyst.

What users saw

External users of the site were not affected.

The metrics of departmental Google Analytics users were affected for 17 days. For a subset of pageviews and events, all custom dimensions were recorded by Google Analytics as ‘[object Object]’.

In particular, departmental users rely on the ‘Organisations’ custom dimension to filter their reports to their own organisation’s content. From 20 February to 7 March, 4,388,684 page views were wrongly tracked as ‘[object Object]’ instead of their correct organisations – 3% of the total 143,302,095 pageviews that had an organisation custom dimension.

What caused this

Due to the way Google Analytics code is injected into GOV.UK pages, it’s often not included within our development environment. In this instance, this resulted in the failure of code that relies on access to Google Analytics libraries.

We currently include Google Analytics code in 2 separate places within GOV.UK’s codebase, and there were subtle differences between the 2 entries. When attempting to make these consistent, a bug was introduced which incorrectly set the values of custom dimensions to ‘[object Object]’ in one of the locations.

How we responded

The issue was noticed by internal analysts within 3 days, but it was believed to affect only one part of the site, so a partial fix was deployed. On 8 March a departmental user pointed out that the issue was affecting other areas, and it was fixed the same day.

It took so long to notice the extent of the problem because there are no automated tests on Google Analytics code changes and no formal process for checking changes. Changes to the tracking have become more frequent since 2017, but no process had been established because such changes used to be very rare.

The issue was not obvious without formal checking because it was intermittent (affecting about 3% of hits). Because correct data was mostly being sent, the issue did not stand out in reports at first.

Steps taken to prevent this from happening again

We are adding automated testing on analytics code changes.

We’re recommending that analysts report tracking issues to our second line support via established channels.

The most important lesson from this incident, and the associated one from 12 February 2018, is that we must recognise changes to the analytics code are now much more common than they were before 2017, and so we need to establish better processes.

Subscribe to updates from this blog.

Original source – Inside GOV.UK

What it means to move to a design mindset A mindset is how we respond to the world around us. It acts as a center of gravity. It goes with you. It shapes who you are and what you do, wherever […]

Original source – Ben Holliday

LocalGov Digital is the association for professionals working in and around local government digital, of which I’m currently Chair. We’re the voice of local government digital practitioners and offer a range of services to our members for free, to aid collaboration between councils to help deliver better, cheaper services. One such service is Pipeline.

Back in 2013 I wrote about a Kickstarter for Local Government.

The idea was to kick start collaboration between councils by creating a platform on which councils could publish details of the projects they were working on, and update people as they progressed. Its aim was to stop duplication – doing the same thing separately in hundreds of councils across the country – and so reduce waste and cost across the sector.

In 2014 I created the first iteration of Pipeline, built in C# on an open source wiki, and quickly roped in Ben Cheetham to help develop it with me.

The initial response was great, with over 50 councils signing up to the alpha and adding their projects. But while Pipeline demonstrated a desire for collaboration across local government, a gradual decline in its use also showed that, without continued development and support, initiatives such as this won’t achieve their full potential.

Doing the hard work to maintain interest in an initiative was the theme of my first ever blog post, and without the resources to keep promoting Pipeline, use declined further in 2016 and 2017.

In 2018 Pipeline was resurrected by Hackney, who invested their time and money in bringing it back to life and developing it to include portfolio management too. Pipeline’s activity feed used to show only sporadic updates over a month or so; now, some days, it can barely fit in a single day’s activity.

Which brings us to 2019, where the Ministry of Housing, Communities and Local Government (MHCLG) have committed up to £80k to develop Pipeline further. If you’re a member of LocalGov Digital then I suspect they’ll want to hear from you, so keep an eye on our website for any opportunities to get involved in shaping Pipeline.

From a side project to create a tool to aid collaboration, to a service used by councils to collaborate on their portfolios of work, I’m enthusiastic about the future for Pipeline. The more councils get involved, the more value there is to the sector, so if you haven’t already, please do get involved and help LocalGov Digital to facilitate the creation of better, cheaper local public services.

Original source – Lg/Www

Here at dxw digital, we love GOV.UK Notify. It’s a web service provided by the Government Digital Service (GDS) that allows the whole public sector to send emails, SMS and even letters. It’s free for email, and incredibly low cost for SMS and letters (your first 250,000 texts are free, which is brilliant for cash-strapped services that would otherwise have to use something like Twilio at 4p per SMS).

It’s really easy to integrate with, and there are a whole bunch of well-supported libraries for pretty much every popular programming language. In this blog post I’m going to go through a technique we used to make working with Notify even easier with our current tech stack. If you’re not technically minded, feel free to read no further. If you’re a developer (Ruby or otherwise), then read on…

We’re currently using Notify on the Teaching Vacancies Public Beta to allow applicants to receive alerts on new jobs. Notify is a perfect service for us to use, but we found ourselves with a couple of problems.

Notify provides support for some rudimentary templates. We can enter the email’s text and add variables in double brackets for personalisation – for example, if we enter ((first_name)) in the template, ((first_name)) will be replaced with the user’s first name when we send the email.

However, Notify’s templating language does not provide functionality for loops, so we can’t send multiple vacancies to the service and tell it to loop through them, outputting, for example, a job title, school and a link.

This led us to handle the logic in the Rails app itself. We built a simple template in Notify, with just a ((body)) variable and could then build a class that would take in the vacancies, build the email body and then send it as a variable to Notify.
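That approach looks roughly like this (a sketch only: the class name, template ID and vacancy fields below are illustrative placeholders, not the actual Teaching Vacancies code):

    require 'notifications/client'

    # Illustrative sketch: build the whole email body in Ruby, then send it
    # to Notify's single ((body)) variable via the personalisation hash.
    class VacancyAlertSender
      TEMPLATE_ID = 'replace-with-your-notify-template-id'.freeze # placeholder

      def initialize(api_key)
        @client = Notifications::Client.new(api_key)
      end

      def send_alert(email_address, vacancies)
        body = vacancies.map do |vacancy|
          "#{vacancy.job_title} at #{vacancy.school_name}\n#{vacancy.url}"
        end.join("\n\n")

        @client.send_email(
          email_address: email_address,
          template_id: TEMPLATE_ID,
          personalisation: { body: body }
        )
      end
    end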

This works pretty well, but as Teaching Vacancies is built in Rails, we lost some of the advantages of working with emails in Rails. Out of the box, Rails provides ‘mailers’, which are classes with methods that take in variables, build an email body using a templating language such as ERB or Haml (much like how Rails handles webpage views), and then send an email.

Rails mailers can also be hooked into ActiveJob (which is Rails’ framework for queuing and running background jobs). This means when a user does a particular action, we can tell a background job to send an email, rather than the web application itself. This speeds up the user experience, and also allows us to retry if the initial action fails for some reason.
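For illustration (AlertMailer and daily_alert are hypothetical names, not necessarily those used on Teaching Vacancies), switching to a background send is a one-word change:

    # Sends the email immediately, as part of the current web request
    AlertMailer.daily_alert(user).deliver_now

    # Enqueues an ActiveJob that sends the email in the background, so the
    # queue backend can retry it if delivery fails
    AlertMailer.daily_alert(user).deliver_later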

With this in mind, I decided to have a look and see if we could hook Notify into Rails’s mailers. The framework that provides support for mailers in Rails, called ActionMailer, gives us the ability to specify a delivery method for our emails.

Out of the box, Action Mailer provides support for four delivery methods: SMTP, Sendmail, File (which saves emails to a file), and Test (which pushes emails to an array, which we can inspect when writing tests). Moreover, you can also write your own delivery method. This is where I decided to make a start.

A delivery method only needs two things:

  1. An initializer, which initialises the class and takes some settings
  2. A method called ‘deliver!’ which actually sends the email.

A very simple delivery method could look like this:
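(A minimal sketch: the class name is an illustrative placeholder, and this version just prints the mail rather than sending it.)

    class SimpleDeliveryMethod
      attr_reader :settings

      # ActionMailer passes in the options hash given at configuration time
      def initialize(settings = {})
        @settings = settings
      end

      # ActionMailer calls this with the built Mail::Message object
      def deliver!(mail)
        puts "To: #{mail.to.join(', ')}"
        puts "Subject: #{mail.subject}"
        puts mail.body.to_s
      end
    end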

We can then configure our mailers in Rails to use our delivery method, like so:
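(A sketch, assuming the SimpleDeliveryMethod class above; the :simple symbol is an arbitrary name.)

    # e.g. in config/environments/development.rb
    ActionMailer::Base.add_delivery_method :simple, SimpleDeliveryMethod

    Rails.application.configure do
      config.action_mailer.delivery_method = :simple
    end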

This then makes it very easy to call an API, rather than just printing the mail as in the case above. In our case, we could grab the details from the mail and call the Notify API like so:
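(A sketch using the notifications-ruby-client gem; the api_key setting and the hard-coded template ID are placeholders, and passing the template ID properly is covered below.)

    require 'notifications/client'

    class NotifyDeliveryMethod
      # Registered like the example above, passing the API key as a setting:
      #   ActionMailer::Base.add_delivery_method :notify, NotifyDeliveryMethod,
      #                                          api_key: ENV['NOTIFY_API_KEY']
      def initialize(settings = {})
        @client = Notifications::Client.new(settings[:api_key])
      end

      def deliver!(mail)
        @client.send_email(
          email_address: mail.to.first,
          template_id: 'replace-with-your-notify-template-id', # placeholder
          personalisation: { body: mail.body.to_s }
        )
      end
    end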

This would pretty much work without any customisation to the ActionMailer mailers, but the one thing missing is how we tell Notify what template to use. When using ActionMailer, a mailer usually looks something like this:
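(For reference, a sketch with illustrative mailer and method names.)

    class AlertMailer < ApplicationMailer
      def daily_alert(user)
        @user = user # available to the ERB/Haml view that renders the body
        mail(to: user.email, subject: 'Your daily job alert')
      end
    end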

ActionMailer is very permissive, in that we can add any parameter to our call to mail, and the resulting mail object in our delivery_method has access to it. For example, if I called:
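(A sketch, reusing the hypothetical AlertMailer above, with foo as an arbitrary extra parameter.)

    # 'foo' is an arbitrary extra parameter, added purely for illustration
    mail(to: user.email, subject: 'Your daily job alert', foo: 'bar')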

When I called deliver! in my delivery method, I would have access to foo, like so:
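(A sketch: extra parameters end up as headers on the Mail::Message, so the delivery method can read them back.)

    def deliver!(mail)
      # Extra parameters passed to mail() become message headers
      mail[:foo].value # => "bar"
    end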

If I wanted to, I could just add a template_id to my call to mail and that would be that. But as things would break if we didn’t specify a template ID, I wanted template_id to be a required parameter, so I built my own mailer, which inherits from ActionMailer::Base:
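(A sketch of that mailer; the class and method names here are illustrative, and the published gem’s readme has the real interface.)

    class NotifyMailer < ActionMailer::Base
      # template_id is required; everything else is passed through to mail()
      # as normal headers
      def notify_mail(template_id, headers = {})
        raise ArgumentError, 'You must specify a Notify template ID' if template_id.nil?

        mail(headers.merge(template_id: template_id))
      end
    end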

This accepts two arguments, template_id and headers, then merges them into a single parameter and calls mail. This may seem like overkill, but it at least gives us confidence that we’ll have the template ID when we come to call the delivery method, and it gives us more scope for customisation later.

We can then use the mailers in our Rails application like so:
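(A sketch, assuming the NotifyMailer base class above; the template ID and names are placeholders.)

    class AlertMailer < NotifyMailer
      DAILY_ALERT_TEMPLATE_ID = 'replace-with-your-notify-template-id'.freeze

      def daily_alert(user, vacancies)
        # Rendered into the email body by the mailer view, which the delivery
        # method then sends as Notify's ((body)) personalisation
        @vacancies = vacancies
        notify_mail(DAILY_ALERT_TEMPLATE_ID, to: user.email,
                                             subject: 'Your daily job alert')
      end
    end

    # Deliver in the background via ActiveJob, as usual
    AlertMailer.daily_alert(user, vacancies).deliver_later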


You can check out the readme on GitHub here (some of the code in this blog post is simplified for brevity), and it’s published to RubyGems, so you can use it yourself today. Feel free to submit bugs, PRs etc, and let me know how you get on with it if you use it anywhere!

The post Notify and Rails – sitting in a tree appeared first on dxw digital.

Original source – dxw digital

As part of our work developing the use of digital participatory budgeting (PB) in Scotland, Demsoc is shining a light on some of the innovative ways other people across the world use digital PB. We recently shared a blog post about digital PB in New York City. This time we’re doing things a little differently […]

Original source – The Democratic Society

Sarah Stewart with Oliver Dowden in his Whitehall office

 

In the latest episode of the GDS podcast, senior writer Sarah Stewart talks to Minister for Implementation Oliver Dowden.

He explains his passion for emerging technology, his plan to encourage its adoption in government and how teaching in rural Japan helped equip him for the role.

They also discuss the highlights from the past year, his work on the emerging technology strategy and his surprising first job in tech.

You can subscribe to the GDS podcast on Apple Music and all other major podcast platforms.

You can read transcripts of all our podcast episodes on Podbean.

Subscribe for blog updates.

Original source – Government Digital Service

Hiya, I’m Katie. I started at dxw digital last month as a user researcher. I’m originally from Nebraska, in the States, but I moved to London eight years ago with my family.

I’m a bit of a science geek and studied for a degree in Biology at Queen Mary University of London. I focussed on a mix of behavioural psychology & memory and digging up dinosaurs. After graduating, I joined the civil service fast stream, where I fell in love with user research.

I’m a passionate foodie and coffee addict (as well as an aspiring food blogger), who’s always up for trying new places. I’m an avid reader. I love fantasy and horror books (despite being terrified of any remotely scary movies). My goal for the year is to read every book ever written by Stephen King.

I’m really happy to be at dxw digital, furthering my skills and learning from some truly amazing user researchers! I’m particularly passionate about accessibility and assisted digital research. I’m excited to be working with dxw to help public sector organisations design services that are easy for everyone to use.

 

The post Introducing Katie appeared first on dxw digital.

Original source – dxw digital