How we create and manage content on our website
User testing
Testing content with real users can give you valuable insights into:
- how people understand and interpret content
- whether people can find the content
- whether the content meets the need you've identified
When to do user testing
You can test with users at different points in the content lifecycle. You don't need a finished product to get feedback from users. You could test with:
- a paper prototype (sketch) of your content layout
- paper prototypes of several pages in a user journey, to get feedback on how users navigate through content and what they expect to see
- a draft information architecture, such as a spreadsheet or site map
- a list of titles or headings
- a search engine, by asking people to search for certain content by typing or speaking
- a prototype webpage (for example, built in software such as Figma) where you can ask users to perform certain tasks
- a live webpage
How to test content with users
You can test content in many different ways depending on what you need to find out (your research question). Testing can involve listening, observing and asking questions. Make sure that your users can participate fully in testing. If you are using a prototype, check in advance that users can access the software.
Content testing often involves:
- usability testing - asking users to perform a certain task, such as finding out when their next bin collection is
- accessibility testing - testing with assistive technologies or carrying out an accessibility audit
- card sorts - asking users to group content, tasks or themes to help you create an information architecture based on their needs
- highlighter testing - asking users to underline words or phrases in different colours to show how the content makes them feel, for example highlighting words that make them feel anxious or unsure in red
- A/B testing - asking users to read or interact with two different versions of a page (see the sketch below)
You can create your own test or use online testing software such as Maze.
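The A/B testing described above can be run by showing two versions in a research session, or by splitting visitors between two versions of a live page. For the live-page case, here is a minimal sketch in Python of one common splitting approach; the visitor ID and variant names are placeholders, not part of any particular testing tool.

```python
import hashlib

VARIANTS = ["A", "B"]  # the two versions of the page

def assign_variant(visitor_id: str) -> str:
    """Assign a visitor to one version of the page.

    Hashing the visitor ID keeps the assignment stable, so a
    returning visitor always sees the same version.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Roughly half of all visitors will see each version
print(assign_variant("visitor-123"))  # "A" or "B", stable for this visitor
```

Testing platforms such as Maze handle this kind of split for you; you would only need something like this if you are running the comparison on your own pages.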
User testing checklist
Good user testing involves a lot of preparation. It's important that your users feel relaxed and are not worried about 'getting things wrong'. Ideally, you'll create an environment where users can behave and react as they would in their normal life.
If you are testing remotely, practise the test in advance and make sure your links work. Some users may be new to the technology you're using, so try to make sure that everything works smoothly on the day.
This is adapted from a checklist used by one of the service designers who worked on the Buckinghamshire Council website.
1 or 2 weeks before testing
1 or 2 weeks before testing, you should:
- define your overall research aim or question and share it with the team
- agree a research approach in line with the research aim
- confirm the date of testing and any follow-up findings workshop with the client and team
- confirm incentives (such as payment, catering or vouchers for testers)
- confirm observers (the people who watch and record user behaviour)
- agree recruitment criteria with the team (what kind of users you want to test with)
- send out consent forms to participants
- create the prototype
- create a test script, based on your objectives
- set up the test on Lookback, Optimal Workshop or Maze (user testing platforms)
- send out links to observers, and ask observers to help with note-taking if required
- run a pilot test (check timings, questions and technical issues)
- prepare a back-up tool in case users can't or won't use the testing platform
Day before testing
The day before testing, you should:
- check all the links in your prototype are working (see the sketch after this list)
- send a reminder message to participants
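You can check links by hand, but if the prototype has many pages a short script saves time. This is a minimal sketch in Python using the requests library; the links listed are placeholders for your own prototype URLs.

```python
import requests

# Placeholder links - replace with the pages in your prototype
LINKS = [
    "https://example.com/page-1",
    "https://example.com/page-2",
]

for url in LINKS:
    try:
        response = requests.get(url, timeout=10, allow_redirects=True)
        # Status codes under 400 mean the page loaded
        result = "OK  " if response.status_code < 400 else "FAIL"
        print(f"{result} {response.status_code} {url}")
    except requests.RequestException as error:
        # Covers timeouts, DNS failures and connection errors
        print(f"FAIL --- {url} ({error})")
```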
After testing
After testing, you should:
- collect notes from observers (if any)
- enter findings into a collaborative workspace (such as Miro) and actions into a project tracking tool (such as Trello)
- sort out incentives and expenses
- create insight report of recommendations if needed
- present insights report
Iterating your content based on feedback
The best way to build good content and services is to start small and iterate repeatedly. After testing with users, iterate live content by adding features, deleting things that don't work and making refinements based on feedback.
Iteration reduces risk. It lets you learn from small failures and avoid big ones.
You might need to iterate your content several times during the design process. This can be based on feedback from:
- peer review
- group crit
- user testing
It could also be the result of new user research, changes to policy or service information, or changes to other content that is part of a related user journey.
Iteration is a good thing. You can always learn more about user needs. You might draft your content, go through a group crit and peer review, then discover in user testing that your content is not solving the right problem for users. You might have to go back to the beginning and do more research, or you might only need to make small changes. Content design is not a linear process.
Improving your content
Even after your content is published, you can continue to iterate based on feedback and data about how users interact with your content (such as Google Analytics data). It's important that you keep learning and making improvements so that your content meets user needs.
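As an illustration, this is a sketch of pulling page-level usage data with Google's GA4 Data API and its Python client library (the google-analytics-data package). It assumes your site uses Google Analytics 4 and that Google Cloud credentials are set up in your environment; the property ID and choice of metrics are placeholders.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange,
    Dimension,
    Metric,
    RunReportRequest,
)

client = BetaAnalyticsDataClient()  # uses your Google Cloud credentials

# Page views and average session duration per page, over the last 28 days
request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="pagePath")],
    metrics=[
        Metric(name="screenPageViews"),
        Metric(name="averageSessionDuration"),
    ],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
)

response = client.run_report(request)
for row in response.rows:
    page = row.dimension_values[0].value
    views = row.metric_values[0].value
    duration = row.metric_values[1].value
    print(page, views, duration)
```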
Whenever you iterate your content, you should ask for another peer review (unless you are making very small changes, such as a change to a date or deadline). It is important to check that you are still meeting user needs.
Help and support
Contact the Web Team at [email protected] if you need support to:
- plan user testing or research questions
- recruit users
- make a suggestion for something we need to include in this guidance
You can find more guidance on user testing in the content design resources in the GOV.UK Service Manual.