Design: Unmoderated UX Test

A guide to building and running an unmoderated usability test with remote participants. Access the GitHub repo.
product-design; UX; research; remote; usability;

1. Build and test your prototype

  • Pick one device or viewport to build the prototype on.
    • Consider the context your users are likely to be in and which device they use more often. User behavior data can help you with this decision.
  • Keep the prototype scope as narrow as possible.
    • Focus on the main questions you want to answer and don't build too many different flows into it.
  • Choose the prototyping and testing tools that are most suited for your experiment.
  • Write the task instructions (either on the UI itself or on the testing platform you’re using).
  • Optional: build a short form asking for further feedback from participants after the test.
    • People are less likely to think aloud when they’re going through a task alone. Two or three questions may help you get inside the participants' minds.

2. Test your prototype for glitches

  • Send your test link to some coworkers and ask them to follow the instructions you wrote, to make sure everything is working properly.
  • Record how long the test takes, and use it as an estimate when letting your real participants know how long it might take.
  • Get feedback on the prototype, the task instructions, the tone you set as a facilitator, and any technical aspects.
  • Fix glitches and make all necessary adjustments.

3. Source participants

This can be a task for the designer or other stakeholders (product owner, CEO, CMO), depending on the organizational structure you’re dealing with. Participants can be current users or part of the product’s target demographic.
  • When sourcing participants, try to stay as close as possible to your real user base.
    • Watch for biases that might skew your test results. For example, you don’t want only young, tech-savvy people if your user base is mainly composed of boomers.
  • Make sure all your participants will be able to do the test on the device you’ve designed it for, and are familiar with it.
  • Source 10 to 20 participants.
    • Take advantage of the scale possibilities of an unmoderated test. Also, keep in mind that not all of them will actually do it, so it’s nice to have more people than you need.
  • Make clear to participants that they will not be the ones under scrutiny.
    • What is being tested is your product, and their honest feedback is the primary goal of the test session.
  • Give the test participants a time frame.
    • They will be able to do the test in their own time, but it’s not efficient for you to wait too long between answers. In our experience, one week is enough for everyone to find some time to do the test.
  • During that time frame, be responsive and check your email often.
    • Glitches may occur during tests. If any participant reports them through email, it means they were engaged in the assignment. Find out what happened and try to fix it.
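The over-recruiting advice above comes down to simple arithmetic: if you need a certain number of completed sessions and expect only a fraction of invitees to finish, invite accordingly. A minimal sketch — the 50% and 70% completion rates below are illustrative assumptions, not figures from this guide:

```python
import math

def invites_needed(target_sessions: int, completion_rate: float) -> int:
    """How many participants to invite so that, at the assumed
    completion rate, you still expect `target_sessions` finished tests."""
    return math.ceil(target_sessions / completion_rate)

# Assumed completion rates; adjust to your own panel's history.
print(invites_needed(10, 0.7))  # -> 15
print(invites_needed(10, 0.5))  # -> 20
```

With typical drop-off, hitting the "10 to 20 participants" range usually means inviting toward the top of that range or beyond.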

4. Analyze test results

  • Don’t wait too long to analyze your test results.
    • Ideally, do it within one or two days after the test period has passed.
  • Check how many participants completed the test and estimate how long it will take to watch all the recordings.
  • Schedule twice that time with other stakeholders (project manager, product owner, tech lead) to watch and analyze the sessions together.
    • The impact of getting instant feedback from real people on a product still under development is powerful and can help the entire team make better decisions about it.
  • Take notes for each participant while you’re watching.
    • We use this template to rank task completion and consolidate insights quickly.
  • If you added a feedback survey at the end of the test, analyze the responses as well.
    • This step doesn’t need to be done together with all the stakeholders but must be consolidated as part of the test results.
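The scheduling rule above (block twice the raw watch time) can be sketched as a quick calculation. The recording count and average length here are hypothetical examples:

```python
def review_slot_minutes(n_recordings: int, avg_minutes: float,
                        buffer_factor: float = 2.0) -> float:
    """Total time to block with stakeholders: raw watch time for all
    recordings, multiplied by a buffer for note-taking and discussion
    (this guide suggests scheduling twice the watch time)."""
    return n_recordings * avg_minutes * buffer_factor

# e.g. 12 recordings of roughly 8 minutes each
print(review_slot_minutes(12, 8))  # -> 192.0 minutes
```

The 2x buffer is deliberate: pausing to take notes and discuss each session roughly doubles the time a recording takes to get through.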

5. Follow-up

  • Send an email thanking participants personally for their help.
    • Be thoughtful and reference some input they actually gave you during the test. You can pull those notes from the spreadsheet.
  • Use your real-time records to identify points of improvement in the UI.
    • As a rule of thumb, if more than 30% of participants had difficulties at the same point of the test, the UI needs to be fixed.
  • Present your findings to the development team and to stakeholders who weren’t present during the process. Make new product decisions based on them.
  • Track any UI adjustments as new tasks, and prioritize them on your sprint with the Project Manager.
  • After the feature is live, send a follow-up email to your test participants, showing them how the product has changed or improved based on their insights.
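The 30% rule of thumb above is easy to automate once task results are tabulated. A hypothetical sketch, assuming you record one pass/fail result per participant for each task (the task names and outcomes below are made up for illustration):

```python
def tasks_needing_fixes(results: dict[str, list[bool]],
                        threshold: float = 0.3) -> list[str]:
    """Flag tasks where more than `threshold` of participants failed.
    `results` maps a task name to one pass/fail bool per participant."""
    flagged = []
    for task, outcomes in results.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if failure_rate > threshold:
            flagged.append(task)
    return flagged

results = {
    "find pricing page": [True, True, False, False, False],  # 60% failed
    "complete signup":   [True, True, True, True, False],    # 20% failed
}
print(tasks_needing_fixes(results))  # -> ['find pricing page']
```

Each flagged task then becomes a candidate UI fix to track and prioritize in the sprint, as described above.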

Yay! You completed the checklist top to bottom!
Now spread the ❤︎ by thanking the author, making improvements, or creating your own checklist!