How to Conduct Usability Testing: A Step-by-Step Guide to Improving UX Through Real User Feedback

📙 Chapter 3: Running the Usability Test

🔹 Introduction

After setting clear goals, preparing a solid test plan, and recruiting the right users, it’s time to conduct the usability test. This chapter focuses on executing the test itself: moderation techniques, observing user behavior, capturing data, maintaining neutrality, and keeping sessions ethical and effective.

Your job during a usability test is to facilitate—not influence—so participants can naturally interact with your product. The more natural and unbiased the session, the more authentic the feedback.


🔹 The Usability Testing Workflow

General Phases of a Usability Test Session

  1. Welcome and Setup
  2. Intro and Consent
  3. Warm-up Questions
  4. Task Execution
  5. Observation and Note-Taking
  6. Debrief and Post-Test Interview
  7. Data Storage and Backup

🔹 Setting the Stage

🧑‍💻 Environment Setup

For in-person:

  • Quiet room with minimal distractions
  • Dual monitors (if needed) for screen observation
  • Camera and microphone positioned properly

For remote:

  • Stable internet connection
  • Screen share and session recording tools (Zoom, Lookback, Maze)
  • Contingency plan in case of tech failure

🛠 Tools to Have Ready

| Tool Type | Recommended Tools |
| --- | --- |
| Screen sharing | Zoom, Microsoft Teams, Google Meet |
| Session recording | Lookback, OBS, Maze |
| Note-taking | Notion, Airtable, Miro, Google Docs |
| Timer | UXtweak, Maze, stopwatch app |


🔹 Moderator Roles and Responsibilities

A usability session typically includes:

  • Moderator: Conducts and guides the session
  • Note-taker (optional): Captures data unobtrusively
  • Observer(s): Stakeholders watching silently or remotely

Moderator Best Practices

  • Remain neutral: Avoid praise or correction
  • Use open-ended prompts: “What are you thinking here?”
  • Avoid yes/no questions
  • Don't intervene unless absolutely necessary
  • Keep users comfortable and reassured

🔹 Facilitating the Test

1. Greeting and Rapport Building

Start with friendly conversation to put participants at ease. Confirm tech setup (sound, video, screen share).

Example Script:

"Thanks for joining today. We'll be walking through some tasks—this isn't a test of your abilities, but of our design."


2. Informed Consent

Explain recording, confidentiality, and participant rights. Get verbal or written agreement before proceeding.


3. Warm-Up Questions

Ask basic demographic or behavior-related questions:

  • How often do you use similar apps?
  • What’s your biggest challenge with [domain]?
  • Can you walk me through your typical process?

These answers provide context for interpreting the participant’s behavior during the session.


4. Delivering Tasks

Present one task at a time. Example:

“Imagine you're a freelancer looking to track your income—try finding the dashboard where you can view monthly earnings.”

Tips:

  • Pause after delivering the task
  • Let users speak freely
  • Avoid guiding the user toward the “correct” answer

5. Encouraging Think-Aloud Protocol

The "think aloud" method gives insight into user cognition:

  • Ask them to verbalize thoughts, doubts, expectations
  • Gently remind them if they go silent

Prompt Examples:

  • “What are you looking for right now?”
  • “What do you think this button will do?”

6. Managing Difficult Situations

| Challenge | How to Handle |
| --- | --- |
| User is silent or nervous | Reassure and encourage without pressure |
| User asks for help | Respond with “What would you do if I weren’t here?” |
| Tech failure (remote session) | Pause, troubleshoot, or reschedule as needed |
| User is off-topic or chatting | Politely redirect focus to the current task |


🔹 Observing and Collecting Data

🔍 What to Observe

  • Task success/failure
  • User hesitation or backtracking
  • Comments or confusion
  • Facial expressions or tone of voice (in-person/recorded)
  • Unexpected paths

📝 Note-Taking Techniques

| Field | Example Notes |
| --- | --- |
| Task #1 Outcome | Completed but clicked 4 wrong items first |
| Error or confusion | “Not sure what 'billing email' means” |
| Suggested improvement | Wants a tooltip for icon label |
| Quote | “I didn’t expect this to open a new tab” |
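
If you take notes digitally, a lightweight structured record keeps observations comparable across participants and speeds up later synthesis. Here is a minimal sketch assuming a Python workflow; the field names simply mirror the table above and are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One structured note captured during a usability session."""
    task_id: str                      # e.g. "T-001"
    participant: str                  # e.g. "P-03"
    outcome: str                      # e.g. "Completed but clicked 4 wrong items first"
    confusion: Optional[str] = None   # e.g. "Not sure what 'billing email' means"
    suggestion: Optional[str] = None  # participant-suggested improvement
    quote: Optional[str] = None       # memorable verbatim quote

note = Observation(
    task_id="T-001",
    participant="P-03",
    outcome="Completed but clicked 4 wrong items first",
    quote="I didn't expect this to open a new tab",
)
```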


🔹 Recording and Data Handling

Record every session (with the participant’s permission). Store the files securely and organize them using the following; a short file-naming sketch appears after the list:

  • Video files (named by participant/test type)
  • Time-stamped notes
  • Task success/failure logs
  • Observations in spreadsheet or Airtable
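
A consistent naming convention makes recordings easy to retrieve weeks later. Below is a minimal sketch of one possible convention (a date folder, then participant and test type in the filename); the layout and `.mp4` extension are assumptions, not a prescribed standard.

```python
from datetime import date
from pathlib import Path

def recording_path(root: str, participant: str, test_type: str) -> Path:
    """Build a predictable, sortable path for a session recording,
    e.g. recordings/2024-05-14/P-03_checkout-flow.mp4 (illustrative)."""
    folder = Path(root) / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)  # create the day's folder if missing
    return folder / f"{participant}_{test_type}.mp4"

print(recording_path("recordings", "P-03", "checkout-flow"))
```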

Suggested Log Table:

| Task ID | Participant | Time Taken | Success | Notes |
| --- | --- | --- | --- | --- |
| T-001 | P-03 | 0:42 |  | Clicked wrong button first, confused |
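
If you track results in a plain file rather than Airtable, the same log can be appended row by row after each task. A minimal sketch assuming the columns above; the example values are illustrative.

```python
import csv
from pathlib import Path

LOG_FILE = Path("usability_log.csv")
FIELDS = ["task_id", "participant", "time_taken", "success", "notes"]

def log_task(task_id: str, participant: str, time_taken: str,
             success: bool, notes: str) -> None:
    """Append one task result, writing the header row on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"task_id": task_id, "participant": participant,
                         "time_taken": time_taken, "success": success,
                         "notes": notes})

log_task("T-001", "P-03", "0:42", True, "Clicked wrong button first, confused")
```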


🔹 Debriefing the Participant

After all tasks, conduct a short interview. Example questions:

  • What did you find most difficult today?
  • Was anything unexpected or confusing?
  • How did the interface make you feel?
  • What suggestions do you have for improvement?

Give users a chance to speak freely at the end. Thank them and share any promised incentive.


🔹 Synthesizing Session Results

After the test:

  • Review recordings and notes
  • Tag issues by severity and frequency
  • Categorize into navigation, UI, content, or technical

Use frameworks like the Rainbow Spreadsheet or Affinity Mapping to cluster insights.
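
Once notes are structured, a first pass at tagging can even be scripted. The sketch below ranks issues by severity and then by how many observations mention them; the 1–4 severity scale, category names, and data are assumptions for illustration.

```python
from collections import Counter

# (category, severity 1-4, short issue description) -- illustrative data
observations = [
    ("navigation", 3, "Could not find monthly earnings dashboard"),
    ("content",    2, "'Billing email' label unclear"),
    ("navigation", 3, "Could not find monthly earnings dashboard"),
    ("ui",         1, "Expected link to open in same tab"),
]

# Frequency: how many observations surfaced the same issue
frequency = Counter(issue for _, _, issue in observations)

# Deduplicate, then rank by severity first and frequency second
ranked = sorted(set(observations), key=lambda o: (-o[1], -frequency[o[2]]))
for category, severity, issue in ranked:
    print(f"[sev {severity}] ({frequency[issue]}x) {category}: {issue}")
```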


🔹 Common Moderation Mistakes

| Mistake | Impact |
| --- | --- |
| Talking too much | Inhibits participant expression |
| Leading questions | Influences responses, skews data |
| Overexplaining tasks | Eliminates natural problem-solving behavior |
| Failing to ask follow-ups | Misses deeper insights |
| Not recording | Risk of losing data or misinterpretation |


🔹 Checklist: Running a Test Session

| Step | Details |
| --- | --- |
| Participant joins session | Verify connection, sound, screen sharing |
| Consent confirmed | Get verbal or written approval to record |
| Warm-up questions | Ask 2–3 to set context |
| Tasks presented | Deliver clearly and neutrally |
| Think-aloud encouraged | Remind participant gently throughout |
| Observation ongoing | Note reactions, issues, hesitations |
| Debrief conducted | Gather qualitative feedback and insights |
| Session wrapped | Thank participant and document findings |


🔹 Summary

Running a usability test is both an art and a science. It requires neutrality, focus, empathy, and discipline to observe without interfering.

By sticking to a structured process—welcoming, guiding, observing, recording, and debriefing—you ensure your team gets honest, valuable feedback directly from users. The insights gained here become the foundation for iteration, innovation, and impactful design.


In the next chapter, we’ll explore how to analyze test results and translate raw user behavior into clear, prioritized action steps for your design and development team.

FAQs


1. What is usability testing in UX design?

Usability testing is a user research method where real users are observed as they attempt to complete tasks on a product to evaluate its ease of use, functionality, and overall user experience.

2. How many users are needed for a usability test?

According to usability expert Jakob Nielsen, testing with 5 users typically reveals about 80% of usability issues, making it a practical number for early testing.
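
The figure traces back to Nielsen and Landauer’s model: the share of problems found by n users is approximately 1 − (1 − L)^n, where L is the proportion of issues a single user uncovers (about 31% in their data). A quick check:

```python
def share_found(n_users: int, l: float = 0.31) -> float:
    """Nielsen-Landauer estimate of the share of usability problems
    surfaced by n users, each finding a fraction l on average."""
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n} users -> {share_found(n):.0%}")
# With l = 0.31, five users surface roughly 84% of issues.
```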

3. What is the difference between moderated and unmoderated usability testing?

Moderated testing involves a facilitator guiding the participant, often in real-time, while unmoderated testing is conducted without direct oversight, usually through automated tools or platforms.

4. When should usability testing be conducted in the design process?

Usability testing should be conducted at multiple stages—during early wireframes, prototype development, before launch, and even post-launch to ensure continuous improvement.

5. What tools are commonly used for usability testing?

Tools like UserTesting, Maze, Lookback, Optimal Workshop, and Hotjar are commonly used to run usability tests, gather recordings, and analyze user behavior.

6. What are some key metrics in usability testing?

Important usability metrics include task success rate, time on task, error rate, satisfaction score, and qualitative feedback from users.
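
Most of the quantitative metrics fall straight out of the task log. A minimal sketch, assuming one record per task attempt with illustrative values:

```python
# One record per task attempt -- illustrative data
attempts = [
    {"task": "T-001", "success": True,  "seconds": 42, "errors": 1},
    {"task": "T-001", "success": False, "seconds": 95, "errors": 3},
    {"task": "T-001", "success": True,  "seconds": 38, "errors": 0},
]

success_rate = sum(a["success"] for a in attempts) / len(attempts)
avg_time     = sum(a["seconds"] for a in attempts) / len(attempts)
error_rate   = sum(a["errors"] for a in attempts) / len(attempts)

print(f"Task success rate:  {success_rate:.0%}")  # 67%
print(f"Avg time on task:   {avg_time:.0f}s")     # 58s
print(f"Errors per attempt: {error_rate:.1f}")    # 1.3
```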

7. What should be included in a usability test plan?

A usability test plan typically includes the objective, target audience, task scenarios, success criteria, tools used, facilitator script, and post-test debrief questions.
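
Keeping the plan in a structured, machine-readable form makes it easy to reuse across test rounds. A minimal sketch with entirely illustrative values; the keys simply mirror the components listed above.

```python
test_plan = {
    "objective": "Can freelancers find the monthly earnings dashboard?",
    "target_audience": "Freelancers who invoice clients monthly",
    "task_scenarios": [
        {"id": "T-001",
         "scenario": "Find the dashboard showing monthly earnings",
         "success_criteria": "Reaches dashboard in under 60s without help"},
    ],
    "tools": ["Zoom", "Lookback", "Google Docs"],
    "facilitator_script": "scripts/session_intro.md",  # hypothetical path
    "debrief_questions": [
        "What did you find most difficult today?",
        "Was anything unexpected or confusing?",
    ],
}
print(test_plan["objective"])
```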

8. How do you recruit users for usability testing?

Users can be recruited via email lists, testing platforms, social media, or customer databases, and they should represent the target demographic of the product.

9. Can usability testing be done remotely?

Yes, remote usability testing is increasingly popular and effective, allowing researchers to gather insights from users across various locations using tools like Zoom, Maze, or UserZoom.

10. What’s the next step after collecting usability test data?

After testing, synthesize your findings, prioritize issues by severity, share insights with the team, and implement design improvements based on the feedback.