A website redesign is one of the most significant investments a business can make in its digital presence. Yet too many teams approach it as a purely visual exercise, swapping colors and typefaces while ignoring the people who actually use the site. At Kosmoweb, we learned early on that the difference between a redesign that performs and one that flops almost always comes down to whether real users were involved in the process.
Why User Testing Matters
Assumptions are the quiet killers of good design. Every designer and developer carries mental models about how people interact with websites, and those models are often wrong. We once redesigned a travel agency's booking flow based entirely on internal feedback. The team was confident the new layout was cleaner and more intuitive. When we finally put it in front of actual customers, we discovered that the majority of users couldn't find the date picker because it blended into the background of a hero image the team loved.
User testing strips away guesswork. It reveals friction points that analytics alone cannot explain. A high bounce rate tells you something is wrong; watching a user squint at your navigation menu tells you exactly what. Testing also builds alignment across stakeholders. When a marketing director watches a customer struggle with a layout they championed, the conversation shifts from opinion to evidence.
Getting Started with User Testing
You do not need a lab, a two-way mirror, or a six-figure budget. Start by defining what you want to learn. Are you testing whether users can complete a specific task, like submitting a contact form? Or are you exploring broader questions about how people perceive your brand? The scope of your questions determines the format of your tests.
Recruit participants who reflect your actual audience. If your site serves mid-level procurement managers at manufacturing firms, testing with your coworkers will not yield useful data. We typically recruit five to eight participants per round. Research by the Nielsen Norman Group has shown that this range uncovers most major usability issues without excessive cost. Tools like UserTesting, Maze, or even a simple video call can facilitate remote sessions if in-person testing is not practical.
Prepare a script with clear tasks but leave room for organic exploration. If every second is choreographed, you will miss the unexpected behaviors that reveal the deepest insights.
Conducting Tests: Our Approach
At Kosmoweb, we structure each session around three to five core tasks that map to key user journeys. We ask participants to think aloud as they navigate, narrating their expectations and reactions in real time. The facilitator stays neutral, resisting the urge to guide or explain. Silence can feel uncomfortable, but it is where the most honest feedback lives.
We record every session with the participant's consent, capturing both the screen and their facial expressions. Post-session, we tag moments of confusion, delight, hesitation, and error. These tags feed into an affinity map that groups issues by theme rather than individual opinion. A single user's frustration might be an outlier; four users stumbling at the same step is a pattern that demands action.
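The tagging-and-grouping step above is essentially a small data exercise: collect tagged moments per participant, group them by theme, and flag any theme that several participants hit. The sketch below shows one way to do that in Python; the tag names, data shapes, and the threshold of three participants are illustrative assumptions, not a fixed methodology.

```python
from collections import defaultdict

def build_affinity_map(moments, min_participants=3):
    """Group tagged session moments into themes and flag recurring patterns.

    moments: list of (participant_id, step, tag) tuples, e.g.
             ("p1", "date_picker", "confusion").  (Hypothetical shape.)
    Returns (themes, patterns): themes maps (step, tag) -> set of
    participant ids; patterns lists themes hit by at least
    `min_participants` distinct people, most widespread first.
    """
    themes = defaultdict(set)
    for participant, step, tag in moments:
        themes[(step, tag)].add(participant)

    patterns = [
        (step, tag, len(people))
        for (step, tag), people in themes.items()
        if len(people) >= min_participants
    ]
    patterns.sort(key=lambda item: -item[2])  # widest pattern first
    return themes, patterns

# A single user's frustration stays below the threshold (an outlier);
# four users stumbling at the same step surfaces as a pattern.
moments = [
    ("p1", "date_picker", "confusion"),
    ("p2", "date_picker", "confusion"),
    ("p3", "date_picker", "confusion"),
    ("p4", "date_picker", "confusion"),
    ("p1", "nav_menu", "hesitation"),  # outlier: only one participant
]
themes, patterns = build_affinity_map(moments)
```

Counting distinct participants rather than raw tag occurrences is the key design choice: it keeps one vocal tester from inflating a theme.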
Real Examples from Our Team
During a redesign for a Prague-based language school, we tested a prototype of the course catalog page. Three out of five participants ignored the filter sidebar entirely and instead used the browser's built-in search function to find specific courses. The sidebar was well-designed in isolation, but users had already formed a habit of using Ctrl+F on content-heavy pages. We responded by adding an inline search bar at the top of the catalog, which reduced the average time to find a course from over ninety seconds to under twenty.
In another project for a SaaS dashboard, testing revealed that users consistently looked for account settings in the top-right avatar icon, even though our design placed them in a left-hand sidebar. Rather than trying to retrain user behavior, we moved the settings link to match the mental model that already existed. Adoption of the settings page increased measurably in the first week after launch.
What to Do with the Feedback
Raw feedback is not a to-do list. It requires interpretation and prioritization. We categorize findings into three tiers: critical issues that block task completion, moderate issues that cause frustration but allow users to proceed, and minor issues that affect polish but not function. Critical issues get fixed before the next testing round. Moderate issues enter the design backlog. Minor issues are addressed during the final QA phase.
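The three-tier triage above can be expressed as a simple routing function. This is a minimal sketch of the logic as described in the text; the field names (`blocks_task`, `causes_frustration`) and data shapes are assumptions for illustration.

```python
# Where each tier goes, per the process described above.
TIER_ROUTING = {
    "critical": "fix before next testing round",
    "moderate": "design backlog",
    "minor": "final QA phase",
}

def triage(findings):
    """Sort findings into critical / moderate / minor buckets.

    findings: list of dicts like
        {"issue": "...", "blocks_task": bool, "causes_frustration": bool}
    (hypothetical shape).
    """
    buckets = {tier: [] for tier in TIER_ROUTING}
    for finding in findings:
        if finding.get("blocks_task"):
            buckets["critical"].append(finding["issue"])   # blocks completion
        elif finding.get("causes_frustration"):
            buckets["moderate"].append(finding["issue"])   # annoying but passable
        else:
            buckets["minor"].append(finding["issue"])      # polish only
    return buckets

buckets = triage([
    {"issue": "date picker invisible", "blocks_task": True},
    {"issue": "filter labels unclear", "causes_frustration": True},
    {"issue": "footer spacing off"},
])
```

The point of writing the rules down, even this simply, is that the team argues once about the tier definitions instead of re-litigating priority for every individual finding.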
Share findings with the entire project team, not just the designers. When developers understand why a change is being made, they build it with greater care. When stakeholders see the evidence firsthand, they are more willing to let go of pet features that tested poorly. We compile a short highlight reel from each testing round, usually five to seven minutes, that captures the most telling moments. This artifact often becomes the most persuasive document in the entire project.
Best Practices to Follow
Test early and test often. A rough wireframe tested in week two is more valuable than a polished prototype tested in week twelve. Keep sessions short, ideally under forty-five minutes, to maintain participant focus. Compensate participants fairly for their time; it signals respect and attracts more engaged testers.
Avoid leading questions. Instead of asking "Was that easy?" ask "How would you describe that experience?" Document everything, even the things that seem obvious in the moment, because details fade quickly. Finally, treat user testing as an ongoing practice, not a one-time event. The best websites are shaped by continuous feedback long after the redesign has launched.