Nielsen Norman Group’s latest blog post, “Selecting an Online Tool for Unmoderated Remote User Testing”, outlines unmoderated remote user testing and a selection of tools that support it. It covers recording user audio and video, recruiting issues, task writing, and results timing.
What would happen if we did no user research at all? Let’s consider how it would impact most teams making sites and apps.
1. We’d have to rely on other means of making important decisions.
Without user research data, we have to rely on our analytics reporting (if it’s set up) and on support emails directly from users. Both of these things are valuable, but for many design choices we’ll be left extrapolating from a small and biased set of data.
2. We’d have to rely on other types of feedback.
Support tickets, comments on social media and blogs, and marketing surveys all present a much more biased and less in-depth means of collecting information about our users. We’d end up skewing our decisions towards a small, vocal group instead of trying to focus on the other 98% of our users.
3. We’d spend more time iterating on features.
Because we know less about our users’ goals, expectations, resources, motivations, and attitudes, we have less of an idea of how to build and design features in a way that meets users’ needs. Less time now, more time later. And there will also be much more frustration and disagreement about direction later.
4. We’d spend more time on features we later find we didn’t need.
It isn’t just more iteration. Sometimes a stakeholder will find a blog post or talk on a particular topic, and with no research to check it against, it’s the greatest idea ever and will totally work, right? Except the last time we did that. And the time before that. Without research, we have no idea whether the idea can really pan out for our users.
5. We’d spend more energy on getting approval and consensus.
Instead of talking about real users and what they did in studies, we are now talking about hypothetical users and what one person’s aunt or uncle did on the site. Because the team doesn’t have a central base of information to make decisions with, it’s now anyone’s game to speculate. And decisions will usually go the way of whoever is the most assertive or powerful in the group.
6. We’d spend more time second-guessing ourselves.
Without data, we loop more, both in our heads and out loud, guessing whether a particular feature will work or gain traction. We question every detail, and the overall strategy as well. These are healthy questions to ask, but unless you get data, you’ll keep asking them with no answers.
7. We’d lose our focus on our users.
Because we’re no longer focused on users directly, and we spend more time talking about stakeholder approval and what users hypothetically want, we end up focused on just getting things across the finish line instead of having the real conversations about what’s best for the organization long-term.
8. We’d rely more on marketing.
Because our features take longer to develop and get right, our organization will need more help from marketing in terms of research, training, and promotion to get users to use the product.
9. We’d overreact to direct user feedback.
Every support ticket, now our best source of direct user feedback, becomes the voice of a thousand. We start orienting our product toward our noisiest users rather than the everyday needs of our most common groups of users. This is a loss for us and a loss for them. And let’s be honest: our noisiest users will never be satisfied, no matter how hard we try.
As we can see pretty clearly, not doing user research has some pretty drastic consequences for our organization. Sadly, we don’t really have to tell you any of this. If you are reading this blog post, then you probably already know.
Even if it’s just a usability test every now and again, or a quick survey or user interview once a month, it’s more data than doing nothing. There’s a balance to everything, of course, and businesses face enormous pressure to do a million things, but making a little time for user research can make a big difference in the long term.
Questions, comments? Let us know!
User experience design is a wide field. There’s interaction design, interface design, information design, graphic design, copywriting, information architecture, usability… the list goes on.
Many people outside of design assume user experience and interface design are the same thing. But they are distinct.
The user experience is very broad and can include a variety of things. However, in most organizations, user experience designers are often a step removed from graphic design.
In practice, user experience design focuses on the earlier stages of design work. User experience professionals focus on strategy, planning, organization, architecture, and of course user needs. User experience designers work on flows, maps, and wireframes. Additionally, user experience professionals focus on usability testing and user research.
Interface designers, on the other hand, tend to focus on wireframing, but also on graphic design and on working through the build process with developers.
Sometimes user experience designers also do interface design. Sometimes a person with the job title of interface designer carries out user experience tasks. The two roles are distinct, but they overlap.
Ultimately, however, what matters is what works for your organization. But don’t skip user research, strategy, information architecture, and wireframing! Too many organizations focus on the interface first, which leads to all sorts of lost opportunities.
Who does it, and what their title is, is less important than carrying out the function itself.
Questions, comments? Let us know!
Often we group user research and user testing into the same general block. But in reality, these are two different things, even though they share methods.
User research focuses on gathering information for its own sake, often without a specific agenda in mind. In research mode, you are collecting information without necessarily having a specific objective or hypothesis.
User testing focuses on taking an existing concept, whether a fully functional product or just a sketch, and seeing how it performs. In testing, you start with a hypothesis, try it out, and analyze the results to test the hypothesis’s validity.
We often use these words interchangeably, even on this blog. But it’s important to note the distinction.
Many organizations, if they do any user research or user testing at all, focus much more heavily on user testing. Perhaps that makes sense: businesses are trying to reach goals.
But you may find more preliminary, agenda-less research to be just as rewarding, if not more so. The more research you can collect earlier on, the more time and effort you will save later on.
There’s a balance to all things. Too much user research and testing can get in the way of building a project. Too little can make a project wander aimlessly. Too much testing near the end and not enough at the beginning can miss big picture opportunities. Too much research upfront and not enough near the end can cause projects to stall, and be rushed instead of polished.
Thoughts? Let us know!
Not every published user research study produces practical suggestions and tools. Many don’t even go into depth about why they end up at particular findings. Some publish study data without doing much interpretation of results.
With the Baymard Institute, you get a practical, hands-on, well-explained walkthrough of the study results. While the method they use is often somewhat limited and lacks quantitative support, it is incredibly detailed and useful.
Each study starts with a brief explanation of their method. Then they walk through the major findings one by one, describing how severe and frequent each issue was, explaining it, and illustrating it with examples from several of their participants. They then show you what to avoid and how to improve on it. By using real-world sites like Apple or Nordstrom, you see that it isn’t just the small shops facing usability issues. They include several case studies that go in-depth on a particular site or app, demonstrating its usability strengths and weaknesses based on their findings. Finally, each study ends with a heuristic checklist you can use to evaluate your own site or app.
At ConceptCodify, we still strongly recommend conducting your own usability tests and other forms of user research before starting to make changes, but the Baymard Institute publications are useful as a place to start the conversation about the usability of your product.
Right now, Baymard offers three reports: Checkout Processes, Mobile Devices, and Homepage and Category. To get a sense of the content, they offer samples, and their blog posts are similar in style to their reports.
Check out the Baymard Institute reports at http://baymard.com/
What’s your company’s design process?
Maybe a stakeholder says, “I want this,” or, “We need that.” Maybe there’s some market research or a look at analytics; probably not. The designers then jump into Photoshop, making what the stakeholder asks for page by page, comping each page in full each time. The stakeholders have a few random comments, but generally approve it, since it’s exactly what they asked for. Then it’s handed to developers as a complete package with little explanation, and they sort of make it look like the comps, but many questions about functionality and reusability go unresolved. Most of the time, too many problems arise in fleshing out requirements and getting it through testing, and the project never launches, or it launches months later than it should and no one is entirely happy with it.
User experience research and using a more lean, agile model can ease these problems. Making something look good is important, but making something that helps users meet their goals and is developed and iterated on quickly is even more important.
Maybe you can’t get buy-in for user experience research. “Usability testing? Why would we need that?” There’s another way to stop the pain, and it tends to be an easier sell.
Stop making full-page comps.
Does this seem counterintuitive? Stakeholders often want to “see” the fully developed product before “handing” it off to development, because development resources are limited and expensive. But doing it this way costs more in the end and is frustrating for everyone involved. Just outline the problems this system is causing; if you work this way, you’re already well aware of them. Maybe 90% of teams are still doing full comps, but does that really justify the practice?
Instead, try this. Comp up a single library of reusable components and modules. Shocking! The developers will almost immediately jump onboard. Why? Because making things consistent is much easier to program.
You probably already have an existing site or project, so how do you transition? First, go through the project and make a list of components that appear repeatedly. Need a starting place? Look at something like Twitter Bootstrap or the MailChimp pattern library. Then make a single style guide and component library in Photoshop or your design tool of choice. (Or even HTML/CSS!) Then, as you add new features or change things on the site, just update the component library: for each new or updated feature, add new components, remove components no longer in use, and show the changes that need to be made to existing components.
But then how do the developers know how to build it, you ask? I mean, you are wireframing, drawing flow diagrams, and listing out acceptance criteria, right? If you aren’t, start getting into that practice. With wireframes, flow diagrams, the acceptance criteria, and the component library, the developers will know how to combine those things to produce the product.
But what if you need a special graphic for a particular situation? Fine: take a screenshot, or make a rough comp of the surrounding elements, and build that graphic. This isn’t all or nothing; it’s about changing the main habit.
What if I need a change? Just add it to the component library after the ‘main’ component.
But what if a stakeholder insists on seeing a full comp? Fake it. Focus on making the component library the true source, and build out the full comp as quickly as possible. They probably don’t want to see a full comp for every page in reality, just the big pieces. So then just fake it for those parts. After all, they aren’t looking to see a demonstration of Photoshop skills. They are looking for a visual of what their idea would look like.
Cut your own work down to a third. Cut development time in half. Be more agile. Communicate more smoothly. Deliver wireframes, diagrams, visual component libraries, acceptance criteria, and the occasional graphic or comp. And in the process, almost accidentally integrate some agile and user experience practices into your team’s workflow.
Questions, comments? Let us know!
One of the most common questions we get at ConceptCodify is, “how many cards should I put in a study?” We see studies of all types and sizes, and we can make some recommendations based on what we’ve seen so far.
The biggest mistake many researchers make is running studies with far too many cards. We’ve seen studies with over 300 cards. Statistically, these studies get very few responses; few people want to sit down and sort that much information. For a typical study, about a third of participants who start sorting cards will actually finish. For studies with over 100 cards, the completion rate drops so low it’s barely measurable.
Sometimes we also see studies with very few cards. Most of the time, this appears to be people who are just trying out the tool and seeing how the setup process works. But even a small study, of let’s say 10 cards, can give very valuable information. If you think about a product page for example, there’s really only about 10 or so major sections, but having the data about how to organize the page can dramatically increase sales.
We can estimate from what we’ve seen that it takes participants about 10-20 seconds per card. And there’s a real drop-off a few minutes after starting a study. In general, you want participants to take three minutes or less to sort the cards; in other words, ideally, about 10-20 cards per study. You can safely extend to around 30, but beyond that you’ll have a hard time getting enough responses to make the data valuable. You can ease this a bit with our feature that shows a random subset of cards to each participant, but you’ll still need to collect more responses to get meaningful data across that many cards.
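The arithmetic behind these numbers can be sketched as a back-of-the-envelope estimate. This is just an illustration of the 10-20 seconds-per-card and three-minute figures above, not a feature of any tool:

```python
# Rough estimate of how long a card sort takes, using the observed
# 10-20 seconds-per-card range and a three-minute attention budget.

SECONDS_PER_CARD = (10, 20)   # observed fastest and slowest rates
TIME_BUDGET = 3 * 60          # target: three minutes or less, in seconds

def sort_time_range(num_cards):
    """Estimated (fastest, slowest) total sort time in seconds."""
    low, high = SECONDS_PER_CARD
    return num_cards * low, num_cards * high

def max_cards_within_budget():
    """Largest deck even a fast sorter finishes inside the budget."""
    low, _ = SECONDS_PER_CARD
    return TIME_BUDGET // low

print(sort_time_range(15))        # a 15-card study: (150, 300) seconds
print(max_cards_within_budget())  # 18 cards at the optimistic rate
```

At the optimistic 10-second rate, 18 cards fit in three minutes; at the slow rate, only 9 do, which is why the 10-20 card guideline (stretching to around 30) falls where it does.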
The biggest piece of advice I can give is: imagine you were still conducting the card sort in person. What if you wrote each concept on an index card and handed the stack to someone? How large would the stack be? Would the participant run for it at the sight of it? If so, that’s a good sign there’s a problem. Similarly, if someone handed you 10 cards and asked how you would group them, you’d probably have no problem stopping for a minute to sort them. If it were 50 cards, you’d be far less willing to commit.
Make the studies as small as you can while still getting the data you need. Try to capture the big-picture concepts; don’t focus on getting every single detailed element. If you need to, you can start with an overview study and then run later studies to flesh out the details. There’s no rule saying you have to get all the information in one study.
To be entirely forthcoming, ConceptCodify has difficulty calculating the result with studies with more than 150 cards. The calculation grows exponentially with the number of cards; after about 200 cards, the calculation time skyrockets. The number of participants impacts the performance linearly; the difference between 5 participants and 100 participants is very small. We’ve been working on making our algorithms more efficient, but after seeing the response rates on studies with large numbers of cards, we’re now of the belief our engineering time would be better spent elsewhere, such as new analysis options, filtering, and sharing.
Based on the support tickets we get and conversations with researchers at events and elsewhere, we’ve noticed the hardest parts of conducting studies are getting stakeholders on board and recruiting participants. We’d like to spend more time in those directions. Since both of these issues are interpersonal, we’re thinking about ways to make card sorting a more social, collaborative process. If you have thoughts or suggestions, we’d love to hear your ideas.
But to get back to the point of this article:
Design your study with the least number of cards you need to get valuable data.