nelsonic opened 7 years ago
Related:
@Cleop can you help us write this readme by UX testing our Time/Task Tracking App?
This is probably the most important/useful skill any "creative technologist" has.
Well-conducted user experience testing helps you ensure that what you plan to build will actually solve the problem you set out to solve for your users.
It prevents you from wasting time, effort and money on guessing what your users will want, need or understand, by getting them to critique your product as you go rather than retrospectively.
Testing usability and following usability guidelines is not something we do to restrict ourselves or because we're told to. Understanding usability is about ensuring users can use our product effectively; without it our product will be less successful.
User experience testing is a means of collecting feedback from users on the evolution of your product.
It can be performed multiple times in a cycle: gathering feedback on your designs from users -> adapting your designs in light of user feedback -> getting feedback on your amended designs to see if they now meet the users' needs. Once you are satisfied the users' needs have been met, you can begin to build the agreed designs.
There are many approaches to user experience testing that you may choose to use based on your product, stage of product development, budget, client base etc.:
The first stage of user testing you may wish to conduct is known as the discovery phase; for information on this stage see: https://www.gov.uk/service-manual/agile-delivery/how-the-discovery-phase-works
To perform usability testing you may use one or more of the following methods:
Ensure that when you conduct tests you test across devices so that your data reflects all of your users' experiences, e.g. mobile, tablet and desktop.
1. Writing a script: A script is a helpful way of ensuring interviews are delivered in a consistent way (if different team members conduct them) and that the interviewer can feel relaxed because they know what they've got to say. It's important that scripts aren't read like an autocue, as you want the tester to feel relaxed too, so making them more like a normal conversation is beneficial. However, if you are anxious about missing something out, one way to deal with this is to tell the participant 'I'm going to read through this part of the process to make sure I don't leave anything out'. By telling them what you are doing and why, you can help alleviate worries they may have constructed otherwise: you're telling them that you're reading it so they benefit from all of the information, not because this is a formal setting and you're avoiding their eye contact! So make sure the team familiarises themselves with the script before they start. The script doesn't need to be followed word for word, but team members should bear in mind that certain word choices are important in order not to influence the tester's responses. Here is a useful script outline that might be of use for some projects: https://docs.google.com/forms/d/e/1FAIpQLSfM2Uaje8gpBde-RqU01DlxrGUEsWZrjiy8yTKmYaeJYAZUuw/viewform
These are some of the key points from it and other resources (see list at the bottom of readme):
For a homepage exploration you might prompt them with these questions before showing them the page: 'Look at this page and tell me what you make of it...' 'Whose site do you think it is?' 'What strikes you about it?' 'What do you think you could do here?' 'What do you think this site is for?'
For testing a specific flow or user journey, tell them about the task you want them to try to complete. You can read the task aloud and give the participant a written copy for their reference. The task should give the user a scenario to follow; that might include any relevant background details about them, why they are on your site, how they came to know about the site, and what they are looking to do on the site.
Reiterate that you want the participant to think out loud as they go along.
Ask them about other applications they might use that relate to your product. Consider carefully whether you do this before or after showing them your own application. The order in which you do things will prime the tester to have whatever you have just discussed on their mind; consider whether this is helpful in the context of your research or not.
Wrap up and make sure to ask your tester if they'd like to ask you anything. It's not only polite but also sometimes reveals subjects that the test failed to capture. 'I think that's everything, do you have any questions for me?'
Be comfortable with silence. When someone is exploring the application, don't fill the awkward silences by telling them what to do next; wait for them to talk. Responses like 'I'm not sure what to do now, where does this page go?' are really useful because they show you that your designs are not self-explanatory.
Avoid questions with yes/no answers; open-ended questions encourage people to give more detail and depth of analysis.
Be mindful of the language you use to respond to the tester. If a tester is talking about whether they like the app or not, saying 'ok' in confirmation rather than 'good' is more neutral. Saying 'good' would suggest that you are pleased with their response and may influence them to give other answers they think you would like to hear.
2. Who: If you have existing users for your product, reach out to them and ask if any of them would like to participate in some research to help improve the product. If you don't have any existing users yet, aim to test your application on people who represent your target user group. Even when you do have an existing user base, testing with people who are new to your product can offer a different perspective to those who already know it and what it does. If you are struggling to find participants, encourage people by telling them how long it takes to participate (10 minutes is reasonable), or offer something to thank them for participating, e.g. buy them a coffee if you are in a coffee shop. Just be mindful that you don't want what you offer as a thank-you to influence the responses they give. You may get a better response rate for participation from people who are on their own.
3. Where: Key considerations for determining where to conduct your interviews are:
4. When: Choose a time that suits you and your interviewees. If you're looking to source interviewees on the day, pick a time when your chosen location is likely to have plenty of people around, e.g. office hours for an office; would people be happier to talk on their lunch break, or are they likely to leave the office altogether? Does the day of the week or time of year make a difference? Are there any key events within your company or in the public eye that would be good to coordinate with to conduct your research? E.g. conducting research in January to coincide with 'Dry January' when researching a product aimed at people interested in low/no-alcohol drinks options.
5. Who will conduct interviews: One or two people are sufficient (you don't want to make your tester feel uncomfortable). Everyone involved in your product should attend at some point (developers too!)
6. How many interviews: Up to 85% of core usability problems can be found by observing just 5 people using your application, according to Jakob Nielsen (see https://www.youtube.com/watch?v=0YL0xoSmyZI and https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/). The reason for this is that the core usability problems are easy to spot for people who are new to your application (and difficult for you to spot, as you are too close to the product to see how real people perceive it for the first time). The 'magic 5' approach suggests that you find out the most from the first person you speak to and then a little less from the next person and so forth, as the sketch below shows.
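The Nielsen article linked above models this curve explicitly: the proportion of problems found with `n` testers is `1 - (1 - L)^n`, where `L` is the share of problems a single tester uncovers (roughly 31% in Nielsen's studies). A minimal sketch of the diminishing returns:

```python
# Nielsen's model: proportion of usability problems found with n testers
# is 1 - (1 - L)^n, where L is the share a single tester uncovers (~31%).
L = 0.31

def problems_found(n, single_tester_rate=L):
    return 1 - (1 - single_tester_rate) ** n

for n in range(1, 9):
    print(f"{n} tester(s) -> {problems_found(n):.0%} of problems found")
# n=1 -> 31%, n=5 -> 84% (the 'magic 5'), n=8 -> 95%: each extra
# tester mostly re-discovers problems you have already seen.
```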
7. Recording the findings of your interviews and interpreting them: Agree before you conduct your interview who is going to take notes and in what format, or whether you're going to record the session (the screen or audio). Think about how you will later collate your results to see patterns in them. Once you have all of your findings, discuss them with your team to find trends and see how you can make improvements based upon them; one way of collating is sketched below.
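As a minimal sketch of collating notes (the session structure and issue labels here are hypothetical, not a prescribed format), you could tally how many sessions each issue appeared in:

```python
from collections import Counter

# Hypothetical notes: one list of observed issues per test session.
sessions = [
    ["couldn't find login", "missed the timer button"],
    ["missed the timer button", "confused by the icons"],
    ["couldn't find login", "missed the timer button"],
    ["confused by the icons"],
    ["missed the timer button"],
]

# Count sessions per issue (not raw mentions), so one talkative
# tester doesn't dominate the tally.
tally = Counter(issue for notes in sessions for issue in set(notes))

for issue, count in tally.most_common():
    print(f"{count}/{len(sessions)} sessions: {issue}")
```

The recurring items at the top of the tally are usually the best candidates to discuss with the team first.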
Testing with small sample sizes like 4 or 5 people is worthwhile when the alternative is often no testing at all. However, you shouldn't rely on statistics from such small groups. So if 1 in 4 people takes issue with something, don't consider it 25% and therefore something that must be changed (the rough sketch below illustrates why). Consider what the issue for that 1 person was and interpret it with your knowledge to deduce whether you should change the designs based on their response. Remember that small and simple amendments are just as good, if not better, than a total redesign. Just because you have areas to improve on doesn't mean you should start again from scratch. Also remember it's better to make your existing product work than to add new feature after new feature when the original product isn't working well for users yet. If you have made changes to the existing product and people are reacting badly to them, remember that aversion to change is normal. Give it a bit of time to allow users to adjust to the new designs, or leave assistance mechanisms to point people in the right direction and teach them the new way of doing things.
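To see why 1 in 4 is not a dependable 25%, here is a rough sketch using an exact binomial (Clopper-Pearson) confidence interval (the statistical method is my choice for illustration, not something the notes above prescribe):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 1 of 4 testers hit the problem: the observed rate is 25%,
# but the 95% interval spans roughly 0.6% to 81%.
lo, hi = clopper_pearson(1, 4)
print(f"observed 25%, 95% CI: {lo:.1%} to {hi:.1%}")
```

An interval that wide tells you almost nothing about how common the problem is among real users; the qualitative insight into why that one person struggled is the valuable part.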
Be conscious of bias: think about how every aspect of your setup (time/location, interviewee demographic, script etc.) could influence the responses you gained in conducting your research. Could interviewing loved ones about your idea expose only the positives of your idea because they don't want to give you negative feedback for fear of hurting you?
Resources:
@nelsonic - the title of this repo is 'learn user experience testing' - does that mean it should focus exclusively on user experience/usability testing as opposed to discovery research?
In case those words are all too subjective, another way of putting it is: should this repo just be for discovering what users think of a set of wireframes/a product proposal, or should it also include the stage before that, of broader user research which would encompass understanding more about the problem area (rather than diving straight into analysing some designs)? Should it also include testing with users throughout the lifecycle of your project as you iterate and build new features?
@Cleop our objective with creating these "learn-xyz" repositories is to condense the "essential" knowledge into something a person can read in less than an hour. A classic example of this is: https://github.com/dwyl/learn-json-web-tokens
Just approach the problem from the perspective of trying to learn what you can about the topic and document-as-you-go, so that anyone else can learn in half (or 10% of) the time it takes you.
If you feel that Customer Development / Discovery research should be the starting point, go for it.
If you want to focus on testing an idea where there is already an established target customer who is demanding a solution to a challenge/problem, start there.
If a person reading this "learn-..." asks a question like: "How do you UX test an idea without writing any code?" then we can answer that question with a few paragraphs and/or examples.
I really think that learning Wireframing should be focussed on first: https://github.com/dwyl/learn-wireframing/issues/1 And then once that is done return to this one and "test" the wireframes created.
Do usability guidelines constrain design?
There are two ways in which conventions can exist:
We can compare usability to the fact that good design conventions are applicable in things we interact with everywhere in the world, not just the digital world. E.g. arrows or signs telling people what lies ahead are a convention everywhere in the world, such as signposting in an airport.
There are some digital-specific learnt UX behaviours, e.g. to ignore ad banners and blinking things that look like ads on the side of pages where ads commonly appear.
Whilst in some ways we can consider usability present in everything we do (digital and non-digital) and not device-specific, we should remember that some nuances are device-specific. So ensure you test on all relevant devices, e.g. mobile, tablet and desktop, because certain devices have specific design conventions to suit screen size or user actions such as swiping.
You can do tests with small numbers of users (like 4 or 5) to detect obvious problems. This is better than no testing at all and a good starting point, but as noted above, don't rely on statistics from such small groups; consider what the issue for that one person was and interpret it with your knowledge of the product instead.
Lots of companies would rather put out a new feature than make the existing technology work. Why is it still hard to get a projector set up for presenting a slideshow, and why are printers so hard to use?
@Cleop please transfer these notes into a markdown file as it's "lost" in an issue comment. thanks.
Why?
This is probably the most important/useful skill any "creative technologist" has. Knowing how to speak to people ("end users") and get their feedback in a targeted and unbiased way is essential for ensuring that what you build is "the right thing".
What?
Speaking to people.
Found this video while googling: https://youtu.be/vQPSbO_3Ir0
Also:
Understanding & discussing their needs "as a user...". If you already have a prototype or a real product, share it with them and record their feedback!
How?