Why Test with Users?
A ‘user-friendly’ software interface is often a basic requirement from buyers and end users. As simple as it sounds, achieving this requires continuous iteration and refinement. Software designers don’t set out to create user-unfriendly software, and yet end users regularly encounter terrible user experiences. There are many reasons why this occurs, but more importantly, there are industry best practices to avoid it.
At the start of the DEVELOP project, time was invested in conducting user research. The aim of this research was to ascertain the unmet career development needs of employees. As the project progresses, testing the software with end users is essential to ensure these needs are being met.
This activity is conducted in a formal manner in a process known as user testing. User testing is a series of test sessions in which test participants are asked to perform a number of tasks within a product. The outcome of this is quantitative and qualitative data. From this data, our team can gain insights about the product from a user perspective and adjust the product in a timely manner.
The first series of test sessions on DEVELOP was conducted in February 2017 with Intrasoft International. We tested a wireframe of our Personalised Learning Environment. A wireframe is essentially a visual guide that represents a page’s structure, hierarchy, and key elements. We tested a wireframe because it is important to learn what an end user thinks of the product while it is still being created; it is easier to adjust the product at an early stage, before it has been fully developed. Moreover, participants are often more willing to give feedback on wireframes, as these are presented as not yet fully defined. With polished products, end users are often too polite, as they perceive them as fixed and finalised.
Figure 1. Sample of a wireframe design
Fiona Mc Andrew, a user experience and interface designer from TCD, ran these sessions with the assistance of Neil Peirce, the DEVELOP project coordinator from TCD. Six Intrasoft employees were invited to participate.
Procedure – Thinking Aloud Testing
Figure 2. Think Aloud Testing
The testing ran as follows:
- Each test session took 50 mins.
- At the start of the test session, Fiona explained the purpose of the research at a high level. It was also an opportunity for the test participant to ask any questions or share any concerns.
- During the test session, the test participants were asked to complete a number of tasks on the wireframe webpage. While doing the tasks, they were asked to think aloud, that is, to describe what they were trying to do, their motivations for doing so, how they were feeling, and so on.
- A microphone captured the test participant’s voice, and screen-capture software recorded their activities on screen.
- Additional notes on the choices and opinions of the users were recorded by Neil.
- An example scenario was: “OK, it is two weeks’ time and you decide to revisit the application. You have completed some of the assessments. Tell me, what do you think you could do next?”
- Once the tasks were completed, the researchers asked a series of questions relating to the software and the test participant’s career. The answers allowed us to better understand the context of the feedback.
All of the data from the testing was collated. An internal workshop was held in TCD, where staff discussed and analysed the data and drew out the main findings from the test sessions.
Figure 3. Outputs of the user testing insights workshop
Through the use of stickers on the wireframe designs, it was possible to identify clusters of positive and negative features. This is illustrated below on one of the wireframe pages.
Figure 4. Annotated wireframe page showing issues (red) and positives (green)
The insights from this initial series of tests will be used to inform the development of the user interface. We plan to conduct further user testing with a more diverse user base to further inform the design.
Perfecting the user experience can only be achieved by continuously checking with users that the software being designed meets their initial unmet need. What users say they want to do and what they actually do often differ vastly; having a tangible item to talk about enables them to provide concrete feedback. Feedback enables us to spot patterns of annoyance, confusion, and delight. When we take this feedback on board, we can alter the product and work towards truly user-friendly software.
You are welcome to discuss this article on Medium.