I have worked on several projects whose main goal was to improve user experience. After conducting research on the old administrative tools, I found that users felt the layout was cluttered with information and asked for a more user-friendly design. Based on their responses, I created a new design for multiple internal administrative tools. Here is the workflow of my research and redesign process:
A survey was sent to all users to define personas and measure overall user satisfaction.
Each persona has its own goals, tasks, and characteristics based on the different job roles.
Interviews were conducted to identify key agent pain points in the interaction between a user and the system.
Task analysis of the interviews helped create a step-by-step workflow of how the tasks were being accomplished. After the card sorting, the information was analyzed and studied with a similarity matrix and dendrograms.
Open Card Sorting
This type of research is used to find out how to make navigation intuitive for users. We asked users to sort cards into groups that made sense to them. Those cards represented the content and functionality of the software. Users were also asked to give each group a name and to create sub-groups where appropriate.
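The card-sort analysis described above can be sketched in code: count how often each pair of cards lands in the same group across participants to build the similarity matrix, then cluster it hierarchically to produce the dendrogram. This is a minimal illustration, assuming SciPy is available; the card names and groupings below are invented, not taken from the actual study.

```python
# Sketch: similarity matrix + dendrogram from open card-sort results.
# Card names and participant groupings are hypothetical examples.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

cards = ["Users", "Roles", "Reports", "Exports", "Audit log"]
# One entry per participant: a list of groups, each group a set of cards.
sorts = [
    [{"Users", "Roles"}, {"Reports", "Exports"}, {"Audit log"}],
    [{"Users", "Roles", "Audit log"}, {"Reports", "Exports"}],
    [{"Users", "Roles"}, {"Reports", "Exports", "Audit log"}],
]

n = len(cards)
idx = {c: i for i, c in enumerate(cards)}
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            co[idx[a], idx[b]] += 1
            co[idx[b], idx[a]] += 1

# Similarity = fraction of participants who grouped the two cards together.
similarity = co / len(sorts)
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# Average-linkage clustering; the dendrogram shows which cards users
# mentally group, suggesting the navigation structure.
Z = linkage(squareform(distance), method="average")
dendrogram(Z, labels=cards, no_plot=True)  # set no_plot=False to draw it
```

Cards that every participant grouped together (here, "Users" and "Roles") end up with similarity 1.0 and merge first in the dendrogram, which is the signal used to decide the navigation categories.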
"Paper" prototyping was done with Balsamiq Wireframes. Users were then asked to perform simple tasks with those mock-ups and give feedback about the information architecture and layout. The results were summarized in a report.
Design polishing included changes based on that feedback.
Utilizing my experience in HTML and CSS coding, I created an interactive prototype based on the final layouts. I then tested the prototype with users, who were asked to complete key tasks and provide feedback on the interaction.
Remote Moderated Usability Testing
The usability testing was done remotely via Skype, and video was recorded for further analysis. During the sessions, user behaviour was captured through first impressions, click analytics, task time, error rates, success rates, and satisfaction questionnaire ratings. These metrics were used to compare the results against earlier results on the old layout in terms of task effectiveness, efficiency, and satisfaction. Users were also asked to think aloud about what they were doing on the screen and what they thought of their experience; this qualitative data informed suggestions on how to improve the design. In the report, each task was described with its behaviour metrics and user feedback.
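The old-versus-new comparison above boils down to aggregating per-task observations into the reported metrics. Here is a minimal sketch of that aggregation; the task name and numbers are hypothetical, not the study's actual data.

```python
# Sketch: summarizing per-participant usability observations into the
# effectiveness / efficiency / satisfaction metrics used for comparison.
# All task names and figures below are invented for illustration.
from statistics import mean

# One record per participant per task:
# (completed task?, time in seconds, error count, satisfaction rating 1-5)
old_layout = {
    "Find a user account": [(True, 95, 2, 3), (False, 140, 4, 2), (True, 110, 1, 3)],
}
new_layout = {
    "Find a user account": [(True, 48, 0, 5), (True, 62, 1, 4), (True, 55, 0, 5)],
}

def summarize(records):
    """Aggregate raw observations into comparison metrics for one task."""
    return {
        "success_rate": mean(1 if done else 0 for done, *_ in records),  # effectiveness
        "avg_time_s": mean(t for _, t, _, _ in records),                  # efficiency
        "avg_errors": mean(e for _, _, e, _ in records),
        "avg_satisfaction": mean(s for *_, s in records),
    }

for task in old_layout:
    before, after = summarize(old_layout[task]), summarize(new_layout[task])
    delta = {k: round(after[k] - before[k], 2) for k in before}
    print(task, delta)
```

A positive success-rate delta and negative time and error deltas are the signal that the redesign improved effectiveness and efficiency for that task.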
Final design approval
After summarizing the feedback from the usability testing, final changes were made to the design, which was then sent to stakeholders for approval.
After the new design was implemented, another round of usability testing was conducted to compare the old and new designs and to find whether there was any room for improvement.