
Microsoft XC Research

A guide to conducting experience reviews


By Denise Carlevato and Marcella Silva

User researchers have a variety of tools at their disposal, and one of the most flexible of these is the experience review (ER). We'd like to share the process, learnings, and best practices we've gleaned as we've worked on ERs for teams across Microsoft.

What is an experience review?  

An ER is a walk-through of an experience, focusing on a set of core tasks and sub-tasks. ERs are also known as Build Reviews, Expert Reviews, and Walkthroughs.

ERs should be approached purposefully, with a mandate to improve the experience. The beauty of the methodology is that it is flexible enough to be done at various stages of a project; for example, checking craft and polish prior to shipping or reviewing an already-shipped product.

Why do an experience review? 

Bringing the different disciplines and their expertise together results in a very thorough examination of the experience. It's important to set rules of engagement in advance to make sure the meeting is inclusive and provides opportunities for all voices to be heard, even if they're not in agreement. We also focus on building consensus around the key issues and their severity, ensuring that products will provide the best experience for our customers.

What are some process best practices? 

While each team approaches ERs differently, below are some basic steps that many of our teams follow: 

  1. Pre-Review: Prior to the ER, we hold a scoping meeting that determines the focus of the review. This meeting includes program management, research, and design, who discuss the review's scenarios and timing. A list of scenarios and tasks must be brought to that meeting, as well as a list of assumptions, including who the customer is, what the customer already knows, and what they already use.
  2. Readiness Review: Here, the team determines whether the ER can take place. Some walking through of tasks will occur so the team can make fixes before the ER. All disciplines should agree that they are ready to conduct the ER before it happens. When the experience has already shipped, the process is a little different: the team will discuss what shipped and what the plan of record was, then reach agreement on readiness, scenarios, and assumptions.
  3. ER: Attendees are the same people who attended the pre-review, plus supportability, content writing, engineering, marketing, and people who aren't familiar with the experience, including customers if desired. Having diverse attendees with different types of expertise allows us to become aware of a range of concerns. The team will identify design problems that degrade the experience. Each task or subtask will be individually rated by the attendees, noting whether any issue is of a blocking or non-blocking nature.
    NOTE: Two roles should be identified ahead of time:  1) The notetaker should either be the researcher on the experience or a researcher familiar with ERs. Sometimes, it’s necessary to have two notetakers.  Recording is also an option. 2) The moderator keeps the process moving and encourages attendees to take solution conversations offline. They are also responsible for ensuring that the conversation stays unbiased and the quality bar remains high. 

  4. Reporting: After completion, the researcher produces a report of the task ratings as well as the issues discovered per task. It is also good practice to include recommendations for fixes. The product team will then get together and make decisions and fixes to prepare for Ship Room.

Sample task list ratings for an ER report

  5. Ship Room: Completion of an ER is considered part of the shipping requirements. If multiple tasks are marked yellow or any task is marked red in the ER, the product or experience will not be allowed to ship until those issues are resolved (a minimal sketch of this gate follows).
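
To make that gate concrete, here is a minimal Python sketch of how a team might encode the rule. The task names, the color strings, and the "at most one yellow" threshold are assumptions for illustration only; the actual bar is whatever the team agrees on in Ship Room.

```python
# Minimal sketch of the Ship Room gate described above. The task names,
# color strings, and "at most one yellow" threshold are illustrative
# assumptions, not part of any official tooling.

def can_ship(task_ratings: dict[str, str], max_yellows: int = 1) -> bool:
    """Allow shipping only if no task is rated red and yellows stay within the bar."""
    colors = list(task_ratings.values())
    return colors.count("red") == 0 and colors.count("yellow") <= max_yellows

# Example: a single red task blocks the release until it is resolved.
ratings = {
    "Create a file": "green",
    "Share a file": "yellow",
    "Restore a deleted file": "red",
}
print(can_ship(ratings))  # False
```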

Some things to consider:  

  • Ratings: Assign an overall color rating per task based on the proportion of individual ratings in the room; one possible way to roll those votes up is sketched after this list. Every voice in the room counts, and everyone is accountable for their rating.
    • Proud / not proud: Everyone rates on the same scale, keeping in mind the following: Is it a good user experience? Is it usable? Is it a quality experience? Are we proud? If you are proud of how the feature or task is working, mark it green. If it needs improvement, mark it yellow. If you are not proud of it, mark it red.
  • App dependencies: We can also use the scoring and indication of pain points as a tool for communicating with another product team and showing how dependencies on their application are adversely affecting our product. It’s a good idea to start engaging with integration/dependency teams from the start of the design cycle, invite them to wall and design reviews, include them in study planning, and ultimately invite them to participate in the ERs.
  • Celebrate successes: It's important to celebrate successes when the ER score improves and to demonstrate how improvements result from specific efforts around craft earlier in the design cycle. The most successful teams are those that do not skip steps in the process, are inclusive of all disciplines, care deeply about craft and customers, and come prepared, having done a pre-ER.
  • Optimize reporting for stakeholder buy-in: Some of our teams put the report findings in a video, showing screenshots and ratings. This provides stakeholders with a list of issues shown in conjunction with the user interface in an easy-to-follow deck. 
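
As a rough illustration of the ratings bullet above, here is one possible Python sketch of rolling individual "proud / not proud" votes into an overall task color by proportion. The 25% and 75% thresholds are assumptions made for the example; each team decides how to weigh the room's votes.

```python
# One possible way to aggregate per-attendee ratings for a single task into
# an overall color by proportion. The thresholds are illustrative assumptions.

from collections import Counter

def overall_rating(votes: list[str]) -> str:
    """Combine individual 'green'/'yellow'/'red' votes into one task color."""
    counts = Counter(votes)
    total = len(votes)
    if counts["red"] / total >= 0.25:    # a meaningful share of the room is not proud
        return "red"
    if counts["green"] / total >= 0.75:  # a clear majority is proud
        return "green"
    return "yellow"                      # otherwise, the task needs improvement

# Example: a split room lands on yellow for this task.
print(overall_rating(["green", "green", "yellow", "yellow", "red"]))  # yellow
```

Colors produced this way are the same ones a Ship Room gate would consume, so the two sketches compose naturally.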

Over the past few years, focusing on craft and using ERs has resulted in improved benchmark scores for our products. We have also seen a significant increase in our success rate and System Usability Scale (SUS) scores year over year in areas where craft has been prioritized. We hope that this overview helps you evaluate the value this process could offer your team.

What do you think? How has your team used experience reviews? Are there pointers and best practices you would add to what we outlined in this article? Will these ideas help enhance your next experience review? Tweet us your thoughts @MicrosoftRI or follow us on Facebook and join the conversation.

Denise Carlevato is a trained anthropologist and human-computer interaction expert specializing in customer and product design research, driving delightful user experiences for consumers and business professionals for more than 20 years. Denise has a great deal of passion for analyzing customer feedback, identifying trends, troubleshooting problems with the team, and then closing the loop with the customer. She collaborates closely with multi-discipline roles to define product opportunities, generate and refine design ideas, and analyze post-release data to understand and predict customer behavior. With a proven record of delivering customer insights that elucidate human behavior as people move between products, devices, and platforms, Denise uses her talents to build innovative products that empower people to build, create, and invent solutions that have lasting value. Denise empowers all product groups to use a diverse set of customer data, and she documents and consults on best practices for optimizing the use of customer data.

Marcella Silva has nearly 25 years of experience in Design Research, managing highly efficient and impactful teams. Some highlights include developing research strategies to align with and impact product development, developing broad methods and tools to ensure high-quality user experiences, and being inclusive of disciplines and customers in her research programs. She has created multiple customer engagement programs to ensure customer-driven empathy as well as inspiration for the innovation of complex products.

Since January 2015, she has managed the OneDrive and SharePoint Design Research team, a multi-disciplinary organization that is profoundly driven to understand customers' needs, scenarios, habits, behaviors, gaps, pain points, and ultimately opportunities. In her role, she connects the organization to customers and facilitates the learning that results in deep empathy, ensuring that our solutions address real needs with high quality.