Ratings & Reviews

Optimizing and simplifying the user feedback process to benefit both sides of the marketplace

Company
SpotHero
Timeline
Q1-Q2 2024
Platform
Web, iOS, Android
Role
Sole Product Designer & Researcher

Context

Overview

In the first quarter of 2024, I served as the sole designer on a project focused on optimizing the Ratings & Reviews experience. The opportunity stemmed from the feedback-loop effect created by the Sort by Relevance tagging we implemented during the previous quarter's search experiments.

Because historical foundational research has consistently shown that users take a facility's rating into consideration (behind distance and price), we held the assumption that if operators raised their facility's average rating, users would be more likely to book there. For operators to understand how to improve their rating, we needed more detailed user input. Enabling SpotHero drivers to provide more detailed, specific feedback about their experience at a given parking facility would benefit operators by giving them precise, actionable suggestions for areas of improvement.

Known Problems / Opportunity Areas

  • Until this project, web users had no way to rate or review their experience; the rate & review feature was only available in the native apps. Because over 65% of new users book on web, there was a huge gap in feedback from drivers new to SpotHero. As a result, we learned that web-only users didn't believe the ratings shown on facility listings were real. We needed a way for drivers to rate and review without having the app installed on their device.
  • Another major issue was that app users had only one opportunity to rate and review their experience. A ratings modal (pictured to the right) appeared the next time the user opened the app; if they dismissed it, they lost their chance to provide feedback, as there was no other way to access the flow. A quick win would be to provide additional routes of entry.
  • Finally, the feedback captured by these ratings and reviews was vague. Before the project, less than 1 percent of ratings included comments, so the reason behind a rating was often a mystery, and operators were unclear on what they needed to improve.

Strategy

Business Opportunity

Improving our star ratings infrastructure and data will help us highlight & reward facilities that put effort towards a good customer experience, fostering feedback loops that drive up overall spot quality, increase user trust, and ultimately improve conversion & retention for both new & repeat drivers.

Product Management, Data Science, and Finance estimated that hitting the OKRs below could result in an additional $2 million to $4 million in incremental GOV per year.

OKRs

Objectives
  • Establish SpotHero as the #1 most trusted online parking resource for finding quality supply
  • Increase demand
Key Results
  • Increase quantity of facility star ratings (as a percentage of rentals) from 12% to >16%
  • Increase # of facility star ratings that include qualitative review data (as a percentage of rentals) from 2.5% to >10%

KPIs

  • First-time conversion/ new user acquisition
  • Short-term retention (User makes another conversion within 30 days of their first conversion)
  • Lifetime retention (parks per month)

Discovery

Generative Research

We knew we needed to introduce more entry points into the rating experience and build the capability for users to rate on web, but next we needed to dive into the post-MVP ideal state of the ratings-capture experience. Before designing a solution for quickly capturing specific feedback, it was essential to develop a thorough understanding of the criteria users rate and review on, and the key elements of both negative and positive experiences. As the sole researcher, I executed several research methods to provide the insights needed to inform a user-centric design.

Research methods utilized
  • Diving into and analyzing existing customer review data
  • Stakeholder interviews with the supply team, the Operator Panel product team, and the customer service director
  • Customer support monitoring
  • Survey of SpotHero users who left a review within the last 30 days
  • User story generation
  • Competitive analysis
Research goals: what I wanted to learn
  • What motivates a user to leave a rating?
  • What are the most common reasons behind a user's positive rating?
  • What are the most common reasons behind a user's negative rating?
  • How do operators try to figure out the reasoning behind negative reviews of their facility (if at all)?
  • How would users most like to be prompted to leave a rating (e.g. pop-up prompt, email, push notification)?

Top Research Takeaways

By utilizing multiple generative research methods, we had enough qualitative and quantitative data for distinct patterns to emerge. I had a clear understanding of the most common reasons behind positive star ratings versus negative ones, and I now had the information to inform a round of data-driven designs to test with users.

Top reasons users give positive ratings
  • Smooth experience, no issues
  • Friendly staff (valet drivers, gate attendants)
  • Great value, saved money compared to drive-up rates
  • Clear, easy-to-follow instructions
  • Safe & secure facility
  • Clean, well-maintained facility

Top reasons users give negative ratings
  • Issue redeeming (QR code issue, gate not opening)
  • Poor service
  • Unexpected extra charge or fee
  • Hard to find facility
  • Safety/security concern
  • Dirty or poorly maintained facility

How Might We + Inspiration

Competitive Analysis

In a brainstorming session with my product team, I asked everyone to collect examples of inspiration from other sites and apps for potential solutions we could leverage for this project. We found that many other consumer-facing products were implementing a pattern of pre-filled prompts that users can select to add context to their star rating; the ones we referenced most often were Sweetgreen, Uber Eats, and Instacart. With the data we had gathered from customers' written reviews, customer support contacts, and survey results, I knew our lists of commonly reported experience highs and lows were well suited to this type of design pattern.

MVP / Quick Wins

Overview

While I knew that the ideal (post-MVP) state of ratings & reviews would require quite a bit of research and discovery work, I devised a plan with my team's product manager, engineering manager, and company leadership to knock out some quick wins to increase the number of ratings we could collect across platforms. Our engineers began adding more entry points to the current rating experience, as well as building the capability for web users to leave a rating.

Developments
  • Build and enable a push notification, sent to app users 1 hour after their parking reservation ends.
  • Build and enable an email, sent to web users 1 hour after their parking reservation ends, along with a web page that enables web ratings (a rough sketch of this trigger appears after this list).
  • Add a CTA on the user's Past Reservations screen to either leave a rating or view an existing one.
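
To make the trigger concrete, here is a minimal TypeScript sketch of how a post-reservation rating prompt could be scheduled. It is illustrative only: the Reservation type and the scheduleJob, sendPush, and sendRatingEmail helpers are hypothetical stand-ins, not SpotHero's actual services.

```typescript
// Hypothetical sketch of the post-reservation rating trigger described above.
// The Reservation type and the scheduleJob / sendPush / sendRatingEmail helpers
// are illustrative stand-ins, not SpotHero's actual infrastructure.

interface Reservation {
  id: string;
  userId: string;
  endTime: Date;
  bookedOn: "web" | "ios" | "android";
}

const ONE_HOUR_MS = 60 * 60 * 1000;

function scheduleRatingPrompt(
  reservation: Reservation,
  scheduleJob: (runAt: Date, job: () => void) => void,
  sendPush: (userId: string, reservationId: string) => void,
  sendRatingEmail: (userId: string, reservationId: string) => void,
): void {
  // Fire 1 hour after the reservation ends.
  const runAt = new Date(reservation.endTime.getTime() + ONE_HOUR_MS);

  scheduleJob(runAt, () => {
    if (reservation.bookedOn === "web") {
      // Web users receive an email linking to the new web rating page.
      sendRatingEmail(reservation.userId, reservation.id);
    } else {
      // App users receive a push notification deep-linking into the rating flow.
      sendPush(reservation.userId, reservation.id);
    }
  });
}
```
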
Rationale

By building out the capability to collect more ratings data across platforms before releasing the more robust quick-prompt iteration, we ensured that the data collected from the future iteration would be at a larger scale.

Visuals
Email entry points (web)
Push notifications (app)
CTAs on past reservations (app)

Impact

Adding more entry points to the ratings experience and allowing users to rate on web drove immediate, dramatic increases in the amount of feedback users provided.

  • The number of ratings with written comments increased by 97%.
  • Ratings as a percentage of rentals increased from 12% to 19% (exceeding the OKR before the ideal state was even released).
  • Within a week of launch, the web email entry point accounted for about 70% of all ratings with comments.

Exploration

For the ideal state, post-MVP

As the engineers were building the MVP / Quick wins, I was designing and testing concepts for the ideal state.

Research methods utilized
  • Concept validation testing
  • Unmoderated usability testing
  • Content test
  • Survey
Objective

Design, test and validate an ideal state for the ratings & reviews experience, leveraging user-provided input through comprehensive research and testing.

Concept Validation

During competitive analysis, I saw that several digital products used a smiley-face, emotion-based rating scale instead of a star rating scale. In addition, this concept was suggested frequently in meetings with other designers, product managers, and the data science team. Because it was becoming a topic of internal debate, I knew it was worth testing with a group of users.

Using Maze, I put out a quick concept validation test to 30 people from Maze's participant panel. After giving context and showing full screens of both concepts, I asked users which they preferred. I was surprised at the results: the star concept (Concept A) won by a landslide. I shared the findings internally and moved forward with stars for the final design.

Concept A – 90%

Ratings represented as a scale of 1-5 stars
Preferred by 27/30 users surveyed

Concept B – 10%

Ratings represented as an emotion emoji scale
Preferred by 3/30 users surveyed

The Next Iteration

Now that I had the customer data to inform the pre-filled prompts and had validated a concept to move forward with, it was time to design prototypes of the next iteration.

My goal was a minimalist design in keeping with current design trends, with an emoji associated with each prompt for quicker comprehension and a touch of modernity and delight.

Designs with Prompts

5-star rating

"What did you love?"

3- or 4-star rating

"What could have been better?"

1- or 2-star rating

"What went wrong?"

Usability Testing

Unmoderated Testing with Maze

Using Maze, I conducted an unmoderated usability test with a Figma prototype of the new ratings & reviews iteration, sent to 30 people from Maze's panel. I gave two scenarios, a positive experience and a negative experience, and asked users to demonstrate how they would go about leaving a rating.

Takeaways from Testing

Fortunately, usability testing didn't surface any problems with the design. Overall, the ease-of-use score was 4.8 out of 5.

When asked whether the experience looked familiar, around 77 percent of users said it did. In a free-form text question, I asked those who answered "yes" which other company's experience it resembled. To our delight, the products we had drawn inspiration from were among the most frequent answers.

Emoji Survey

Because there are so many emoji options, and different people interpret them in different ways, I wanted broader feedback; there's strength in numbers.

Using SurveyMonkey, I put out a survey to 60 participants, asking them to vote for the emoji that resonated most with them for each prompt.

Release + Impact

Release Schedule

The designs were polished and handed off for development in late April 2024, and went live on all platforms by early June 2024.

Measurable Impact

The impact of our solution has far surpassed our expectations. Because capturing qualitative data is so much quicker and easier with the pre-filled prompts, we have seen a tremendous increase in qualitative feedback accompanying star ratings. Our OKR was for 10% of all ratings to include qualitative feedback. As of August 2024, 29% of ratings include qualitative feedback, exceeding the OKR by 190% (29% vs. the 10% target). This figure counts both kinds of qualitative feedback: pre-filled prompt selections, written comments, or both. With this massive increase in qualitative feedback, we can glean insights and recommend actionable steps operators can take to improve drivers' parking experience at their facilities.

2.5%

Starting Point

Share of ratings including qualitative feedback data

10%

Goal

Share of ratings including actionable qualitative feedback data

29%

Actual Result

Share of ratings including qualitative feedback (in the form of written comments or selected prompts)

Next Steps

Measure and Learn

After collecting a quarter's worth of qualitative feedback data, I enlisted the Data Science team to help distill themes from the data and relay them as actionable steps operators could take to improve their facility's ratings.

The Data Science team devised a way to generate AI summaries of common issues reported by users in their written review comments. These summaries, combined with users' pre-filled prompt selections, allow a massive amount of review data to be distilled into concise, relevant feedback for parking facility operators.
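
As a rough illustration of how prompt selections could feed into that operator-facing feedback, here is a minimal TypeScript sketch that tallies the most frequently selected issue prompts from a facility's low-star reviews. The Review shape and the tallying approach are assumptions for illustration; they are not the Data Science team's actual pipeline, and the AI summarization step itself is omitted.

```typescript
// Hypothetical sketch: tally the most common issue prompts from a facility's
// low-star reviews, to pair with AI-generated summaries of written comments.
// The Review shape and the 3-star threshold are illustrative assumptions.

interface Review {
  facilityId: string;
  stars: number;             // 1-5
  selectedPrompts: string[]; // pre-filled prompts the user tapped
  comment?: string;          // optional written comment
}

function topIssuesForFacility(
  reviews: Review[],
  facilityId: string,
  limit = 3,
): string[] {
  const counts = new Map<string, number>();

  for (const review of reviews) {
    // Only consider this facility's lower ratings (3 stars and below).
    if (review.facilityId !== facilityId || review.stars > 3) continue;
    for (const prompt of review.selectedPrompts) {
      counts.set(prompt, (counts.get(prompt) ?? 0) + 1);
    }
  }

  // Return the most frequently reported issues, most common first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([prompt, count]) => `${prompt} (${count} reports)`);
}
```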

Now that we have reviewed the data points and devised a way to distill them into concise AI-generated summaries, I've handed off the learnings to the Operator Panel (B2B) product designers so they can design a solution that surfaces this data to operators in their dashboards.

The Operator Panel team is working through the following opportunities:

  • How might we relay this reviews data to operators so they can devise a plan of action for making changes?
  • How might we surface some of this user-provided feedback in the customer-facing search experience to help drivers make a decision about where to park?

Overall, this project was a great opportunity for me to execute several different research strategies from conception through post-release, and I'm proud of how user-centric and data-driven my process was throughout.

Interested in working together? Get in touch today.