
Qualitative data: uncomfortable, but worth it.

I’m an avid reader. I love books. But sometimes I struggle to choose which ones to read.

My first instinct is to look at the numbers: star ratings, page counts, and so on. But while these numbers help me compare books, they don’t help me understand why one book’s numbers are “better” than another’s. For that, I read reviews. Combining numbers with stories helps me make better decisions.

Designing online services is similar. It’s hard to do. It’s even harder to do without data.

Working with our partners at the Royal Canadian Mounted Police (RCMP), we’re prototyping and testing different tools to help people impacted by cybercrime report their experience to the RCMP. These prototypes don’t actually report to the police, but they mimic what a victim might see as they click through a potential reporting service.

At the end of this phase, we’ll look at the data with our partners to understand which features work well. The data may tell us that none of them work well, in which case we’d return to the drawing board. And that’s okay! (Data-informed decision making, woohoo!)

What we’re measuring

To help us decide if features work or not, we’re measuring two things:

  1. Clarity. Do people understand what they can use the service to do?
  2. Value. If the purpose of the service is clear, do people find it useful? Does it meet their needs?

Since “clarity” and “value” can be hard concepts to measure, we had to adapt our testing. In the early stages, we focused on gathering rich qualitative data. As we move forward, we’ll also gather quantitative data.

Now: qualitative stories

Early in Alpha, we ran small-scale, qualitative testing sessions.

During these sessions, we sat one-on-one with victims and potential victims of cybercrime. While they used the prototype, we asked questions about their experience.

Our first test let victims report a suspected scam with minimal effort. First, they entered one “identifier” tied to the scam (an email address, a phone number, or a website) into the prototype. In return, they learned how many other people had reported the same scam.
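To make that flow concrete, here’s a minimal sketch of the idea in Python. It isn’t the actual prototype code (that’s linked in the table below); the store of prior reports and the function name are made up for illustration.

```python
# Hypothetical sketch of the "identifier lookup" flow: a person reports one
# identifier tied to a suspected scam and learns how many others reported it.

# Pretend store of previously reported identifiers -> number of reports.
PRIOR_REPORTS = {
    "promo@example-scam.com": 12,
    "1-555-0199": 3,
    "https://example-scam.com": 7,
}


def report_identifier(identifier: str) -> int:
    """Record one new report and return how many people reported it before."""
    previous = PRIOR_REPORTS.get(identifier, 0)
    PRIOR_REPORTS[identifier] = previous + 1
    return previous


if __name__ == "__main__":
    seen_before = report_identifier("promo@example-scam.com")
    print(f"{seen_before} other people reported this identifier.")
```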

Clarity

To qualitatively evaluate “clarity”, we gauged people’s comfort with going through a flow, like the example above. Could people accurately explain what the tool was asking them to do? Did our expectations align with their reality? (It’s humbling when the answer is no.)

To figure that out, we asked questions like:

  • (On each page of a flow) “Can you describe what you see? What do you expect will happen next?”
  • (After moving to the next page of a flow) “Is this what you expected to see?”
  • (After going through the prototype) “Can you tell me what this website was for? Was there anything on this website that you found unclear or confusing?”

Value

Once a prototype was clear to participants, we started exploring whether it was valuable to them. To do this, we spoke with them about their personal experiences with cybercrime. Then, we discussed whether the feature would help meet their needs.

For example, we asked, “Did you find this helpful? Why or why not?”

What we’ve discovered so far

Here are links to each prototype we used, along with our findings after a few rounds of testing:

Date | Prototype | Clarity | Value
April 9, 2019 | No demo (code) | Low | n/a (couldn't determine due to low clarity)
April 11, 2019 | Demo 1 (code) | Medium (higher than the first version) | Mixed (where there was value, it came with caveats)
May 9, 2019 | Demo 2 (code) | Medium (mixed understanding of what was high and low urgency) | Medium to high

Sometimes what we put in front of participants wasn’t clear for them. For example, the first feature we prototyped was unclear for five out of the six people. But that doesn’t mean we failed. It means we were given an opportunity to learn a lot. It also means we got to improve the service in a risk-free environment before we offer it to the public.

When we tested the new prototype, clarity was higher, so we could assess value. Results were mixed: while some victims responded well to a low-effort reporting process, others expected to provide more details. Everybody appreciated the final screen, where we thanked them for their report.

Uncomfortable, but worth it

As a fan of structured data, I found diving into qualitative data uncomfortable at first. My head understands the black and white of numbers well, but stories are a little more grey.

By participating in these in-depth sessions, I saw the benefits of including a qualitative approach. While our evaluations of clarity and value were less quantitatively comparable, we could more confidently explain the why behind each participant’s experience.

(Big props to our team’s researcher, Mel, for developing our research activities, facilitating sessions, and ensuring the whole team can participate so we see first-hand the impact of our efforts.)

Next: quantitative numbers

While our qualitative approach allows us to understand the why, it’s a bit harder to directly compare prototypes. Numbers would complement the stories, so as we move forward, it’s time to quantify! (cue corny music)

In the weeks ahead, we’ll follow our format of starting small, clarifying the prototype, and then running more research. This time, we’ll go from small-scale qualitative testing (5–10 people) to large-scale quantitative testing (50–100 people). This will give us data to more confidently compare the usability of different features.
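As a rough illustration of the kind of comparison larger samples make possible, the sketch below computes a task-completion rate per prototype with a simple 95% confidence interval. The prototype names, numbers, and the normal-approximation interval are all assumptions for the example, not the team’s actual analysis plan.

```python
# Illustrative only: comparing two prototypes by task-completion rate,
# with a normal-approximation 95% confidence interval. All figures are made up.
import math


def completion_rate_ci(successes: int, participants: int, z: float = 1.96):
    """Return (rate, low, high) for a completion rate and its interval."""
    rate = successes / participants
    margin = z * math.sqrt(rate * (1 - rate) / participants)
    return rate, max(0.0, rate - margin), min(1.0, rate + margin)


for name, successes, participants in [("Demo 1", 34, 60), ("Demo 2", 47, 60)]:
    rate, low, high = completion_rate_ci(successes, participants)
    print(f"{name}: {rate:.0%} completed (95% CI {low:.0%}-{high:.0%})")
```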

It can be daunting to design a service that meets people’s needs, especially when the people you’re serving are in a vulnerable position, like suffering from the impacts of cybercrime. But we can increase our confidence by combining qualitative and quantitative data.

However uncomfortable it can be at first, this combination is worth the challenge.