
Developing an evaluation framework for product and service delivery

If you’re someone who follows or works with CDS, you’re probably interested in how we evaluate the success of our products and partnerships.

Evaluating our evaluation

We’ve been evaluating our delivery work since day one of CDS. Through trial and error, we have reached a point where we are beginning to feel confident that we have an effective approach to evaluation.

Initially, we were trying to measure the same things across all our products, so that we could compare them. We wanted to know the total number of users, completion rates, user satisfaction levels, and cost per transaction over certain time periods.

These are reasonable key performance indicators but, in practice, we often went long periods before the data became available for use. The comparisons were forced, because not all products are alike at the feature level. Fundamentally, these measurements weren’t telling us how well we were meeting user needs.

So we evaluated our evaluation. We made some changes that are beginning to pay off in showing whether what we deliver meets user needs and fulfils our mandate.

Consistent, not uniform

Now, instead of a very specific and uniform set of measurements, we’re working with 4 broad categories:

  • Outcomes - are we making people’s lives better?
  • Capacity building - are departments using new principles, technologies and techniques?
  • Product performance - is the product working as we expect?
  • Delivery - how are we using our resources to make the thing?

Every delivery team is required to track and report on each of these categories on a regular basis.

Which metrics each team uses is their decision. Teams base their choices on factors like what the service is, what the user needs are, what we are doing to make sure the service better meets those needs, and who we are working with in the partner department.

So, for example, an outcome metric might be that we helped 56,000 more people get a benefit, resulting in $4.5 million for the people who need it; or that we reduced application processing times by 75%, resulting in faster turnaround for applicants. A capacity building metric might be that we helped a department set up a multidisciplinary delivery team, or that a partner department made 80% of the pull requests by the end of their partnership with CDS.

Product performance metrics can be things like 300 users per day, or a page’s time to interactive staying within 70 milliseconds. And delivery metrics could be that a team’s velocity increased by 12% from sprint to sprint, or that we reduced the cost of a Beta phase by $15,000 over the same period.
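To make this a little more concrete, here is a minimal sketch, in Python, of how a single team’s evaluation plan could be recorded and reported against the four categories. The category names come from the framework above; the metric names, targets and figures are hypothetical illustrations, not a CDS template.

    # Hypothetical sketch of one team's evaluation plan.
    # Category names come from the framework above; metric names,
    # targets and observed values are illustrative only.
    evaluation_plan = {
        "Outcomes": [
            {"metric": "additional people receiving the benefit",
             "target": 50_000, "observed": 56_000},
        ],
        "Capacity building": [
            {"metric": "share of pull requests made by the partner department",
             "target": 0.80, "observed": 0.80},
        ],
        "Product performance": [
            {"metric": "users per day",
             "target": 250, "observed": 300},
        ],
        "Delivery": [
            {"metric": "reduction in Beta phase cost ($)",
             "target": 10_000, "observed": 15_000},
        ],
    }

    def report(plan):
        """Print one line per metric, grouped by category."""
        for category, metrics in plan.items():
            print(category)
            for m in metrics:
                status = "on track" if m["observed"] >= m["target"] else "needs attention"
                print(f"  {m['metric']}: {m['observed']} (target {m['target']}) - {status}")

    report(evaluation_plan)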

Whatever measurements are chosen, they need to be actionable. That is, they need to help us make decisions, understand our progress, and demonstrate our return on investment. Getting that right takes experimentation.

Matching metrics to delivery phases

Evaluation data becomes available at different stages of delivery, and we expect teams to iterate on their evaluation plans.

In Discovery research, the delivery team is exploring what outcomes the people who use a service need, but it can begin tracking the cost of delivery from day one.

In Alpha, they can begin tracking what partners need to be successful and explore issues with the product’s performance.

In Beta, the outcome metrics can start to become available and the team can start benchmarking across all four categories. Then, once the product is live, the metrics can be used to determine what routine operation looks like and how to run the product efficiently.
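As a rough sketch of that sequencing, a team could note when it expects each category of metric to start being tracked. The phase and category names are from this post; the mapping itself is a simplified illustration, not an official CDS artefact.

    # Hypothetical sketch: when each evaluation category typically starts
    # being tracked, based on the delivery phases described above.
    tracking_starts = {
        "Discovery": ["Delivery"],  # cost of delivery from day one
        "Alpha": ["Delivery", "Capacity building", "Product performance"],
        "Beta": ["Delivery", "Capacity building", "Product performance", "Outcomes"],
        "Live": ["Delivery", "Capacity building", "Product performance", "Outcomes"],
    }

    def categories_for(phase):
        """Return the categories a team would expect to track in a given phase."""
        return tracking_starts.get(phase, [])

    print(categories_for("Beta"))
    # ['Delivery', 'Capacity building', 'Product performance', 'Outcomes']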

Exploring evaluation goals, selecting measurements and developing a plan is something we empower delivery teams to do. We expect evaluation to be part of every day, sprint and phase. There are roles, like researchers and analysts, that specialise in evaluation, but we also expect every member of the team, from the researcher to the service owner, to play a part in conducting and acting on evaluation.

Getting better at evaluation

We’re rolling this evaluation framework out across our product and service delivery. We’ve developed guidance and templates to support the framework, which we’re testing with CDS teams and our partners. We plan to iterate based on feedback, and then release these assets for use and further improvement by others beyond CDS.

We also plan to share evaluation plans and results publicly with increasing regularity. For an example of how this looks in practice, you can read about the approach taken by the team making it easier for Veterans to find benefits.

Working in the open is important to CDS, to our partners, and to the Government of Canada’s work to change the way it delivers services so it can serve people better.