Stripe 4 - Continuous learning

In a learning environment, the next step is to ask what has been learned and what can now be improved.

At about 12-18 months after start-up, review the discovery phase data and repeat the evaluations to understand the current state.

Once an improvement project is completed and the improvement has been sustained and spread, it is the responsibility of the team to continually monitor its ongoing success. Measurement, reporting and evaluation ensure that clinical practice changes are being carried out and provide a source of feedback and learning.

What is a Family of Measures?

One measure alone is insufficient to determine if improvement has occurred. You are advised to include one or two measures from each of the following three categories.

It is important to define the numerator and denominator and provide an operational definition for each measure to ensure data consistency (see the example on outcome measures below).
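For illustration only, here is a minimal sketch in Python (made-up figures and an assumed operational definition, borrowing the SAQ completion process measure listed further below) of how a numerator, denominator and operational definition fit together:

  # Hypothetical measure: "% of team members who completed the SAQ".
  # Assumed operational definition: completed SAQs returned during the
  # review period, divided by the number of team members employed in the
  # unit over the same period, expressed as a percentage.

  saq_completed = 18   # numerator (made-up figure)
  team_members = 24    # denominator (made-up figure)

  percent_completed = (saq_completed / team_members) * 100
  print(f"SAQ completion: {percent_completed:.1f}%")  # prints 75.0%

Writing the definition down next to the calculation helps everyone collect and report the measure in the same way.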

The outcome, process and balancing measures provided are examples and can be adapted to your context.

What should you consider before collecting data?

Think critically about the data you collect: how much is needed, where it will be recorded and who can assist.

Tips
  • Review any baseline or historical data on the performance of the process to be improved
  • Agree on what should be measured – this includes who will collect the data for each measure, and when, where and how it will be collected
  • Determine the most efficient way to access and collect the data
  • Consider how useful the data will be and how you will present it (don't collect unnecessary data that won't be used)
  • Decide where to record the data and how it will be accessed by the team (for example, a spreadsheet or, preferably, QIDS)
  • Consider assigning responsibility for data collection for each measure to individual team members
  • Make sure to speak with staff and patients to hear about their experiences whilst you are testing
  • Continue collecting data after the project to check that the improvements are sustained.

Outcome measures are closely aligned with your aim statement or the total impact you are trying to achieve. They relate to how the overall process or system is performing.

Examples:

  • Baseline and 12-month SAQ
  • Reduced incidents, especially those related to communication breakdown
  • Executive satisfaction
  • Staff retention / satisfaction.

Process measures track whether the parts or steps in the process are performing as planned. They are logically linked to achieving the intended outcome or aim.

Examples:

  • % of team members who completed the SAQ
  • Number of Safety Fundamentals commenced
  • Number of action plan recommendations completed within the suggested timeframe
  • Number of staff trained in improvement science.

Balancing measures look at the system from different directions or dimensions. They determine whether changes designed to improve one part of the system are causing new problems in another part of the system.

Example:

If the aim is to reduce inpatient length of stay, then balancing measures would be to:

  • Make sure readmission rates do not increase
  • Ensure patient experience of care is not negatively affected.

The key to data collection is not quantity. Rather than collecting a large sample, make sure the data is project-specific and collected continuously so it is meaningful to present.

You need to collect enough data to understand whether the changes you are making are resulting in an improvement – too little data and you won't be able to see improvement; too much is an over-investment of time and resources. It is recommended that the data you collect is either consecutive (for example, the first five patients) or random. Speak to your local quality improvement advisor about how much data to collect.
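As a small sketch of the two sampling approaches mentioned above (Python standard library only, with a hypothetical list of record identifiers):

  import random

  # Hypothetical patient record identifiers for one week (made up).
  records = [f"MRN-{n:04d}" for n in range(1, 41)]

  consecutive_sample = records[:5]           # e.g. the first five patients
  random_sample = random.sample(records, 5)  # five records chosen at random

  print("Consecutive:", consecutive_sample)
  print("Random:", random_sample)

Either approach is acceptable; the important point is to agree on the method in advance and apply it consistently.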

How do you make sense of and present your data?

Once data has been collected and entered in a spreadsheet or QIDS, you need to interpret the data in a meaningful way to determine if an improvement has occurred. QIDS has the functionality for you to easily build different charts suitable for your improvement project.

Run charts are line graphs showing data over time. Run charts are an effective tool to tell the project story and communicate the project’s achievements with stakeholders. Run charts illustrate what progress has occurred, what impact the changes are having and ultimately, if improvement is happening. Including annotations in your run chart will help to show when change ideas have been tested and may be associated with an improvement.
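As an illustrative sketch only (made-up weekly figures; Python with the matplotlib library assumed to be available; QIDS remains the preferred tool), an annotated run chart might be built like this:

  import matplotlib.pyplot as plt
  from statistics import median

  # Made-up weekly values for a hypothetical process measure
  # (e.g. % of SAQs completed each week).
  weeks = list(range(1, 13))
  values = [52, 55, 50, 58, 61, 60, 66, 70, 68, 72, 75, 74]

  plt.plot(weeks, values, marker="o")                    # the run chart line
  plt.axhline(median(values), linestyle="--", label="Median")
  plt.annotate("Change idea tested", xy=(5, values[4]),  # mark when a change
               xytext=(5, values[4] + 8),                # idea was introduced
               arrowprops={"arrowstyle": "->"})
  plt.xlabel("Week")
  plt.ylabel("Measure value (%)")
  plt.title("Run chart (illustrative data only)")
  plt.legend()
  plt.show()

The annotation marks the point at which a change idea was tested, making it easier to link any movement in the data to that change.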

There are specific rules for interpreting run charts, which can be found on the CEC Academy webpages. Your local QI advisor may be able to assist with the display and analysis of data.

There are a number of other charts (for example, histograms and SPC charts) that can be used to present your data. Visit the CEC Academy webpages for more information.

Determining whether improvement has really happened, and whether it is lasting, requires observing patterns over time. Probability-based rules help to detect non-random evidence of change.
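As one example of such a rule, here is a simple sketch (made-up data, Python) of the commonly cited "shift" signal, where six or more consecutive points on the same side of the median suggest non-random change; refer to the CEC Academy guidance for the exact rules to apply:

  from statistics import median

  def longest_run_one_side(values):
      """Longest run of consecutive points strictly above or strictly
      below the median (points sitting on the median are skipped)."""
      centre = median(values)
      longest = current = 0
      previous_side = None
      for v in values:
          if v == centre:
              continue  # points on the median neither break nor extend a run
          side = v > centre
          current = current + 1 if side == previous_side else 1
          previous_side = side
          longest = max(longest, current)
      return longest

  # Made-up data: the last six points sit above the median,
  # which would signal a possible shift under this rule.
  data = [52, 55, 50, 58, 49, 53, 66, 70, 68, 72, 75, 74]
  print(longest_run_one_side(data) >= 6)  # True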

For more information on types of data, minimum data points and the probability-based rules, visit the CEC Academy webpages. It is recommended that you contact your local quality improvement advisor for assistance.