Data for improvement
When testing your change ideas with PDSA cycles, you need to collect data to determine whether the changes you make have resulted in an improvement. This data is collected in 'real time' rather than retrospectively. Most of it will be quantitative, but qualitative data can be equally informative.
What data do you need to collect?
One measure alone is insufficient to determine if improvement has occurred. You are advised to include one or two measures from each of outcome, process and balancing measures (together known as the Family of Measures).
Outcome measures are closely aligned with your aim statement or the overall impact you are trying to achieve. They relate to how the overall process or system is performing.
For example, if your aim is to reduce, within 12 months, acute readmissions for intentional self-harm to the same acute psychiatric inpatient unit, your outcome measure could be:
- Rate of acute readmissions of intentional self-harm within 28 days of separation from your unit for patients on a suicide prevention pathway.
You should also define the numerator and denominator and provide an operational definition for each measure to ensure data consistency. For example -
- Numerator: Overnight separations from your unit that are followed by an overnight readmission of intentional self-harm to the same acute psychiatric inpatient unit within 28 days
- Denominator: Number of overnight separations from your unit
- Operational definitions:
- Each admission can only have one readmission within 28 days for the reporting period. Any subsequent readmission within the reporting period is only counted as a readmission against the admission immediately preceding it.
- Intentional self-harm:
- A principal diagnosis in the ICD-10-AM range S00–T75, T79 (Injury, poisoning and certain other consequences of external causes)
- The first reported external cause code in the record in the ICD-10-AM range X60–X84, Y87.0 (external causes of morbidity) - AIHW
- Limitations/considerations when interpreting:
- The variation may change dramatically if your unit/facility has a small number of separations.
- Accuracy relies on the clinical documentation of the diagnosis.
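The numerator/denominator pairing rule above can be sketched in code. This Python example uses hypothetical field names (`admission_date`, `separation_date`, `self_harm`) for a unit's overnight episodes sorted by admission date; a real extract would also apply the ICD-10-AM diagnosis and external cause code filters, which are omitted here.

```python
from datetime import date

def readmission_rate(episodes, window_days=28):
    """28-day self-harm readmission rate for one unit (a sketch).

    episodes: overnight episodes for the unit, sorted by admission date,
    each a dict with 'admission_date', 'separation_date' (datetime.date)
    and 'self_harm' (True if the admission met the intentional self-harm
    criteria). Per the operational definition, a readmission is counted
    only against the immediately preceding separation, so each separation
    contributes at most one readmission.
    """
    numerator = 0
    for prev, curr in zip(episodes, episodes[1:]):
        gap = (curr["admission_date"] - prev["separation_date"]).days
        if curr["self_harm"] and 0 <= gap <= window_days:
            numerator += 1
    denominator = len(episodes)  # all overnight separations from the unit
    return numerator / denominator if denominator else 0.0

# Illustrative data: the second admission is a readmission (15 days after
# separation); the third falls outside the 28-day window.
episodes = [
    {"admission_date": date(2024, 1, 1),  "separation_date": date(2024, 1, 5),  "self_harm": True},
    {"admission_date": date(2024, 1, 20), "separation_date": date(2024, 1, 25), "self_harm": True},
    {"admission_date": date(2024, 3, 1),  "separation_date": date(2024, 3, 4),  "self_harm": True},
]
rate = readmission_rate(episodes)  # 1 readmission / 3 separations
```

The pairwise loop enforces the "counted only against the admission immediately preceding it" rule automatically, since each episode is compared with exactly one predecessor.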
Manual data collection, such as clinical audits, may be required. Consider sample size – how much data do you need to collect? Please seek further advice and support from your local Clinical Governance Unit or Clinical Informatics Department.
Process measures are the parts or steps performed in the process. They are logically linked to achieve the intended outcome or aim. For example, if your aim is to improve implementation of the suicide prevention pathway, your process measures could be:
- Proportion of consumers identified for a suicide prevention pathway
- Proportion of consumers placed on a suicide prevention pathway
- Proportion of staff trained on the suicide prevention pathway.
Balancing measures look at the system from different directions or dimensions. They determine whether changes designed to improve one part of the system are causing new problems in another part of the system. For example, if your aim is to reduce readmission rate, these could be:
- Average length of stay
- Consumer satisfaction – e.g. YES survey
- Staff satisfaction – e.g. Net Promoter Score (in development)
- Team/Service safety culture – e.g. SAQ, Team Psychological Safety Tool.
The Suicide Prevention Quality Improvement Framework provides further examples of outcome, process and balancing measures that can be adapted to your context.
What should you consider before collecting data?
Before commencing PDSA cycles, you should:
- Consider consulting your QI advisor before starting the data collection process
- Review any baseline or existing data on the performance of the process to be improved - the Institute for Healthcare Improvement suggests conducting a baseline audit on 30 patients for the measure you want to improve, prior to implementing change ideas (you may have already done this when collecting baseline data for your case for change)
- Agree upon what should be measured – this includes the who, when, where and how the data will be collected for each measure
- Determine the most efficient way to access and collect the data
- Consider how useful the data will be and how you will present it (don't collect unnecessary data that won't be used)
- Decide where to record data and how it will be accessed by the team (for example, spreadsheet, QIDS)
- Consider assigning responsibility to individual team members for data collection for each measure
- Plan to continue collecting data after the project to check that the improvements are sustained.
The key to data collection is not quantity. Rather than collecting a large sample, make sure the data is project-specific and collected continuously, so it reflects current performance.
You need to collect enough data to understand whether the changes you are making are resulting in an improvement – too little data and you won't be able to see improvement; too much is an over-investment of time and resources.
As a minimum it is recommended you collect between five and ten data points each week (for example, collect data on 5 to 10 consumers). This will vary depending on the size of your health service and the frequency of the problem.
Regardless, it is recommended that the data you collect is either consecutive (for example, the first 5 consumers) or random. Speak to your local quality improvement advisor about how much data to collect.
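The two sampling approaches can be sketched as follows. This Python example uses illustrative names: `consecutive` takes the first n consumers in order of presentation, `random` takes a simple random sample without replacement.

```python
import random

def weekly_audit_sample(consumers, n=5, method="consecutive", seed=None):
    """Select a weekly audit sample of n consumers (5 to 10 recommended).

    'consecutive' returns the first n in order of presentation;
    'random' returns a simple random sample without replacement.
    """
    if method == "consecutive":
        return consumers[:n]
    if method == "random":
        return random.Random(seed).sample(consumers, min(n, len(consumers)))
    raise ValueError(f"unknown method: {method}")

week = [f"consumer_{i}" for i in range(1, 21)]  # 20 consumers seen this week
first_five = weekly_audit_sample(week, n=5)     # first 5 consumers seen
random_five = weekly_audit_sample(week, n=5, method="random", seed=1)
```

Either approach avoids the selection bias that comes from hand-picking which consumers to audit.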
How do you make sense of and present your data?
Once data has been collected and entered in a spreadsheet or QIDS, you need to interpret the data in a meaningful way to determine if an improvement has occurred. In QIDS you will be able to easily build different charts suited to your improvement project. The Suicide Prevention Quality Improvement Framework provides further examples on what type of charts you should consider for your project.
Run charts are line graphs showing data over time. Run charts are an effective tool to tell the project story and communicate the project's achievements with stakeholders. Run charts illustrate what progress has occurred, what impact the changes are having and ultimately, if improvement is happening. Including annotations in your run chart will help to show when change ideas have been tested and may be associated with an improvement.
There are specific rules for interpreting run charts, which can be found via the CEC Academy. Your local QI advisor may be able to assist with the display and analysis of data.
Determining if improvement has really happened and if it is lasting requires observing patterns over time. Probability-based rules are helpful to detect non-random evidence of change.
For more information on types of data, minimum data points and the probability-based rules, visit the CEC Academy. It is recommended that you contact your local quality advisor for assistance.
For example, if you are using a run chart, an improvement is considered reliable when six consecutive data points are at or above 95% – that is, compliance with the new process occurs at least 95% of the time.
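As a minimal sketch, this rule can be checked programmatically. The threshold, run length and "at or above" reading below are taken from the rule as stated here; confirm the full set of probability-based rules with the CEC Academy before relying on it.

```python
def improvement_sustained(compliance, threshold=0.95, run_length=6):
    """True if any run of `run_length` consecutive points is at or above
    `threshold` – e.g. six consecutive weeks of >= 95% compliance."""
    streak = 0
    for value in compliance:
        streak = streak + 1 if value >= threshold else 0
        if streak >= run_length:
            return True
    return False

# Weekly compliance with the new process, as proportions:
sustained = improvement_sustained(
    [0.90, 0.96, 0.97, 0.95, 0.98, 0.96, 0.95, 0.99]
)  # six consecutive points at or above 0.95 -> True
```

A single dip below the threshold resets the streak, so isolated good weeks are not mistaken for a sustained improvement.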