
The Clinical Excellence Commission eChartbook

Data sources and methods

Collections
Details of the reviewed definitions of the indicators presented in the eChartbook portal are provided in the "Definitions" tab for each individual indicator. The data on which the eChartbook portal indicators are based is derived from a number of sources. Many of the Clinical Excellence Commission (CEC) related indicators were derived either directly from CEC project data or from data provided to the CEC by the NSW Ministry of Health (MoH). Some of the NSW MoH data collections were accessed through SAPHaRI/HOIST (SAPHaRI: Secure Analytics for Population Health Research and Intelligence; HOIST: Health Outcomes Information Statistical Toolkit), a SAS-based 'data warehouse' operated by the Centre for Epidemiology and Evidence of the NSW MoH. SAPHaRI/HOIST brings together most of the data collections used in population health surveillance in NSW and contains all the available historical data for each collection. It provides a common data analysis environment across the public health network in NSW and Health Statistics NSW. The main data collections used for eChartbook indicators are described briefly below.


Between the Flags (BTF)
Between the Flags (BTF) is one of the CEC's programs. Key Performance Indicators (KPIs) were added to the Local Health Districts' service agreements with the Ministry of Health (MoH) in July 2010 to evaluate the BTF Program. The two KPIs are the number of Rapid Response Calls per 1,000 separations and the number of Cardiorespiratory Arrest Calls per 1,000 separations. The numerators are collected monthly by the individual health facilities and reported through the Clinical Governance Units. The denominators are supplied by the MoH and are taken from the Health Information Exchange (HIE).
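Both KPIs are simple rates per 1,000 separations. The sketch below illustrates the arithmetic only; the function name and the monthly figures are invented for demonstration and are not CEC or MoH data.

def rate_per_1000_separations(event_count, separations):
    """Events per 1,000 hospital separations (e.g. Rapid Response Calls)."""
    if separations <= 0:
        raise ValueError("separations must be positive")
    return 1000.0 * event_count / separations

# Hypothetical monthly figures for one facility
rapid_response_calls = 42      # numerator, reported through the Clinical Governance Unit
cardiorespiratory_arrests = 3  # numerator
separations = 5600             # denominator, supplied by the MoH from the HIE

print(f"Rapid Response Calls/1000 separations: "
      f"{rate_per_1000_separations(rapid_response_calls, separations):.2f}")
print(f"Cardiorespiratory Arrest Calls/1000 separations: "
      f"{rate_per_1000_separations(cardiorespiratory_arrests, separations):.2f}")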


Collaborating Hospitals' Audit of Surgical Mortality (CHASM) Data Collection
The Collaborating Hospitals' Audit of Surgical Mortality (CHASM) is a systematic peer-review audit of deaths of patients in NSW who were under the care of a surgeon at some time during their hospital stay, regardless of whether an operation was performed. CHASM is funded by NSW Health, administered by the CEC and co-managed by the NSW State Committee of the Royal Australasian College of Surgeons (RACS). Its audit methodology is based on the Scottish Audit of Surgical Mortality, developed in 1994, and is similar to the other surgical mortality audits implemented in Australia under the Australian and New Zealand Audit of Surgical Mortality (ANZASM) framework.

Clinical audit managers (CAMs) or their equivalents at local health districts (LHDs) provide fortnightly or monthly notifications of surgical deaths to CHASM. CHASM then sends a surgical case form (SCF), a self-administered questionnaire, to the consultant surgeon to request information about the death. Consultant surgeons may also notify the CHASM office directly of deaths that have occurred under their clinical care by completing an SCF, available from the CHASM office and website and from the CAM or equivalent staff at LHDs. All patient, hospital and surgeon identifiers are removed from the completed SCF before the form is sent to a first-line assessor for review. The assessor is selected from the same surgical specialty as the treating consultant surgeon, but from a different LHD. The first-line assessor assesses the reported death from the information submitted on the de-identified SCF, then completes the assessment form and returns it to CHASM.

For cases that do not require further information, the audit findings are coded and entered in a database, and the notifying surgeon receives a confidential feedback letter from the CHASM committee on the outcome of the review. For cases where there is insufficient detail, or where potential deficiencies of care have been identified, a second-line assessment comprising a full review of the medical case notes is requested; at this stage anonymity is no longer feasible. The notifying surgeon receives confidential and privileged feedback from the CHASM committee, based on the assessor's comments. All second-line assessment reports are de-identified and distributed to CHASM committee members for noting.


Hand Hygiene
Hand hygiene data is maintained and analysed by the CEC. Auditing of hand hygiene compliance is used to evaluate hand hygiene behaviour within the workplace and to identify opportunities for education and promotion of correct hand hygiene. Audit data is submitted to the Clinical Excellence Commission and analysed three times a year. Hand hygiene compliance is measured at ward, hospital and local health district level by the Clinical Excellence Commission, and nationally by Hand Hygiene Australia. Data in the eChartbook is presented by the newly formed local health districts (LHDs). Auditing of hand hygiene compliance in NSW follows the 5 Moments for Hand Hygiene methodology developed by Hand Hygiene Australia, which is based on the World Health Organization program Save Lives: Clean Your Hands. Auditors undergo standardised face-to-face training and must attain a pass mark of 90 per cent to be validated to observe and audit hand hygiene behaviour. An observed Moment is recorded as compliant if hands are washed with soap and water or cleaned with an alcohol-based rub at one of the five identified Moments for hand hygiene, before or after patient contact.
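Compliance is reported as the proportion of observed Moments at which hand hygiene was performed correctly. A minimal sketch of that calculation follows; the counts are invented examples, not audit data.

def compliance_per_cent(compliant_moments, observed_moments):
    """Per cent of observed Moments with a correct soap-and-water wash or alcohol rub."""
    if observed_moments <= 0:
        raise ValueError("observed_moments must be positive")
    return 100.0 * compliant_moments / observed_moments

# Hypothetical counts for one ward in one audit period
observed = 320   # 5-Moments opportunities observed by a validated auditor
compliant = 264  # Moments where hand hygiene was performed in accordance with the Moment

print(f"Hand hygiene compliance: {compliance_per_cent(compliant, observed):.1f} per cent")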


Healthcare Associated Infection (HAI) Data
The data for central line associated bloodstream (CLAB) infections in intensive care units (ICU), Staphylococcus aureus (S. aureus) bloodstream (SAB) infections, methicillin-resistant S. aureus (MRSA) and methicillin-sensitive S. aureus (MSSA) are calculated from the NSW Healthcare Associated Infections (HAI) Data Collection. HAI data was forwarded to the CEC by the NSW MoH. All NSW public hospitals collect the HAI-related data according to a standard manual (Healthcare Associated Infections: Clinical Indicator Manual, Version 2; http://www.health.nsw.gov.au/quality/hai/tp_surveillance.asp). The manual outlines standardised case definitions, denominators and reporting requirements to ensure consistency across all areas and facilities. Definitions are consistent with those recommended by national and international authorities. Denominators vary across indicators: in the near future some (hospital separations or occupied bed days) will be obtained routinely by the NSW MoH, while others (central-line and ICU bed days; number of surgical procedures) will be collected by each facility.
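HAI indicators pair numerators with different denominators, such as central-line days for CLAB in ICU or occupied bed days for SAB. The sketch below shows the general form of such rates; the scaling factors and counts are assumptions used for illustration, not definitions from the Clinical Indicator Manual.

def rate(events, denominator_days, per):
    """Events per `per` denominator days (e.g. central-line days or occupied bed days)."""
    if denominator_days <= 0:
        raise ValueError("denominator days must be positive")
    return per * events / denominator_days

# Hypothetical figures for one reporting period
clab_rate = rate(events=2, denominator_days=1450, per=1000)    # per 1,000 central-line days
sab_rate = rate(events=5, denominator_days=68000, per=10000)   # per 10,000 occupied bed days

print(f"CLAB: {clab_rate:.2f} per 1,000 central-line days")
print(f"SAB: {sab_rate:.2f} per 10,000 occupied bed days")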


Incident Information Management System (IIMS)
Data for the Incident Information Management System (IIMS) is jointly maintained by the CEC and NSW Health and analysed by the CEC. The objective of IIMS is to provide an electronic system that records all healthcare incidents, adverse events and near-misses, in four categories:

  •  clinical (patients)
  •  complaints
  •  staff/visitor/contractor (occupational health and safety)
  •  property/hazard/security.

This data assists managers to manage the incidents that occur in their area, records the results of reviews or investigations of incidents, and provides reports on all incidents recorded in the system. The system was deployed to each of the former area health services in November 2004 and collects information from the 17 local health districts within NSW (including Justice Health, The Children's Hospital Network, St. Vincent's Hospital Network and Ambulance Service NSW) for clinical incidents that occurred in the period under review.

 

The Quality Systems Assessment (QSA) Survey 2011
The Quality Systems Assessment (QSA) is a clinical risk management program with a focus on learning and improvement. It is a flagship CEC program and a key component of the NSW Patient Safety and Clinical Quality Program (PSCQP). The QSA has been developed to provide clinicians and managers with a convenient and accurate means of determining compliance with policy and standards, identifying clinical risks and deficiencies in practice, and highlighting and sharing exemplary practice relating to clinical quality and patient safety. The fourth statewide QSA survey, conducted in 2011, focused on sepsis, delirium, mental health and paediatrics. The QSA 2011 self-assessment survey was undertaken by over 1,500 respondents across, and at various levels of, the health system, with an overall response rate of 99 per cent. All clinical departments, units and facilities are expected to participate in the QSA. They are also expected to involve as many members of the management or clinical team as possible in formulating responses, to provide a balanced risk assessment. In 2011, 79 per cent of respondent units at the clinical unit level involved more than one person in completing the self-assessment, an increase from 51 per cent in 2010.


Special Committee Investigating Deaths Under Anaesthesia (SCIDUA) Data Collection
The Special Committee Investigating Deaths Under Anaesthesia (SCIDUA) is an expert committee appointed by the Minister for Health. From 1 September 2012, the NSW Public Health Act 2010 requires the health practitioner responsible for the administration of an anaesthetic or sedative drug to report the death to the Director General via SCIDUA where the patient died while under, as a result of, or within 24 hours after the administration of an anaesthetic or sedative drug for a medical, surgical or dental operation or procedure. All reported deaths are individually reviewed by a two- or three-member triage sub-committee, which can either classify the death as due to factors not falling under the control of the health practitioner or request further information from the reporting practitioner, using the SCIDUA questionnaire.


Sepsis
Sepsis is a worldwide public health issue which claims thousands of lives each year. Appropriate recognition and timely management of patients with severe infection and sepsis is a significant problem in NSW hospitals and in health care organisations around the world. The CEC's Sepsis Kills program is based on quality and safety principles and provides a framework for collaborating with clinicians to improve the recognition and treatment of sepsis. The goal is to reduce preventable harm to patients with sepsis through early recognition and prompt treatment. The Sepsis Kills program has been successfully implemented in Level 3-6 emergency departments across NSW since May 2011. The key measures are the time taken from triage to the start of the first dose of intravenous antibiotics, and the time taken to commence the second litre of intravenous fluid (adults) or to complete the first 20 mL/kg fluid bolus (paediatrics). The CEC's web-based Sepsis Data Collection enables real-time review of sepsis data at hospital, local health district and NSW statewide level.
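The key measures are elapsed times from triage. The sketch below shows how such intervals can be derived from timestamps; the variable names and times are invented and do not reflect the structure of the CEC Sepsis Data Collection.

from datetime import datetime

def minutes_between(start, end):
    """Elapsed minutes from triage to a treatment milestone."""
    return (end - start).total_seconds() / 60.0

# Hypothetical timestamps for one adult presentation
triage_time = datetime(2013, 4, 2, 14, 5)             # ED triage
first_antibiotic_time = datetime(2013, 4, 2, 14, 52)  # first dose of IV antibiotics started
second_litre_time = datetime(2013, 4, 2, 15, 20)      # second litre of IV fluid commenced

print(f"Triage to first IV antibiotic: {minutes_between(triage_time, first_antibiotic_time):.0f} minutes")
print(f"Triage to second litre of fluid: {minutes_between(triage_time, second_litre_time):.0f} minutes")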


Age-adjusted Rates
Where indicators have been standardised for age and sex, the direct standardisation method has been used. This method adjusts for the effects of differences in the age composition of populations across time or geographic regions. The directly age-standardised rate is the weighted sum of the age-specific (five-year age group) rates, where the weighting factor is the proportion of the standard population in the corresponding age group. For this report, the Australian estimated resident population (persons) as at 30 June 2001 was used as the standard population. The same standard population was used for males and females, to allow valid comparison of age-standardised rates by sex. Ninety-five per cent confidence limits around the directly standardised rates were calculated using the method described by Dobson et al. (1991). Where an indicator relates to mortality following a particular event or procedure, the standard population used is a relevant population experiencing the event or procedure. Direct standardisation is also used for Aboriginal populations, as described by the AIHW and ABS (AIHW, 2011d).
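As a rough illustration of the method, the sketch below computes a directly age-standardised rate as a weighted sum of age-specific rates and applies the Dobson et al. (1991) approach of shifting exact Poisson limits for the total count onto the standardised scale. The age groups, counts and populations are invented; in practice the weights come from the Australian estimated resident population at 30 June 2001, by five-year age group.

from scipy.stats import chi2

def direct_standardised_rate(deaths, study_pop, standard_pop, alpha=0.05, per=100_000):
    """Directly age-standardised rate (per `per` population) with Dobson 95% confidence limits.

    deaths       -- events in each age group of the study population
    study_pop    -- population (or person-years) in each age group of the study population
    standard_pop -- standard population counts for the same age groups
    """
    total_std = sum(standard_pop)
    weights = [s / total_std for s in standard_pop]

    # Weighted sum of age-specific rates and its variance (Poisson assumption)
    asr = sum(w * d / n for w, d, n in zip(weights, deaths, study_pop))
    var = sum(w * w * d / (n * n) for w, d, n in zip(weights, deaths, study_pop))

    # Dobson et al.: shift exact Poisson limits for the total count onto the ASR scale
    d_total = sum(deaths)
    d_lower = chi2.ppf(alpha / 2, 2 * d_total) / 2 if d_total > 0 else 0.0
    d_upper = chi2.ppf(1 - alpha / 2, 2 * (d_total + 1)) / 2
    scale = (var / d_total) ** 0.5 if d_total > 0 else 0.0
    lower = asr + scale * (d_lower - d_total)
    upper = asr + scale * (d_upper - d_total)
    return asr * per, lower * per, upper * per

# Hypothetical three-age-group example (real indicators use five-year age groups)
deaths = [4, 12, 30]
study_pop = [52_000, 41_000, 18_000]
standard_pop = [1_300_000, 1_100_000, 600_000]

rate, lo, hi = direct_standardised_rate(deaths, study_pop, standard_pop)
print(f"Age-standardised rate: {rate:.1f} per 100,000 (95% CI {lo:.1f}-{hi:.1f})")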


Small numbers
Although directly standardised rates can summarise trends across strata, the method is considered statistically unreliable when based on fewer than 20 events in any one age group. When rates are based on only a few cases or deaths, it is almost impossible to distinguish random fluctuation from true changes in the underlying risk of disease or injury. For this reason, it is important to know the count of the underlying events from which the rates or percentages were derived. If the number of events is small, the 95 per cent confidence intervals of the rates become very wide and the rates should be interpreted with caution. Comparisons over time or between communities that are based on unstable rates can lead to spurious conclusions about differences in risk. Standard approaches to this problem include using larger geographic catchments, aggregating and averaging data over several years, or omitting the data. In some places in the eChartbook the CEC has adopted these approaches, but where confidence intervals remain wide, caution should be exercised when comparing annual LHD (or other) data.
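The toy example below illustrates the point: two populations with the same underlying crude rate, one observed with 3 events and one with 30, give very different 95 per cent confidence intervals (exact Poisson limits on the count, converted to a rate per 100,000). The counts and populations are invented for demonstration only.

from scipy.stats import chi2

def poisson_rate_ci(events, population, alpha=0.05, per=100_000):
    """Crude rate per `per` population with an exact Poisson confidence interval on the count."""
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return tuple(x / population * per for x in (events, lower, upper))

# Same underlying rate (7.5 per 100,000) observed with few versus many events
for events, population in [(3, 40_000), (30, 400_000)]:
    rate, lo, hi = poisson_rate_ci(events, population)
    print(f"{events:>2} events: {rate:.1f} per 100,000 (95% CI {lo:.1f}-{hi:.1f})")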