The performance of hospitals is again in the news, thanks to the Australian Institute of Health and Welfare’s release today of Australian Hospital Statistics 2009-2010. You can download the full report here, and the Institute’s own summary is reproduced at the bottom of this post.
It seems, on an admittedly quick reading, that the bulk of the report’s focus is on throughput and process, rather than safety and quality of care, or patient outcomes. I don’t know about you, but if I were contemplating elective surgery, I’d be at least as interested in my chances of picking up an infection in hospital as in how long I’d have to wait for the operation.
Meanwhile, Philip Davies, Professor of Health Systems and Policy at the University of Queensland, warns against reading too much into the measure that has already popped up in newspaper headlines – elective surgery waiting times.
***
Let’s move on from our obsession with surgery waiting times
Philip Davies writes:
Today sees the publication of AIHW’s latest report on Australian Hospital Statistics. Doubtless it will trigger another round of journalistic hand-wringing and governmental self-congratulation as we try to figure out how well our public hospitals are performing.
One statistic that will inevitably be the focus of attention is the increase in median waiting times for elective surgery from 32 days in 2005–06 to 36 days in 2009–10.
According to Adam Cresswell, writing in today’s Australian, that’s “putting the effectiveness of the federal government’s hospital rescue measures under renewed scrutiny”. Or does it merely confirm, as a spokesman for Minister Roxon put it in a quote Cresswell reports, that “hospitals are targeting those patients who have been waiting the longest”?
It’s encouraging to see that the debate about access to elective surgery has moved from focusing on the size of the waiting list to considering how long people have to wait. The size of a waiting list is irrelevant unless we know the rate at which people are joining and leaving it.
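That point about joining and leaving rates can be made concrete with a back-of-the-envelope steady-state calculation (a simplified illustration using Little’s law, not figures from the report): the expected wait is the list size divided by the rate at which patients leave the list, so two hospitals with identical waiting lists can have very different waits.

```python
# Illustrative only: a simplified steady-state queueing model (Little's law),
# not data from the AIHW report. The hospital names and numbers are invented.

def expected_wait_days(list_size, completions_per_day):
    """Average wait = queue length / throughput (Little's law)."""
    return list_size / completions_per_day

# Two hypothetical hospitals with the same waiting-list size but
# different rates at which patients leave the list:
hospital_a = expected_wait_days(list_size=500, completions_per_day=25)
hospital_b = expected_wait_days(list_size=500, completions_per_day=5)

print(hospital_a)  # 20.0 days
print(hospital_b)  # 100.0 days
```

Same list size, a five-fold difference in waits — which is why list size on its own tells us so little.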
But is our apparent obsession with waiting times really any better?
It’s long been acknowledged that the length of time someone will have to wait for surgery affects the likelihood that they will join a waiting list. Waiting time figures reflect the demand for surgery and not the need. There are no fixed criteria that dictate whether or not a patient should be referred for elective surgery.
Evidence suggests that GPs are less likely to refer patients to hospital when waiting times are longer and are more likely, instead, to continue to manage their conditions in the primary care setting. That, in turn, means that waiting times tend to be ‘self-limiting’: as they increase, the apparent ‘demand’ for elective surgery falls which, in turn, means waiting times come back down again.
There are other factors at work too. A 2003 report (PDF alert) into waiting times in OECD countries suggested that longer waiting times might encourage more potential patients to use private hospitals, or drive public hospitals to make better use of available capacity: both factors that would reinforce the feedback loop from longer waits to an apparent drop in demand for public hospital surgery.
In short, variations in waiting times are largely meaningless as a measure of hospital performance. We should treat them with both caution and scepticism. Let’s hope they don’t feature too prominently in the new National Health Performance Authority’s performance indicators for public hospitals.
***
More detail on the report…
Below is the Institute’s summary of the report – interestingly, it makes no significant or explicit mention of safety and quality performance indicators, and neither does the press release.
Summary
There were 1,326 hospitals in Australia in 2009–10. The 753 public hospitals accounted for 67% of hospital beds (56,900) and the 573 private hospitals accounted for 33% (28,000); these proportions were unchanged from 2008–09.
Accident and emergency services
Public hospitals provided about 7.4 million accident and emergency services in 2009–10, increasing by 4% on average each year between 2005–06 and 2009–10. Overall, 70% of patients were seen on time in emergency departments, with 100% of resuscitation patients (those requiring treatment immediately) being seen within 2 minutes of arriving at the emergency department.
Admitted patient care
There were 8.5 million separations for admitted patients in 2009–10—5.1 million in public hospitals and almost 3.5 million in private hospitals. This was an increase of 3.2% on average each year between 2005–06 and 2009–10 for public hospitals, and 5.0% for private hospitals.
The proportion of admissions that were ‘same-day’ continued to increase, by 5% on average each year between 2005–06 and 2009–10, accounting for 58% of the total in 2009–10 (51% in public hospitals and 68% in private hospitals). For overnight separations, the average length of stay was 5.9 days in 2009–10, down from 6.2 days in 2005–06.
About 4% of separations were for non-acute care. Between 2005–06 and 2009–10, Rehabilitation care in private hospitals increased by 19% on average each year and Geriatric evaluation and management in public hospitals increased by 11% on average each year.
Readmissions to the same public hospital varied with the type of surgery. There were 24 readmissions per 1,000 separations for knee replacement and 4 per 1,000 separations for cataract surgery.
Elective surgery
There were 1.9 million admissions for planned (elective) surgery in 2009–10. There were about 30 separations per 1,000 population for public elective surgery each year between 2005–06 and 2009–10; rates for other elective surgery increased from about 49 per 1,000 to 55 per 1,000 over that time. Half of the patients admitted for elective surgery in public hospitals waited 36 days or less after being placed on the waiting list, an increase from 32 days in 2005–06.
Expenditure and funding
Public hospitals spent about $33.7 billion in 2009–10. Adjusted for inflation, expenditure increased by an average of 5.4% each year between 2005–06 and 2009–10. In 2008–09, states and territories were the source of 54% of funds for public hospitals and the Commonwealth government funded 38%. This compared with the figures of 54% and 39%, respectively, in 2007–08.
Between 2005–06 and 2009–10, public patient separations increased by 2.8% on average each year, those funded by private health insurance increased by 6.4%, while those funded by the Department of Veterans’ Affairs decreased by 1.3%.
***
Meanwhile, the AMA press release, titled “Public hospitals – not much bang for the big bucks”, says the report shows “the Government’s spending on public hospitals has delivered a very small return on a huge investment over four years”.
While we’re on the subject of “return on investment”, what about some serious analysis of health returns on the investment in the Medicare Benefits Schedule (and the earnings differential within medicine), or of the health returns on the investment in private health insurance incentives, or the relative health returns on investment in hospital spending versus primary health care spending versus population health interventions…
***
Update: Thanks to Lisa Ramshaw (see comments below) for pointing out the relevant section of the report (from p 29) re adverse events:
Performance indicator: Adverse events treated in hospitals
Adverse events are defined as incidents in which harm resulted to a person receiving health care. They include infections, falls resulting in injuries, and problems with medication and medical devices. Some of these adverse events may be preventable.
Hospital separations data include information on diagnoses, places of occurrence and external causes of injury and poisoning that can indicate that an adverse event was treated and/or occurred during the hospitalisation. However, other diagnosis codes may also suggest that an adverse event has occurred, and some adverse events are not identifiable using these codes.
In 2009–10, 4.9% of separations reported an ICD-10-AM code for an adverse event. The proportion of separations with an adverse event was 5.8% in the public sector and 3.7% in the private sector (Table 3.5). The data for public hospitals are not comparable with the data for private hospitals because their casemixes differ and recording practices may be different.

In the public sector, about 55% of separations with an adverse event reported Procedures causing abnormal reactions/complications and 34% reported Adverse effects of drugs, medicaments and biological substances.
In the private sector, about 71% of separations with an adverse event reported Procedures causing abnormal reactions/complications and 26% reported Complications of internal prosthetic devices, implants and grafts.
The data presented in Table 3.5 can be interpreted as representing selected adverse events in health care that have resulted in, or have affected, hospital admissions, rather than all adverse events that occurred in hospitals. Some of the adverse events included in these tables may represent events that occurred before admission. Condition onset flag information (see Appendix 1) could be used in the future to exclude conditions that arose before admission and to include conditions not currently used to indicate adverse events, in order to provide more accurate estimates of adverse events occurring and treated within single episodes of care.
Performance indicator: Unplanned/unexpected readmissions within 28 days of selected surgical admissions
‘Unplanned or unexpected readmissions after surgery’ are defined as the number of separations involving selected procedures where readmission occurred within 28 days of the previous separation, that were considered to be unexpected or unplanned, and where the principal diagnosis related to an adverse event (see above). The measure is regarded as an indicator of the safety of care. It could also be regarded as an indicator of effectiveness of care; however, the specifications identify adverse events of care as causes of readmission, rather than reasons that could indicate effectiveness.
Rates of unplanned or unexpected readmissions were highest for Hysterectomy (31 per 1,000 separations) and Prostatectomy (30 per 1,000) (Table 3.6). For Cataract extraction, fewer than 4 in 1,000 separations had a readmission within 28 days.

Interesting – why, I wonder, is the rate of unplanned readmission so much higher after prostatectomy and hysterectomy? How do these rates compare with other common procedures not mentioned in the table? And why are people more likely to have an unplanned readmission after knee replacement than after hip replacement? Do we know if these indicators are improving?