Badrick, Gay, McCaughey, and Georgiou: External Quality Assessment beyond the analytical phase: an Australian perspective

Introduction

Pathology is a crucial clinical tool, estimated to contribute to 60-70% of all critical decisions involving patient treatment (1). Despite this potential value, the Carter review estimated that 25% of pathology requests are unnecessary or inappropriate (2). Furthermore, CareTrack Australia examined the appropriateness of care provided in Australia for 22 common conditions and demonstrated that only 57% of patients received what was regarded as appropriate care (3). The cost of diagnostic test services in Australia rose to $5.25 billion in 2013; for pathology services, this represented an increase of 81% over the preceding decade (4). This rise has led to major concerns about the substantial costs and risks associated with unnecessary tests and incorrect result interpretation. For pathology services to be of value, the correct ordering of tests and interpretation of results are crucial. These tasks are the responsibility of the treating clinician and, as such, can be considered the pre- and post-laboratory, or diagnostic, phases (5).

External Quality Assessment (EQA) is the verification, on a recurring basis, that laboratory results conform to the expectations for quality required for patient care (6). However, Australian laboratories tend to focus on very narrow concepts of EQA, even though the significance of pre- and post-laboratory errors is now widely recognised. This can be partly attributed to the fact that laboratories are largely sample-oriented, as opposed to patient-oriented.

The broader concept of a Quality Management System (QMS) is primarily aimed at meeting customer requirements and enhancing customer satisfaction. Referrers largely take the quality of the laboratory's product as a given, so enhancing clinician, patient and payer satisfaction extends far beyond the traditional boundaries of the laboratory. One way to improve the quality of the service laboratories provide is to extend laboratory-based EQA programs to the requesting and reporting phases, which lie outside the current scope of pre-analytical, analytical and post-analytical EQA programs (7). While some countries have taken steps in this direction, such programs are far from widespread and are currently lacking in Australia. The pre-pre-analytical phase, which consists primarily of test ordering, and the post-post-analytical phase, which consists primarily of test result interpretation, can be regarded as the diagnostic phases (as opposed to the analytical phases) and sub-divided into a pre-laboratory and a post-laboratory phase (Figure 1). This terminology has been chosen to remove the laboratory as the focus of the process and shift it back to the referring clinician.

Figure 1

The phases of laboratory testing


In addition to introducing the diagnostic phase terminology, this opinion paper aims to set out the reasons why pathology laboratories and diagnostic medicine need a way of monitoring these phases more effectively, with particular reference to the Australian situation. The authors believe that an EQA program could be developed to identify, learn from and reduce these errors and near misses in a timely fashion; this is explored in the remainder of this article.

Pre and post-laboratory errors

A study conducted by the American Academy of Family Physicians National Research Network reported that participants submitted 590 event reports containing 966 pre- and post-laboratory errors. Pre-laboratory errors occurred in ordering tests (12.9%) and implementing tests (17.9%), while post-laboratory errors occurred in reporting results to clinicians (24.6%), clinicians responding to results (6.6%), notifying patients of results (6.8%), general administration (17.6%), communication (5.7%) and other categories (7.8%). Charting or filing errors alone accounted for 14.5% of errors. While patients were unharmed in 54% of events, 18% resulted in some harm, and harm status was unknown for 28%. Furthermore, these errors led to a range of other adverse consequences, including time and financial consequences (22%), delays in care (24%), pain and suffering (11%) and adverse clinical consequences (2%) (8). The impact of these pre- and post-laboratory errors therefore demonstrates a pressing need to identify their sources, so that interventions to reduce the error rate can be developed.

While there has been a vast amount of research identifying pre-laboratory error quality indicators (9), we believe that significant pre-laboratory errors remain that are not captured by these indicators. One such omission is the proportion of patients who do not act on a pathology request. It has been estimated that in Australia approximately 20-30% of patients who are given a pathology request form in the community do not have the request completed (10). There are multiple reasons for this non-adherence, including language barriers in communication, low socioeconomic status and poor health literacy, manifested in behaviours such as forgetting important appointments, losing pathology forms and failing to attend or reschedule the appointment. This has a potentially far greater impact on patient treatment than errors in the analytical phase.

Primary care in Australia is the responsibility of General Practitioners (GPs). The non-adherence to test requests described above has required GPs to adopt complex workflows to remind patients to have the appropriate test performed before their next appointment. Anecdotally, non-adherence rates are of a similar magnitude in hospital outpatient clinics. The cost to the community of these wasted appointments is significant. This is one reason why point-of-care testing, which enables laboratory tests to be performed at the patient's location rather than in a laboratory, may have significant benefits for both patients and health professionals.

Hickner et al. reported that GPs described uncertainty in ordering laboratory tests in approximately 15% of diagnostic encounters (11). The task of selecting appropriate diagnostic testing is challenging for clinicians, in part because of the sheer volume of choices. For example, there are currently over 850 different pathology tests for which the government will reimburse patients in Australia. Therefore, methods to improve this workflow could lead to a significant improvement in the quality of pathology services.

Sikaris has identified the importance of the post-laboratory phase and the ways in which it is subject to error, such as the misapplication of appropriate and accurate test results through cognitive failure (12). The misinterpretation of laboratory tests remains an important contributor to misdiagnosis because of the emphasis placed on laboratory testing in diagnosis and monitoring decisions. In the post-laboratory phase, the quality of the final report, including its reference intervals, clinical interpretations and notifications based on the knowledge of laboratory specialists, should support clinical decision-making. It has been reported that incorrect interpretation of diagnostic tests accounts for up to 37% of malpractice claims in primary care and emergency departments (13).

Audit and dissemination of best practice play an important role in managing the quality of result interpretation. While audits are not the preferred option for the Australian situation, primarily because of the great distances involved in on-site visits, a number of international studies have examined the quality of result interpretation in general practice. Skeie et al. found that 22% of Norwegian GPs misclassified changes in haemoglobin A1c (HbA1c) for patients with diabetes mellitus (DM), and that the vast majority of GPs assumed that analytical quality was better than it really was (14). These findings are supported by those of Thue and Sandberg (15), who analysed clinicians' expectations of analytical performance in relation to current analytical performance specifications and found that clinicians are generally unaware of what analytical performance can actually be achieved. Three subsequent Norwegian studies performed EQA of GPs' interpretation of pathology results and showed general agreement in critical differences (CDs) for blood glucose and HbA1c, with variation in the perceived risk to patients of a severe bleed (16, 17). There was also large variation in CDs for urine albumin and in International Normalized Ratio (INR) interpretation for warfarin monitoring (17, 18). Kristoffersen et al. found that GPs across 13 countries overestimated the risk of ischaemic stroke and bleeding in people treated with vitamin K antagonists (VKA) by 2-3 times (19). The results of these studies suggest that guidelines for these conditions may be either unknown or impractical. This is further supported by Hellemons et al., who found that guidelines on the use and interpretation of albuminuria in patients with DM were poorly followed in general practice (20).

EQA programs for the pre- and post-laboratory phases

The Institute of Medicine (IOM) report entitled “To Err is Human” identified the types of error that arise in the diagnostic and treatment process, namely: failure to employ indicated tests; use of outmoded tests or therapy; failure to act on the results of monitoring or testing; error in the performance of an operation, procedure or test; error in administering treatment; error in the dose or method of using a drug; avoidable delay in treatment or in responding to an abnormal test; inappropriate (not indicated) care; failure to provide prophylactic treatment; inadequate monitoring or follow-up of treatment; failure of communication; equipment failure; and other system failures (21). Many of these errors, such as failure to order a test, ordering the wrong test and failure to recognise urgency, are amenable to the type of EQA programs used in laboratory medicine. However, developing EQA programs for the pre- and post-laboratory phases will require consultation with, and support from, referring doctors. The format of the programs will also need careful construction to ensure that the data collected are de-identified and that the programs provide education as well as useful and meaningful feedback.

The EQA program we propose could take the form of a series of patient scenarios to which a response would be required. The Interpretative Comment programs of the United Kingdom National External Quality Assessment Service (UKNEQAS) and the Royal College of Pathologists of Australasia Quality Assurance Programs (RCPAQAP) are two such examples; however, here the referring doctor would be the participant. The results would be analysed and reported back with a guideline-based response, concordance and group performance (see Appendix 1 for an example of such a report). Sikaris has described these concepts in terms of a medical laboratory, but they are translatable to a referring doctor model (12). The cases would need to be carefully chosen, however, so that the suggested interpretations, in terms of which tests to order for a given clinical scenario or which treatment to suggest, have a strong evidence base in both the current literature and current clinical guidelines.
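To illustrate how participant responses to such scenarios might be analysed, a minimal sketch (in Python) is given below. The scenario, answer options and scoring rule are hypothetical and are not drawn from any existing UKNEQAS or RCPAQAP program; the sketch simply shows how group performance and concordance with a guideline-based expert answer could be summarised.

# Minimal sketch: summarising scenario-based EQA responses against an expert answer.
# The scenario, answer options and participant data are invented for illustration.
from collections import Counter

def summarise_scenario(responses, expert_answer):
    """Return the group answer distribution and the concordance with the expert answer."""
    counts = Counter(responses)
    concordant = counts.get(expert_answer, 0)
    concordance = 100.0 * concordant / len(responses) if responses else 0.0
    return counts, concordance

# Hypothetical scenario: which single test to order for suspected iron deficiency.
participant_answers = ["ferritin", "ferritin", "full iron studies", "ferritin",
                       "haemoglobin only", "ferritin", "full iron studies"]
distribution, concordance = summarise_scenario(participant_answers, "ferritin")

print("Group distribution:", dict(distribution))
print(f"Concordance with the guideline-based answer: {concordance:.0f}%")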

EQA programs for information technology

The purpose of a pathology report is to communicate the results of a test in a clear and unambiguous manner. It is clearly a patient safety issue if a report is misread in a way that leads to an incorrect understanding of the results. Hickner et al. found that 8.3% of GPs reported uncertainty in interpreting results (11). Challenges included different names for the same test, tests not being available except as part of a test panel, and different tests being included in panels with the same name. While this has been addressed in some countries, it remains a prominent issue in the Australian setting. Additionally, if a report is difficult to read, valuable time can be lost in trying to correctly identify the key elements of the results.

In the modern era, doctors commonly receive pathology reports from a range of different laboratories. Examples include tests requested by a specialist, results from a hospital, results obtained while travelling interstate or overseas, or results from a different laboratory attended by the patient for convenience or other reasons. Clearly, uniformity of reporting formats amongst laboratories can make the review of pathology reports easier and safer, irrespective of the testing laboratory (22). Clear and consistent reporting is vital to support safe interpretation of pathology. Guidelines aimed at improving the effectiveness of testing have been the subject of standardisation efforts between medical groups for a considerable time (23). While there has been a focus on communication using electronic systems (24, 25), paper reports remain in common use, and rendered reports (e.g. portable document format (PDF) or Pathology Information Transfer (PIT) protocol formats) are still widely used in practice. In 2013, the Royal College of Pathologists of Australasia (RCPA) published an initial Standard. A group known as the Australian Pathology Units and Terminology Standardisation Project (APUTS) wrote the draft, which was revised and finalised after public feedback. The Standards and Guidelines were released in 2014 to assist in the requesting and reporting of pathology (26).

It is now important that conformity with the aforementioned guidelines and standards be monitored. This can be done through a form of EQA for reports. An EQA organisation is part of the request-result cycle and is therefore in a position to perform quality assurance on the laboratory result when the laboratory sends it back to the EQA provider. The units, format, reference interval and comment are all part of the EQA result and can therefore be assessed as part of the EQA program (Figure 2); a minimal illustration of such a check follows the figure.

Figure 2

Laboratory messaging in context
The figure illustrates the responsibilities of the laboratory and its interaction with external parties within the pathology test-request-report cycle.

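As a simple illustration of how the presentation elements of a returned EQA result (units, reference interval, comment) could be checked automatically by the EQA provider, a sketch is given below. The field names and expected values are assumptions for illustration only and are not taken from the APUTS Standard.

# Minimal sketch: checking the presentation elements of a returned EQA result.
# The record structure and expected values are illustrative assumptions.
def check_report_fields(result, expected):
    """Compare the units, reference interval and comment of a returned result
    against what the EQA provider expects for the analyte."""
    issues = []
    if result.get("units") != expected["units"]:
        issues.append("units '%s' differ from expected '%s'"
                      % (result.get("units"), expected["units"]))
    if not result.get("reference_interval"):
        issues.append("reference interval missing from the report")
    if expected.get("comment_required") and not result.get("comment"):
        issues.append("interpretative comment missing")
    return issues

# Hypothetical returned result for serum sodium.
returned = {"analyte": "Sodium", "value": 141, "units": "mmol/L",
            "reference_interval": "135-145", "comment": ""}
expected = {"units": "mmol/L", "comment_required": False}

problems = check_report_fields(returned, expected)
print(problems or "Report elements conform to the expected presentation.")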

The Pathology Information, Terminology and Units Standardisation (PITUS) Informatics EQA Project aims to build a system to enable electronic requesting and reporting for an existing RCPAQAP EQA program. The electronic messages involved in the process will be assessed for compliance and conformance with the relevant Standards from the National Pathology Accreditation Advisory Council (NPAAC), Australian Standards and the RCPA. A rendered PDF version of the report will also be generated and assessed against the format, rules and rendering conformance requirements of the APUTS Standard.
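A minimal sketch of the kind of message-level check such a project might apply is shown below. It inspects a single HL7 v2 OBX segment for missing units (OBX-6) or reference range (OBX-7); the sample segment and the specific rules are illustrative only and do not represent the NPAAC, Australian Standards or RCPA requirements themselves.

# Minimal sketch: a basic conformance check on one HL7 v2 OBX segment.
def check_obx_segment(segment):
    """Flag a numeric result reported without units (OBX-6) or a reference range (OBX-7)."""
    fields = segment.split("|")
    if len(fields) < 8:
        return ["OBX segment truncated"]
    value_type, units, ref_range = fields[2], fields[6], fields[7]
    issues = []
    if value_type == "NM" and not units:
        issues.append("numeric result reported without units (OBX-6)")
    if value_type == "NM" and not ref_range:
        issues.append("numeric result reported without a reference range (OBX-7)")
    return issues

# Hypothetical OBX segment: a potassium result missing its reference range.
obx = "OBX|1|NM|2823-3^Potassium^LN||5.1|mmol/L||H|||F"
print(check_obx_segment(obx) or "Segment passes these basic checks.")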

Use of ICT to support EQA in the diagnostic phase

Advances in health information and communications technology (ICT) mean that ICT can be used to support EQA and to assist clinicians when both ordering and interpreting pathology test results. This combination has the potential to significantly reduce errors in the diagnostic phase of pathology testing. In the pre-laboratory phase, Computerised Physician Order Entry (CPOE) systems allow clinicians to enter laboratory orders directly into a computer system. This can support EQA systems that aim to reduce the chance of errors associated with illegible handwriting, patient identification and specimen collection and labelling, which are key sources of error in the pre-analytical phase. CPOE systems can also be coupled with clinical decision support, assisting the clinician to choose the most appropriate tests for their patient. However, there is still scant evidence about the impact of such systems on patient outcomes (27). In one of the few studies to date, Georgiou et al. demonstrated that the implementation of a CPOE system led to a reduction in errors associated with mislabelled, mismatched and unlabelled specimens (28). CPOE also led to a reduction in both the number of tests ordered per episode of patient care and laboratory turnaround time. These findings have a direct impact on patient safety and quality of care: a subsequent study showed that every five additional tests were associated with a 10-minute increase in emergency department length of stay, and that each 30-minute increase in turnaround time was associated with a 17-minute increase in emergency department length of stay (29).
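To put these associations into perspective, a simple back-of-the-envelope calculation is sketched below. The linear extrapolation of the figures reported by Li et al. (29) is our assumption for illustration; the study reports associations rather than a causal or linear model.

# Back-of-the-envelope sketch: extrapolating the reported associations of roughly
# 10 minutes of additional emergency department (ED) stay per 5 extra tests, and
# 17 minutes per 30-minute increase in laboratory turnaround time (TAT).
# Treating these as linear rates is an assumption made only for illustration.
def extra_ed_minutes(extra_tests, tat_increase_minutes):
    from_tests = extra_tests * (10 / 5)            # ~2 minutes per additional test
    from_tat = tat_increase_minutes * (17 / 30)    # ~0.57 minutes per extra minute of TAT
    return from_tests + from_tat

# Example: 10 unnecessary tests plus a 45-minute slower turnaround.
print("Estimated additional ED stay: %.0f minutes" % extra_ed_minutes(10, 45))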

In the post-laboratory phase, ICT can be used to support EQA systems that aim to standardise result reporting, reduce the number of missed test results and improve the quality of pathology result interpretation. The use of ICT to generate standardised pathology result reports, such as through mobile applications, may decrease the risk of incorrect result interpretation arising from a clinician being unfamiliar with the report layout. However, the impact of such systems remains to be fully explored. Electronic test acknowledgement systems, which require the clinician to acknowledge that they have viewed a pathology result, can also be used to reduce the number of missed test results (30); the essential logic of such a safety net is sketched at the end of this section. Finally, electronic decision support systems can also be used in the post-laboratory phase to assist clinicians in adhering to guideline- or protocol-based care. However, in Australia, evidence surrounding the impact of such systems on patient outcomes remains weak (31). Therefore, while further patient-centred studies are required to fully assess the impact of health ICT on patient safety, ICT combined with EQA has the potential to reduce errors in the diagnostic phase of the pathology process.
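The logic behind an electronic test acknowledgement safety net can be sketched very simply, as below. The record structure and the 72-hour follow-up threshold are hypothetical and are not taken from the system described in reference 30.

# Minimal sketch: flag results that no clinician has acknowledged within a defined window.
from datetime import datetime, timedelta

def unacknowledged_results(results, now, threshold=timedelta(hours=72)):
    """Return results still awaiting acknowledgement beyond the threshold."""
    return [r for r in results
            if r["acknowledged_at"] is None and now - r["reported_at"] > threshold]

# Hypothetical reported results, one acknowledged and one not.
results = [
    {"patient": "A", "test": "TSH", "reported_at": datetime(2017, 3, 1, 9, 0),
     "acknowledged_at": datetime(2017, 3, 1, 17, 30)},
    {"patient": "B", "test": "INR", "reported_at": datetime(2017, 3, 1, 9, 0),
     "acknowledged_at": None},
]

for r in unacknowledged_results(results, now=datetime(2017, 3, 6, 9, 0)):
    print("Follow up: %s result for patient %s has not been acknowledged" % (r["test"], r["patient"]))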

Conclusions

For pathology services to be of value, the correct ordering of tests and interpretation of results are crucial. Both are the responsibility of the treating clinician and, as such, can be considered the pre- and post-laboratory phases (5). Errors occur throughout the testing process, most commonly involving test implementation and the reporting of results to clinicians. While significant physical harm from these errors is rare, adverse consequences for patients are common.

It is a recommendation of the IOM that accreditation organizations have programs in place to ensure competencies in the diagnostic phase, and to identify and learn from diagnostic errors and near misses with the aim of reducing these errors in a timely fashion (32). EQA programs are a proven way of achieving such goals, and EQA providers have the experience and processes in place to deliver the required platforms. To this end, we believe that widespread implementation of such programs, supported by ICT, is the next stage in identifying and reducing error in the diagnostic phase of the request-result cycle.

Appendices

Appendix 1. Example report to evaluate pre- and post-diagnostic phases

The clinician would be given a scenario and asked questions regarding either which tests they would order or which treatment they would institute. The scenario could include photographs, reports, specialist comments etc. Responses would be analysed against an “expert” response. The report would include the individual's response and a summary of all responses, along with the expert's response, its rationale and references. Participants are not “marked” as such; rather, responses are graded according to a system along the lines of preferred, relevant, less relevant and misleading.
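As an illustration of how such grading might be applied in practice, a minimal sketch is given below. Only the four category labels come from the scheme described above; the scenario, the candidate answers and the grades assigned to them are hypothetical.

# Minimal sketch: grading candidate answers for a hypothetical scenario
# (suspected iron deficiency), as might be agreed by an expert panel.
GRADES = {
    "ferritin": "preferred",
    "full iron studies": "relevant",
    "haemoglobin only": "less relevant",
    "serum vitamin B12": "misleading",
}

def grade_response(answer):
    """Map a participant's answer to one of the four grading categories.
    Unlisted answers default to 'misleading' in this illustration."""
    return GRADES.get(answer, "misleading")

for answer in ("ferritin", "haemoglobin only", "serum vitamin B12"):
    print("%s: %s" % (answer, grade_response(answer)))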


Notes

Conflicts of interest: None declared.

References

1 

Forsman RW. Why is the laboratory an afterthought for managed care organizations? Clin Chem. 1996;42:813–6.

2 

Report of the Second Phase of the Review of NHS Pathology Services in England. Chaired by Lord Carter of Coles; Department of Health, 2008.

3 

Runciman WB, Hunt TD, Hannaford NA, Hibbert PD, Westbrook JI, Coiera EW, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust. 2012;197:100–5. https://doi.org/10.5694/mja12.10510

4 

Alexander H, Medew J, Harrison D. Doctors’ fees under the microscope amid accusations of over-diagnosis. Sydney Morning Herald, Sydney, 2015.

5 

Plebani M. The detection and prevention of errors in laboratory medicine. Ann Clin Biochem. 2010;47:101–10. https://doi.org/10.1258/acb.2009.009222

6 

Miller WG, Jones GR, Horowitz GL, Weykamp C. Proficiency testing/external quality assessment: current challenges and future directions. Clin Chem. 2011;57:1670–80. https://doi.org/10.1373/clinchem.2011.168641

7 

Laposata M, Dighe A. “Pre-pre” and “post-post” analytical error: high-incidence patient safety hazards involving the clinical laboratory. Clin Chem Lab Med. 2007;45:712–9. https://doi.org/10.1515/CCLM.2007.173

8 

Hickner J, Graham DG, Elder NC, Brandt E, Emsermann CB, Dovey S, et al. Testing process errors and their harms and consequences reported from family medicine practices: a study of the American Academy of Family Physicians National Research Network. Qual Saf Health Care. 2008;17:194–200. https://doi.org/10.1136/qshc.2006.021915

9 

Plebani M. Quality indicators to detect pre-analytical errors in laboratory testing. Clin Biochem Rev. 2012;33:85–8.

10 

Ramsay N, Johnson T, Badrick T. Diabetic patient adherence to pathology request completion in primary care. Aust Health Rev. (Epub ahead of print).

11 

Hickner J, Graham DG, Elder NC, Brandt E, Emsermann CB, Dovey S, et al. Testing process errors and their harms and consequences reported from family medicine practices: a study of the American Academy of Family Physicians National Research Network. Qual Saf Health Care. 2008;17:194–200. https://doi.org/10.1136/qshc.2006.021915

12 

Sikaris K. Performance criteria of the post-analytical phase. Clin Chem Lab Med. 2015;53:949–58. https://doi.org/10.1515/cclm-2015-0016

13 

Kachalia A, Gandhi TK, Puopolo AL, Yoon C, Thomas EJ, Griffey R, et al. Missed and delayed diagnoses in the emergency department: a study of closed malpractice claims from 4 liability insurers. Ann Emerg Med. 2007;49:196–205. https://doi.org/10.1016/j.annemergmed.2006.06.035

14 

Skeie S, Thue G, Sandberg S. Use and interpretation of HbA1c testing in general practice. Implications for quality of care. Scand J Clin Lab Invest. 2000;60:349–56. https://doi.org/10.1080/003655100750019251

15 

Thue G, Sandberg S. Analytical performance specifications based on how clinicians use laboratory tests. Experiences from a post-analytical external quality assessment programme. Clin Chem Lab Med. 2015;53:857–62. https://doi.org/10.1515/cclm-2014-1280

16 

Skeie S, Perich C, Ricos C, Araczki A, Horvath AR, Oosterhuis WP, et al. Postanalytical external quality assessment of blood glucose and hemoglobin A1c: an international survey. Clin Chem. 2005;51:1145–53. https://doi.org/10.1373/clinchem.2005.048488

17 

Kristoffersen AH, Thue G, Sandberg S. Postanalytical external quality assessment of warfarin monitoring in primary healthcare. Clin Chem. 2006;52:1871–8. https://doi.org/10.1373/clinchem.2006.071027

18 

Aakre KM, Thue G, Subramaniam-Haavik S, Bukve T, Morris H, Müller M, et al. Postanalytical external quality assessment of urine albumin in primary health care: an international survey. Clin Chem. 2008;54:1630–6. https://doi.org/10.1373/clinchem.2007.100917

19 

Kristoffersen AH, Thue G, Ajzner E, Claes N, Horvath AR, Leonetti R, et al. Interpretation and management of INR results: a case history based survey in 13 countries. Thromb Res. 2012;130:309–15. https://doi.org/10.1016/j.thromres.2012.02.014

20 

Hellemons ME, Denig P, de Zeeuw D, Voorham J, Lambers Heerspink HJ. Is albuminuria screening and treatment optimal in patients with type 2 diabetes in primary care? Observational data of the GIANTT cohort. Nephrol Dial Transplant. 2013;28:706–15. https://doi.org/10.1093/ndt/gfs567

21 

Institute of Medicine. To Err is Human: Building a Safer Health System. Washington: National Academies Press, 2000.

22 

Valenstein PN. Formatting Pathology Reports. Arch Pathol Lab Med. 2008;132:84–94.

23 

Jackson R, Feder G. Guidelines for clinical guidelines. BMJ. 1998;317:427–8. https://doi.org/10.1136/bmj.317.7156.427

24 

Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, et al. Grand challenges in clinical decision support. J Biomed Inform. 2008;41:387–92. https://doi.org/10.1016/j.jbi.2007.09.003

25 

Legg M. Standardisation of test requesting and reporting for the electronic health record. Clin Chim Acta. 2014;432:148–56. https://doi.org/10.1016/j.cca.2013.12.007

26 

Australian Pathology Units and Terminology (APUTS) Standards and Guidelines (v2.2). Royal College of Pathologists of Australasia, 2014. Available at: https://www.rcpa.edu.au/getattachment/2958f780-4653-4c00-862c-300a30d84841/APUTS-Standards-and-Guidelines.aspx. Accessed February 12th 2016.

27 

Georgiou A, Williamson M, Westbrook JI, Ray S. The impact of computerised physician order entry systems on pathology services: a systematic review. Int J Med Inform. 2007;76:514–29. https://doi.org/10.1016/j.ijmedinf.2006.02.004

28 

Georgiou A, Vecellio E, Toouli G, Eigenstetter A, Li L, Wilson R, et al. The impact of the implementation of electronic ordering on hospital pathology services. Report to Commonwealth of Australia, Department of Health and Ageing, Quality Use of Pathology Committee. Australian Institute of Health Innovation, University of New South Wales, Sydney, 2012.

29 

Li L, Georgiou A, Vecellio E, Eigenstetter A, Toouli G, Wilson R, et al. The effect of laboratory testing on emergency department length of stay: a multihospital longitudinal study applying a cross-classified random-effect modeling approach. Acad Emerg Med. 2015;22:38–46. https://doi.org/10.1111/acem.12565

30 

Georgiou A, Lymer S, Forster M, Strachan M, Graham S, Hirst G, et al. Lessons learned from the introduction of an electronic safety net to enhance test result management in an Australian mothers’ hospital. J Am Med Inform Assoc. 2014;21:1104–8. https://doi.org/10.1136/amiajnl-2013-002466

31 

Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health. 2014;104:e12–22. https://doi.org/10.2105/AJPH.2014.302164

32 

Institute of Medicine. Improving Diagnosis in Health Care. Washington: The National Academies Press, 2015.