DEVELOPMENT AND IMPLEMENTATION OF A CONSISTENT POLICY FOR PATIENT IDENTIFICATION AND SPECIMEN COLLECTION

By

Suzanne C. Burr

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Clinical Laboratory Sciences - Master of Science

2016

ABSTRACT

DEVELOPMENT AND IMPLEMENTATION OF A CONSISTENT POLICY FOR PATIENT IDENTIFICATION AND SPECIMEN COLLECTION

By

Suzanne C. Burr

Background: At Community Medical Center (CMC), disparities were identified in existing policies that could lead to failures in patient identification and specimen labeling. In 2013, CMC undertook a Healthcare Failure Mode and Effects Analysis (HFMEA) to review the practices for specimen identification and labeling that existed throughout various departments. A unified hospital policy was created that complied with the existing regulations of TJC, CAP, CLIA and the Centers for Medicare and Medicaid Services (CMS).

Methods: The purpose of this project was to determine if a consistent policy on specimen collection would result in decreased mislabeling events. Data on mislabeled specimens were collected for twelve months after this policy was implemented and compared to data collected in the previous twelve-month period. The data were analyzed for statistical significance by means of a two-tailed t-test, linear regression analysis and a slopes t-test, using a 95% confidence level.

Results and Conclusion: The t-test returned a P-value of 0.68, while the regression analyses returned R coefficients of 0.03 and 0.09, respectively. The t-test comparing the slopes of the regression lines returned a P-value of 0.99, far above the 0.05 threshold for significance. Although this project did not decrease errors in patient identification, it has spawned other efforts at CMC to decrease specimen-labeling errors.

I am honored to dedicate this work to my eldest daughter, for all her advice and support, and to my family; they never allowed me to give up.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
KEY TO ABBREVIATIONS
INTRODUCTION
REVIEW OF LITERATURE
    Specimen Label Regulations and Standards
        Centers for Medicare and Medicaid Services
        The Joint Commission
        College of American Pathologists
        AABB
        Anatomic Pathology Specimens
    Specimen Labeling Policies
    Labeling Methods
        Barcode Technology
        Radio Frequency Identification
    The Total Testing Process
        Pre-Pre Analytical Error
        Pre-Analytical Error
        Analytical Error
        Post-Analytical Error
        Post-Post Analytical Error
    Healthcare Failure Mode Effects Analysis (HFMEA)
    Education
    Quality Indicators
    Process Improvement
PROJECT OBJECTIVES
METHODS
RESULTS
DISCUSSION
    Limitations
    Recommendations
CONCLUSION
APPENDICES
    Appendix A: Results of CMC's Hazard Analysis
    Appendix B: A Q-Tracks Survey Report from the College of American Pathologists for Community Medical Center
    Appendix C: Glossary of Terms
WORKS CITED

LIST OF TABLES

Table 1: Summary of CMC's Identification Policies
Table 2: Findings of HFMEA Hazard Analysis
Table 3: Pathology Q-probes Error Summary
Table 4: Wristband Error Rate in 1999 and 2000
Table 5: Relative Frequency (%) of Errors Occurring During the Total Testing Process
Table 6: Disparate Identification Practices within CMC Identified by FMEA Team
Table 7: Summary of the Number of Mislabeled Specimens from 2013 to 2015
Table 8: Step One of CMC's Hazard Analysis
Table 9: Step Two of CMC's Hazard Analysis
Table 10: Step Three of CMC's Hazard Analysis
Table 11: Step Four of CMC's Hazard Analysis
Table 12: Step Five of CMC's Hazard Analysis
Table 13: Step Six of CMC's Hazard Analysis
Table 14: Step Seven of CMC's Hazard Analysis
Table 15: Step Eight of CMC's Hazard Analysis
Table 16: Step Nine of CMC's Hazard Analysis
Table 17: Step Ten of CMC's Hazard Analysis
Table 18: Step Eleven of CMC's Hazard Analysis

LIST OF FIGURES

Figure 1: Screenshots of Midas Reporting System
Figure 2: Graph of % of Mislabeled Specimens from 2013 to 2015
Figure 3: Graph of Linear Regression Analyses of Pre and Post Policy S-01 Implementation Data
Figure 4: Q-Tracks Survey Quality Management Report
Figure 5: Q-Tracks Executive Summary
Figure 6: Q-Tracks Individual Data Summary
Figure 7: Q-Tracks External Comparison Report
Figure 8: Q-Tracks Cumulative External Comparison Report
Figure 9: Q-Tracks Trend Analysis Report
Figure 10: Q-Tracks Peer Characteristics Report
Figure 11: Q-Tracks Quarterly Historical Data Report
Figure 12: Q-Tracks Quarterly Summary Report
KEY TO ABBREVIATIONS

AABB      Originally the American Association of Blood Banks; now simply AABB
ASCP      American Society of Clinical Pathologists
CAP       College of American Pathologists
CDC       Centers for Disease Control and Prevention
CFR       Code of Federal Regulations
CLIA      Clinical Laboratory Improvement Amendments
CMS       Centers for Medicare and Medicaid Services
FDA       Food and Drug Administration
HFMEA     Healthcare Failure Mode Effects Analysis
HIS       Hospital Information System
IFCC      International Federation of Clinical Chemistry
LAP       Laboratory Accreditation Program
LIS       Laboratory Information System
NPSG      National Patient Safety Goals
PPID      Positive Patient Identification
QA        Quality Assurance
QI        Quality Improvement
RCA       Root Cause Analysis
RFID      Radio Frequency Identification
TJC       The Joint Commission
TTP       Total Testing Process
WG-LEPS   Working Group on Laboratory Errors and Patient Safety

INTRODUCTION

To Err is Human, a report produced by the Institute of Medicine in 1999, shocked the nation with its revelation of the alarmingly high rates of medical errors in U.S. hospitals1. Studies conducted in Colorado, Utah and New York found that deaths due to medical errors ranged from 44,000 to 98,000 per year based on 33.6 million hospital admissions1. This report led to the passage of the Patient Safety and Quality Improvement Act of 2005 and the appropriation of $50 million by Congress to improve patient safety in our healthcare system2. The subject of medical errors was revisited ten years later, only to find that the situation had deteriorated and the error rate had increased2. According to the 2008 National Healthcare Quality Report, an annual report mandated by Congress in the Healthcare Quality and Research Act of 1999, one out of seven Medicare patients could be expected to experience at least one adverse event2. In 2014, the National Healthcare Quality Report and the National Healthcare Disparities Report were merged into one report, the National Healthcare Quality and Disparities Report, to evaluate our healthcare system in the context of the quality of care received by the general population and to identify the disparities in care across racial, economic and ethnic groups. The key findings of the 2014 report were that both access to care and the quality of care, as measured by the key indicators of safety, patient-centered care, effective treatment and healthy living, had improved. Disparities remained in that access to care was unequally distributed based on household income3.

Medical errors can take many forms; here we focus on mislabeled specimens, which comprise a significant portion of medical errors. Blood, non-blood and surgical specimens comprise the majority of samples encountered in the clinical laboratory. While there is labeling risk in all of these categories, each one also comes with special considerations. Blood collections include blood bank specimens, while non-blood specimens, such as urines and cultures, can have specific collection requirements. Surgical specimens can be irretrievable. Each category has special prerequisites in terms of labeling requirements and the risks it poses to patients. In addition to unique collection requirements, each type of specimen also has specific labeling errors associated with it. For example, the label may not match the requisition, which is a possibility in all categories. An error specific to blood bank specimens would be failing to associate a blood bank specimen with the patient's historical blood type.
For non-blood specimens, such as cultures, the material may not be collected in the most appropriate manner or container. The labeling of surgical specimens is specific to the collection site, and errors here can be related to laterality.

This project began with a decision by the laboratory at Community Medical Center (CMC) to make a concerted effort to log all mislabeled specimens into the hospital incident reporting system, called MIDAS+. This action on the part of the laboratory had the desired effect of focusing attention on the problem of mislabeled specimens and revealed fifty-nine mislabeled specimens during the first half of 2013.

The Joint Commission (TJC) requires its accredited institutions to conduct a proactive risk assessment every eighteen months. Since CMC is TJC accredited, the administration used this requirement as an opportunity to conduct a Healthcare Failure Mode Effects Analysis (HFMEA) on mislabeled specimens. An HFMEA is a proactive process used to assess risk before a failure occurs; the process is described in detail below. To begin the HFMEA, a team was assembled that included representatives from all departments of the hospital where specimen collections were performed. The team met from July 2013 through December 2013; the Laboratory Director served as team leader, with the Director of Standards and a Quality Coordinator from Quality Resource Services as co-leaders. Representatives from the following departments described their specimen labeling processes:

• Emergency Department
• Inpatient Units
  o Nurse Collection
  o Lab Collection
  o MD Collection
• Respiratory Department
• Operating Room
• Laboratory
• Interventional Radiology

From these descriptions, it became clear that there were numerous specimen collection policies in use throughout the facility (Table 1). In addition, different patient identification scenarios were described when blood is collected from a central line, when an MD collects a specimen, or when a non-blood specimen is collected. The emergency room alone described eight scenarios, with a different patient identification process for each one.

Table 1: Summary of CMC's Identification Policies

Department    Policy
Pathology     Patient Identification
Pathology     Specimen Identification
Nursing       Obtaining Blood Specimen by Venipuncture
Nursing       Specimens
Blood Bank    Labeling and Identification

Table 1: Specimen labeling policies found to be in effect at the start of the HFMEA. (CMC policy manuals)

Team members brought the policies listed in Table 1 to the HFMEA meetings as each department described its specimen collection procedure in detail. The policies listed in Table 1 were in use at the time of the HFMEA; each addressed an aspect of specimen collection and patient identification. The new policy was devised to consolidate these disparate policies and serve as a replacement. The HFMEA team used the five-step HFMEA process, described below, to examine the risk in the specimen labeling processes described by each department. The departments with the highest levels of risk were identified during hazard analysis and addressed in the new policy. The failure modes and the potential causes for these errors are presented in Table 2.
Table 2: Findings of HFMEA Hazard Analysis

Failure Mode                                             Potential Cause
Specimen order                                           Order on wrong patient; incorrect order
Obtain necessary equipment for specimen collection       Wrong labels obtained
Complete patient ID by comparing labels to wristband     ID policy not followed; ID not done
Obtain specimen                                          Specimen obtained on wrong patient
Label specimen at the bedside                            Specimen not labeled at bedside; specimen labeled with wrong label
Label contains required elements                         Wrong label; wrong patient

Table 2: A summary of the findings of the HFMEA committee's hazard analysis. (From CMC's HFMEA meetings in 2013)

The failure modes and potential causes were examined during the hazard analysis conducted by the HFMEA team and used to define a comprehensive specimen collection process that was developed into policy S-1, Specimen Labeling. The failure modes listed in Table 2 were addressed within policy S-1, which describes a process by which labels are matched to a patient's wristband at the bedside and affixed to the specimen in the presence of the patient.

Policy S-01 was signed and introduced at the end of February 2014. Staff education took place during March and April 2014. A computer-based learning module had been discussed; however, the policy was distributed to department directors to educate their staff members about the new policy. In order to assess the efficacy of the implementation of the consolidated specimen labeling process, the number of mislabeled specimens reported to Midas+ from May 2013 through April 2015 was analyzed.

REVIEW OF LITERATURE

Specimen Label Regulations and Standards

Requirements for specimen labels have been formalized in standards adopted by the College of American Pathologists (CAP) and TJC, which are based on regulations from the Centers for Medicare & Medicaid Services (CMS). The administration of the Clinical Laboratory Improvement Amendments of 1988 (CLIA 88) is the joint responsibility of CMS, the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA)4. In order for a clinical laboratory to receive reimbursement from the Medicare and Medicaid programs, it must successfully pass proficiency testing and undergo a biennial inspection. Individual states are authorized to conduct CMS inspections, or CMS may grant deemed status to an organization in acknowledgement that its standards meet or exceed those of CMS. Clinical laboratories may choose to be accredited by these deemed-status organizations, enroll in their proficiency testing programs and participate in their inspections.

Centers for Medicare and Medicaid Services

The statutes that describe the requirements for specimen labeling are CFR493.1232 and CFR493.1242, which are contained within the Code of Federal Regulations (CFR) under Title 42, subpart K, also known as the CLIA regulations5. These statutes outline the responsibility a laboratory has to formulate and adopt policies to ensure patients are properly identified and their specimens handled in a manner that ensures their integrity. They describe in detail the specimen labeling process, which requires the name along with a unique identifier, source, date, time collected and time received.

The Joint Commission

The Joint Commission is a not-for-profit organization that operates independently to accredit health care organizations in the United States. Founded in 1951, it currently certifies over 20,000 health care programs, making it the largest and oldest accrediting organization in the United States.
Certification by TJC is not mandatory, but it does confer the distinction that the healthcare facility has met its rigorous standards, and it carries reciprocity status with CMS for reimbursement. In its mission statement, TJC makes a commitment to the continuous improvement of quality and to providing safe and effective care to the public through the evaluation of healthcare organizations6. The Joint Commission publishes accreditation standards called the National Patient Safety Goals (NPSG) that focus on making patient safety a priority. The standards that address patient identification and specimen labeling are found in this section. NPSG.01.01.01 defines the requirements for patient identification: at least two unique identifiers are required, and the standard goes on to list acceptable patient identifiers as patient name together with medical record number, date of birth or phone number. The standard also specifies when the identification process is to be used for the collection of blood samples and other specimens7. TJC has placed proper patient identification at the top of its National Patient Safety Goals, implying that it is to be considered of paramount importance.

College of American Pathologists

CAP is an advocacy organization for board-certified pathologists formed in 1946. The scope of the group was defined by a committee of the American Society of Clinical Pathologists concerned with laboratory improvement and accreditation8. The organization now consists of approximately 7,600 accredited laboratories, with more than 20,000 laboratories enrolled in its proficiency testing programs9. CAP is designated an approved accrediting organization under the CLIA 88 regulations for proficiency testing, which means its Laboratory Accreditation Program (LAP) requirements are at least as rigorous and comprehensive as those of CMS10. A laboratory can choose to become a member of CAP and participate in LAP in lieu of CMS or state inspections11. All CAP-accredited laboratories adhere to the Standards for Laboratory Accreditation, consisting of four standards that form the core principles of the program12:

Standard I - Director and Personnel: The laboratory director and staff members must meet CAP qualifications.
Standard II - Physical Resources: The laboratory must have the means to support the activities of the laboratory and ensure restricted access to guarantee privacy for patients.
Standard III - Quality Management: The laboratory must have policies and procedures in place to ensure quality is maintained during all phases of testing and reporting.
Standard IV - Administrative Requirements: The laboratory must submit to periodic inspections and agree to the Terms of Accreditation.

Adapted from About the CAP13

These standards are applied in the laboratory accreditation checklists and form the basis for the biennial peer inspection. All sections of the laboratory must meet the standards of the General and the All Common Checklists, as well as the Checklist for specific sections of the laboratory. Specimen labeling requirements are found in the Laboratory General Checklist. They state that the patient must be positively identified before the specimen is collected and that the specimen container must be labeled at the bedside with at least two unique identifiers. Blood bank specimens must be labeled with the patient's first and last name, unique identifier, date, and identification of the person collecting the specimen12.
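The checklist requirements just described amount to a short completeness test applied to every label. The following is a minimal, hypothetical sketch of such a test; the field names, and the idea of expressing the check in software, are illustrative assumptions and are not part of the CAP checklist itself.

# Hypothetical sketch of a label completeness check reflecting the requirements
# summarized above; field names are invented for illustration.
def label_is_acceptable(label, is_blood_bank=False):
    # At least two unique identifiers, e.g. patient name plus medical record number.
    if not (label.get("patient_name") and label.get("unique_id")):
        return False
    if is_blood_bank:
        # Blood bank specimens also require the collection date and the
        # identity of the person who collected the specimen.
        return bool(label.get("collection_date")) and bool(label.get("collector_id"))
    return True

# Example: a blood bank label missing the collector's identity is rejected.
example = {"patient_name": "Doe, Jane", "unique_id": "MRN123456",
           "collection_date": "2014-05-01"}
print(label_is_acceptable(example, is_blood_bank=True))  # False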
AABB

The Blood Bank must not only pass a CAP inspection, but is also subject to periodic inspections by the state, the FDA and the American Association of Blood Banks (AABB). The AABB Technical Manual references TJC's National Patient Safety Goals in its chapter on Pre-transfusion Testing14. It defines the elements of patient identification as two independent patient identifiers consisting of:

1. Patient name, first and last
2. Unique identifier
   a. Medical record number
   b. Birthdate
   c. Driver's license number
   d. Photographic ID

Both the requisition for transfusion and the specimen must contain the same information. The person collecting the specimen must identify the patient and confirm the accuracy of the identification information before specimen collection. Methods of identification may be devised according to institutional needs, but the specimen must be traceable back to the phlebotomist. The following requirements must be fulfilled when a specimen is collected14:

1. Label the specimen while the patient is present
2. Place two patient identifiers on the specimen
3. Date and sign the label
4. Confirmation is made by laboratory staff that the requisition and specimen label match before the specimen is processed

Adapted from the AABB Technical Manual14

CAP sponsors a series of voluntary surveys called Q-probes and Q-tracks, which can function as quality improvement initiatives. These surveys provide a quantitative measure of errors that is compared to a peer group based on reported institutional characteristics15. During a Q-probes analysis of blood bank specimen labeling practices, 122 clinical laboratories reviewed all inpatient and outpatient labels on specimens submitted for blood bank testing during a thirty-day period. The combined mislabeled specimen rate for participating institutions was 1.12%. All of the institutions had a policy that defined rejection criteria for blood bank specimens, and 94.8% of the facilities had a specific procedure to label blood bank specimens16. Grimm et al. (2010) found that an increased mislabel rate was associated with specimen collection by nonlaboratory personnel, with institutional policies that allow armband replacement, and with policies that require submission of a new sample when a name change occurs during admission. A lower mislabel rate was associated with requiring location, date of birth and gender on both the label and the test requisition, and on outpatient test requisitions16. These findings may be the result of disparate policies in use by institutions. For example, all participating institutions had a policy outlining criteria for acceptance of blood bank specimens; however, 60% allowed exceptions to this policy and 25% allowed the relabeling of specimens. Establishment of benchmarks by which facilities can measure their performance, along with evidence-based best practice guidelines, may help to standardize the collection of blood bank specimens16.

In another analysis of blood bank specimen identification practices, Maskens et al. (2013) reported on a study conducted from 2005 to 2010 at a large tertiary care center in Toronto17. Data were collected in the Transfusion Error Surveillance System and analyzed over a five-year period. Three areas relating to mislabeled specimens were analyzed and the noncompliance rates reported17:
Label incomplete for patient identifiers          3.0%
Sample not labeled                                2.7%
Sample labeled with wrong patient information     2.5%

Adapted from Maskens et al., 201317

To decrease the mislabel error rate, handheld barcode devices were implemented along with the practice of rechecking the ABO group17. The report did not address the difference these measures made in the error rate. Collection of data to document the success rate of efforts to decrease labeling errors can be informative. It is clear from these studies that opportunities for improvement exist in the blood bank.

Anatomic Pathology Specimens

Pathology specimens are subject to the same labeling requirements as blood specimens; however, they incur additional opportunities for mislabeling. Errors may occur when the specimen is collected and placed in the container, when pathology staff members accession the specimens, when the block is made and cut and slides are prepared, and when the pathologist examines them. Each time a specimen is handled during the total testing process, there is an opportunity for a labeling error to occur. In the pathology department, the specimen is divided and each block and slide is labeled individually; this process affords multiple occasions for labeling mistakes. Bixenstine et al. (2013) identified the following pre-analytical areas as prone to misidentification and developed quality measures to track them and identify areas for improvement18.

• Container defects
  o Missing specimen - there is no specimen in the container or the requisition is received without a specimen
  o Label misplaced or missing - label is illegible or absent on the container
  o Patient identification missing or incorrect - container does not match the name on the requisition or the name is missing on the specimen container
  o Patient identification number missing or incorrect - container does not match the number on the requisition or the number is missing on the specimen container
  o Source or type missing or incorrect - source on the container does not match the requisition or is absent
  o Laterality missing or incorrect - right or left designation on the container does not match the requisition or is absent

Adapted from Bixenstine et al. 201318

• Requisition defects
  o Requisition missing or blank - specimen is received without a requisition
  o Date or time missing or incorrect - requisition is incomplete or incorrect when compared to the container
  o Patient name on requisition missing or incorrect - patient identification on the requisition is absent or does not match the name on the container
  o Patient identification missing or incorrect - medical record number on the requisition is absent or does not match the container
  o Source or type missing or incorrect - specimen type on the requisition is absent or different from that on the container
  o Laterality missing or incorrect - right or left designation on the requisition does not match the container

Adapted from Bixenstine et al. 201318

Data on mislabeled specimens that fit the categories described above were collected over a period of three months. The overall rate of error reported was 2.9%; container defects were 1.2%, while the error rate for requisition defects was reported to be 2.3%. This study is notable in that it considered only the error inherent in the pre-analytical phase, defined as extending from collection in the operating room to delivery in the laboratory. It was conducted at 69 facilities, and all used the standardized rejection criteria. The results represent a significant risk to patient safety.
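The container and requisition defect categories listed above are, at bottom, pairwise comparisons between what appears on the container and what appears on the requisition. A brief sketch of how those comparisons could be enumerated follows; the field names are hypothetical and are not drawn from Bixenstine et al.

# Hypothetical illustration of the container-versus-requisition comparisons
# implied by the defect categories above; field names are invented.
def find_defects(container, requisition):
    defects = []
    if not container.get("specimen_present", False):
        defects.append("missing specimen")
    for field in ("patient_name", "patient_id", "source", "laterality"):
        on_container, on_requisition = container.get(field), requisition.get(field)
        if not on_container or not on_requisition:
            defects.append(field + " missing")
        elif on_container != on_requisition:
            defects.append(field + " mismatch")
    return defects

container = {"specimen_present": True, "patient_name": "Doe, Jane",
             "patient_id": "MRN123456", "source": "breast biopsy", "laterality": "left"}
requisition = {"patient_name": "Doe, Jane", "patient_id": "MRN123456",
               "source": "breast biopsy", "laterality": "right"}
print(find_defects(container, requisition))  # ['laterality mismatch']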
Surgical specimens that are incorrectly identified can lead to incorrect therapy and adverse events18. In order to devise a plan to reduce these types of errors, laboratory staff must work in conjunction with care providers. Institutions need to make collaboration a priority, as errors may lead to serious consequences.

In another effort to focus on specimen identification, a Q-probes survey was conducted to examine the error rate that exists in surgical pathology. In 2009, 136 institutions subscribed to a Q-probes survey in which the participants examined specimens for eight weeks or until thirty errors had been identified19. The following categories were used to describe the errors that were found:

Mislabeled case - the accession number of the case was incorrect
Mislabeled specimen - incorrect laterality or specimen site labeled on the container
Mislabeled histological block - incorrect specimen in the block, or incorrect label or sequence number on the block
Mislabeled histological slide - incorrect name, ID number, sequence number or letter designation on the slide

Adapted from Nakhleh et al.19

Incidences of error are outlined in the table below:

Table 3: Pathology Q-probes Error Summary

Type of Error           Number of Errors    Number Reviewed    % Error
Mislabeled case         490                 427,255            0.11
Mislabeled specimen     796                 774,373            0.10
Mislabeled block        2,172               1,304,650          0.17
Mislabeled slide        2,509               2,261,811          0.11

Table 3: A summary of the number and types of errors that occurred at 136 institutions during the study period. Adapted from Nakhleh et al. 201119

The percentage rate of error detection was similar throughout the process, as shown in Table 3. The authors found that most study participants had quality checks at the transition points of accessioning, gross processing and block labeling, tissue cutting, and slide mounting that were effective in detecting errors19. If an error was made, it was detected in subsequent steps in most cases. Undetected errors affected patient care in 24 cases, or 1.3%19. There are many opportunities for improvement within the anatomical pathology laboratory because it is a complex process that takes a single specimen and breaks it into many parts for examination by a pathologist.

Adjusting workflow can be a means of improving the defect rate due to mislabeling errors. The Department of Pathology at the Henry Ford Hospital in Detroit, MI adapted lean processes to decrease mislabeling incidents20. They modified their workflow to print all the barcode labels for a case when it was accessioned, creating a positive identification workflow for the specimen20. They also found that adding a barcoded label that can withstand the staining process enabled the histology technicians to label slides directly at the microtome workstation, which eliminated the step of matching handwritten slides to labels after they had been stained. These modifications to the workflow routine reduced the slide misidentification rate of overall surgical cases from 1.67% to 0.63%20.
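The central idea in that workflow change, printing every label a case will need at the moment it is accessioned so that blocks and slides carry identifiers derived from the case accession number, can be sketched as follows. The numbering scheme is hypothetical and is not Henry Ford's actual label format.

# Hypothetical sketch of deriving block and slide identifiers from one case
# accession number at accessioning; the numbering scheme is invented.
def case_labels(accession, blocks, slides_per_block):
    labels = []
    for b in range(1, blocks + 1):
        block_id = "{}-B{}".format(accession, b)
        labels.append(block_id)
        for s in range(1, slides_per_block + 1):
            labels.append("{}-S{}".format(block_id, s))
    return labels

# Printing all of these up front lets the histology technician apply a
# pre-printed, barcoded label at the microtome instead of hand-matching slides.
print(case_labels("S14-01234", blocks=2, slides_per_block=2))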
Each healthcare organization must adopt its own specimen labeling and acceptance policy but there are many ways of complying with the regulation. A comprehensive policy should contain the following elements: Reason for inclusion of the requirement Specimen acceptance guidelines Criteria for specimen rejection Adapted from CAP Today Feb201021 The policy must contain clear and precise guidelines as to when a new specimen needs to be requested and should describe the patient identification process as one that uses two unique elements to identify a patient before obtaining a specimen. Situations 18 requiring a new specimen request include: Specimen/requisition discrepancy Unlabeled specimens Specimens without two unique identifiers - patient name and ID number, date of birth, or other unique number as specified in the policy Adapted from CAP Today Feb201021 Policy elements must also include guidelines for specimens that are deemed irretrievable, such as surgical specimens and cerebrospinal fluid. Procedures put in place to accept these specimens can include labeling by clinical personnel involved in collection, keeping a log of specimens and staff members affirmation of specimen identity and attaching a disclaimer to the report21. Blood bank policies are more demanding and may include additional requirements for their specimens. Requirements may include the following: A prerequisite that all tubes are hand labeled from the patients wristband at the bedside Inclusion of phlebotomist identification, as well as date and time of draw Prohibition on the relabeling of specimens Two separately drawn specimens for patients without a previous ABO history Adapted from CAP Today January 200922 Labeling Methods Matching patient wristbands to test request is a simple method that requires no special equipment. A Q-tracks program was initiated to monitor wristband errors and reported the results for a two year period15. Phlebotomists examined wristbands for errors and the findings were reported for a total of 217 institutions15. Six types of errors 19 were categorized: no wristband, wrong wristband, multiple different wristbands on one patient, incomplete wristbands, erroneous wristband information, and illegible wristbands15. The wristband error rates for 1999 and 2000 are summarized in Table 4 below. Table 4: Wristband Error Rate in 1999 and 2000 Wristband Error Rate Percentile 10th 90th 1999 11.43% 0.30% 2000 9.21% 0.28% Table 4: Wristband error rates reported by participating institutions. The bottom 10% of institutions, the 10th percentile, had a higher error rate than the top 10% of institutions, the 90th percentile.15 Participating institutions reported modification of practices that led to improvement in the second year of the study. The adoption of a policy whereby phlebotomists were instructed to refuse to draw blood from patients with incorrect or missing wristbands was found to be the most effective method of improvement15. Year over year performance from 1999 to 2000 showed improvement in the wristband error rate; this indicated that monitoring performance had a positive effect on the outcome. Patient identification is an area where laboratories must take an interdisciplinary approach and cooperation with clinical staff is necessary to achieve improvement. Barcode Technology Technology has evolved that can make the specimen labeling process less error prone. 
Positive patient identification can be accomplished using a wireless barcode scanner with printer and a patient wristband designed with a barcode that is scanned at the bedside. Labels are printed for the orders that have been put into the system for that patient; these labels are also bar coded and scanned at the bedside after the specimen is obtained. The laboratory scans the tubes upon receipt of the specimens23. Data collected in annual increments after implementation of this system, compared to data collected before implementation, found that errors decreased from 103 per year to 8 per year23. A similar system was implemented at a pediatric oncology center with similar results: the rate of mislabeled specimens fell from 0.032% to 0.005% following implementation of a positive patient identification and barcode system24. These efforts illustrate the effect the proper application of technology can have on an error-prone process such as specimen labeling. Appropriate use of this technology by trained staff members can make patient identification nearly error free.

Radio Frequency Identification

Radio frequency identification (RFID) is another method that can be employed as a means of specimen identification. This process consists of attaching a sticker that uses high-frequency radio waves to transmit data. Placement of the tag onto specimen containers provides a means of positive identification. This process was instituted at an outpatient endoscopy facility along with paperless requisitions and the adoption of a two-person site identification policy. Data were collected and compared for a three-month period before and after installation of the RFID system and the policy change. In 2007, a total of 8,321 specimens resulted in 765 labeling errors, versus 8,539 specimens with 47 labeling errors in 200825. Francis et al. (2009) incorporated three changes simultaneously to improve specimen labeling: RFID-embedded labels, paperless requisitions and a two-person site identification policy. The combination of process improvements can be credited with the progress made in decreasing specimen-labeling errors.

In addition to the personal aspect of mislabeled specimens, there is also the financial aspect. It has been estimated that the average cost of a mislabeled specimen is $712. Applying the average misidentification rate of 390 per million tests identified by Valenstein, the cost is nearly $280,000 per million tests (390 × $712 ≈ $278,000)26.

The Total Testing Process

Activities of the laboratory that are directed toward the production of test results have been termed the total testing process. Dr. George Lundberg described the process as the brain-to-brain loop27. The concept begins with a physician's idea to order a laboratory test. Nine steps proceed from this idea: order, collection, identification, transportation, preparation, analysis, report, interpretation and action28. These steps were originally condensed into three phases: pre-analytical, analytical and post-analytical29. The pre-analytical and post-analytical phases have each been further divided, adding the pre-pre-analytical and the post-post-analytical phases. This terminology has not yet been universally accepted and overlaps with the traditional designations30. Unless noted, data cited in this paper have been analyzed using the original three phases of testing. Descriptions of the additional phases have been provided because they offer a more comprehensive view of the total testing process.
Table 5 illustrates the distribution of error during the total testing process. The majority of errors are seen during the clinical stages of testing, an area not within the direct control of the laboratory. According to Plebani et al. (2010), pre-pre-analytical and post-post-analytical error account for a relative error of approximately 71%, while errors in the pre-analytical, analytical and post-analytical portions of the process occur at a relative frequency of approximately 22.5%. If the laboratory is to make progress in the reduction of errors that occur during the total testing process, an interdisciplinary approach that includes clinical staff members will be necessary30.

Table 5: Relative Frequency (%) of Errors Occurring During the Total Testing Process

Phase                   Relative Frequency
Pre-pre-analytical      46 - 68.2%
Pre-analytical          3.0 - 5.3%
Analytical              7.0 - 13.0%
Post-analytical         12.5 - 20.0%
Post-post-analytical    25.0 - 45.5%

Table 5: The distribution of error in each phase of the total testing process. Adapted from Plebani et al. (2010)30

Pre-Pre Analytical Error

The pre-pre analytical phase is the conceptual phase of ordering a test. Test selection by the physician is part of this process30,31. These activities usually occur outside the laboratory and are not performed by technical personnel. Proper test selection by the physician and input of the order into the LIS are part of the pre-pre analytical process30,31.

Pre-Analytical Error

The pre-analytical stage consists of collection, identification, transportation and preparation of laboratory specimens32. The pre-analytical phase begins with an order for a laboratory test and ends when the specimen is placed on the analyzer. Errors within the pre-analytical stage encompass mislabeled specimens. In a Q-probes study conducted by CAP in 2007, 147 participating organizations submitted data about mislabeled specimens. When analyzing data from this study, Wagar et al. (2008) defined specimen identification in the following manner:

Mislabeled specimen - one or more incorrect patient identifiers
Unlabeled specimen - received without a label
Partially labeled specimen - only one correct patient identifier
Incomplete requisition - request did not contain two correct patient identifiers
Incomplete specimen - label contained two correct patient identifiers but lacked other required information such as date, time, phlebotomist, type of specimen or sex of patient
Illegible specimen - required information could not be read
Correctly labeled specimen - two patient identifiers along with other required elements included on a label that was attached to the specimen and accompanied by the appropriate test request

Adapted from Wagar et al., 200833

Specimen types included those from hematology, chemistry and coagulation. Labels were reviewed and a total of 3,043 errors were identified from 3,324,888 labeled specimens. Labeling errors averaged 0.92 per 1,000 specimens; at the 10th percentile there were 52.27 errors per 1,000 labels, while at the 90th percentile there were 0.22 errors per 1,000 labels. Wagar et al. (2008) found the best performing laboratories had a dedicated phlebotomy team and used quality monitors for mislabeled specimens
A dedicated phlebotomy team will be more focused on patient specimen identification as their primary task than will members of the nursing staff who are responsible for many other aspects of patient care. An ongoing review of specimen labeling performance can help a laboratory concentrate on where improvement is needed. Both of these activities can help an institution improve their patient safety profile. Identifying the reasons specimens get mislabeled can lead to a program of process improvement. In order to determine why specimens get mislabeled, the Veterans Administration conducted a qualitative analysis of 227 root cause analyses (RCA) that involved adverse events in the clinical laboratory. Out of the 227 RCAs, 150 events occurred during the pre-analytical phase and 96 of those involved mislabeled specimens. Reasons for the mislabels during the pre-analytical phase included: batching labels and specimens, failure of the two identifier process, failure of the two person identification process for blood bank specimens, errors on laboratory forms and specimens without labels. Root cause analyses focused on the reason for the adverse event and the steps to be taken to prevent their reoccurrence34. After analysis of the RCAs, the authors made the following recommendations to decrease mislabeling events in the pre-analytic phase: adoption of wireless bar code technology, use of unique patient identifier to select medical record, use of electronic forms to eliminate 25 manual specimen labeling and relabeling of specimens, and implementation of a centralized phlebotomy team34. Analytical Error The analytical phase of testing is confined to the period of time the specimen spends on the analyzer and is considered to be less error prone32. It has been estimated that performance in this area approaches five sigma or five standard deviations from the process mean, which represents an error rate of just 0.002%30. Types of analytical error include analyzer malfunction, sample mix up, undetected quality control failure and interference from substances within the sample31. Improvements in technology can be credited for reducing errors in this area. Post-Analytical Error The post-analytical phase includes test result reporting and the management of critical values which begins with transmission of the result to the LIS and ends with the receipt of the result by the physician32. Post-analytical error is considered to occur within the steps that include acceptance of the result from the analyzer to documentation on the laboratory report. Some error prone activities that occur during this process include the production of the laboratory report, critical value report and manual result entry35. It is at this point that a mislabeled specimen may be erroneously validated because there may be no patient history or delta failure, a comparison of previous results with current results. The widespread adoption of laboratory instrumentation that interfaces with the LIS to automate result reporting has significantly decreased errors in this phase of the total testing process. In terms of sigma metrics, types of measurements used to assess the quality of a product, error in this phase of 26 laboratory testing is 0.0477% or 4.8 sigma36. Many of the error prone activities now reside in the post-post analytical process. Post-Post Analytical Error The post-post analytical phase is the stage in which the information generated by the laboratory is interpreted by the physician and used for patient management31,37. 
In this phase, incorrect interpretation by the physician and failure to communicate abnormal and actionable results to patients contribute to medical error. Lack of documentation of patient communication and follow-up can further add to adverse outcomes for patients30.

Mislabeled specimens can be associated with adverse events that involve unfavorable patient outcomes. A Q-probes survey conducted at 120 institutions in 2005 studied the relationship between labeling errors and patient outcomes. All identification errors were tracked for five weeks and put into two categories: those detected before and those detected after result verification. Valenstein et al. (2006) defined an identification error as a result that would potentially be reported on the wrong specimen. A total of 6,705 errors were identified, 85.5% prior to verification and 14.4% afterward. Adverse events were reported in 345 instances, none involving mortality. The authors extrapolated their findings to infer that 160,900 adverse events occur per year in our nation's hospitals38. Every institution that collects and processes patient specimens contributes to this unacceptable number of adverse events. It is up to each of them to monitor its pre-analytical error rate on an ongoing basis and use that information to design processes that make its practices safer.

The effects of mislabeled specimens are measured in the harm they cause to patients and range from the inconvenience of having a specimen recollected to the tragedy of undergoing treatment for the wrong diagnosis. There is also the possibility of a patient failing to get necessary treatment because of a mislabeled specimen. In any case, the consequences may be irreversible26.

Healthcare Failure Mode Effects Analysis (HFMEA)

A Failure Mode Effects Analysis (FMEA) is a proactive tool to assess risk in a process; it is a systematic approach used to examine a process and identify weak points before a breakdown occurs. It was originally developed by the U.S. military in 1949 to assess the effect of system failures, adopted by NASA in the 1960s to prepare for space missions, and found its way into automobile manufacturing in the 1970s24,39. The FMEA was adapted for the healthcare industry by the Veterans Administration National Center for Patient Safety soon after its founding in 1999 and designated the HFMEA40. There are five steps that must be followed when performing an HFMEA40.

1. Define the topic: The high-risk process to be studied is specifically defined and the scope of the project is established.

2. Assemble a team: A multidisciplinary team approach is favored. There should be at least one member with expert knowledge of the subject, as well as novices to the field. In addition, consultants can be called upon as needed.

3. Describe the process: Team members design a flow diagram of the process. The steps are described and numbered consecutively. Sub-processes are identified and added to the flow diagram.

4. Carry out a hazard analysis: The team looks at each process and sub-process, and then identifies the risk in each by listing all possible failure modes. As each failure mode is identified, it is scored as to probability and severity. This is done using a Scoring Matrix and Decision Tree. The Scoring Matrix helps to determine the probability of occurrence and severity of the potential failure. The Decision Tree aids in deciding whether corrective action should be applied.
5. Determine outcomes and action measures: Action measures are designed to prevent a potential failure and make the process more robust. Outcome measures are used to determine the success of the redesigned process. Action measures and outcome measures are paired, and an individual is designated to be responsible for their completion.

Adapted from DeRosier, 200240

In order to improve a process, such as specimen labeling, the areas in need of improvement must be identified. CMC undertook an HFMEA to study the areas of risk that resided within its specimen labeling practices. A committee, representing all clinical departments involved with specimen labeling, was formed in response to unacceptably high rates of mislabeled specimens. The team members used the process outlined above to identify the areas of risk in specimen labeling, and a single comprehensive specimen labeling policy was recommended to administration based on the team's findings.

Education

Efforts at education to prevent mislabeled specimens have met with varying degrees of success. Kemp et al. (2012) report on an effort in which posters were displayed to make staff aware of the types of labeling errors, as well as the potential costs41. Posters were hung around the facility and followed up with educational sessions that emphasized proper protocol. To reinforce the theme, screensavers were designed for computers on the wards41. Interestingly, these efforts did not produce a reduction in the number of mislabeled specimens. The authors concluded that human error, such as failure to choose the proper collection containers, played a large part in the failure of the interventions; thus, they felt automation could decrease errors by limiting the opportunity for human interaction41.

More successful efforts included a one-day safety summit that focused on the participation of frontline staff members. During the intensive one-day forum, participants used an FMEA to identify the risk in the labeling process and design solutions to ameliorate that risk. Involvement of frontline staff encouraged them to take ownership of the problem and led to implementation of their solutions. Weekly control charts were kept that outlined the number of occurrences, as well as the time between events. The time between incidents increased, which translated into a decrease in mislabeling errors42.

The most effective educational venture also featured clinical and phlebotomy staff members. This project was the implementation of a low-cost intervention that required the phlebotomist or nurse to audibly repeat the last three digits of the patient identification number as the label on the tube is matched with the information on the wristband43. This led to a 90% decrease in mislabeled specimens in ninety days. The improvement has been maintained and the project has been expanded to ten hospitals43. This program has been posted online for anyone to implement at www.thefinalcheck.org.

At CMC, the efforts focused on the introduction of a standardized specimen labeling policy. The Education Department created a computer-based learning module, and all staff members who collect specimens were required to complete the module. Data from before and after implementation were collected and analyzed to ascertain if there was improvement in this area.
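A minimal sketch of the kind of before-and-after comparison described here, using invented monthly counts rather than CMC's actual data and assuming the SciPy library is available, would convert monthly mislabel counts to percentages and compare the two periods with a two-tailed t-test, the statistic named in the project objectives:

# Sketch of a pre/post comparison on invented monthly data; not CMC's figures.
from scipy import stats

pre_mislabels  = [9, 7, 8, 10, 6, 9, 7, 8, 9, 10, 7, 8]   # 12 months before the policy
post_mislabels = [8, 9, 7, 9, 8, 7, 10, 8, 9, 7, 8, 9]    # 12 months after the policy
pre_volume = post_volume = 21000                           # assumed specimens per month

pre_rates  = [100.0 * m / pre_volume for m in pre_mislabels]    # percent mislabeled
post_rates = [100.0 * m / post_volume for m in post_mislabels]

t_stat, p_value = stats.ttest_ind(pre_rates, post_rates)   # two-tailed by default
print("t = {:.2f}, P = {:.2f}".format(t_stat, p_value))    # P > 0.05: no significant change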
The International Federation of Clinical Chemistry (IFCC) Working Group on Laboratory Errors and Patient Safety has developed a set of Quality Indicators (QI) and Quality Specifications (QS) to measure all phases of laboratory activity independent of facility32. This effort was meant to standardize the measurement of quality in the total testing process44. The quality indicators were formulated in an effort to create a common reporting system and to standardize data collection irrespective of the size or scope of the institution32. The quality indicators that address the pre-analytic phase of the total testing process include specimen labeling. This phase has been separated into three components: Test Order Test Request Specimen identification, collection, handling and transport Adapted from Sciacovelli et al., 201132 31 Quality indicators were assigned to each category listed above. The indicator for specimen labeling was designated as the number of mislabeled specimens divided by the total number of specimens and expressed as a percentage. These indicators were developed by consensus of the participating laboratories with the goal of identifying areas for improvement and the formulation of preventive actions32. Data on specimen mislabeling incidents were collected to determine the effectiveness of the CMC specimen labeling policy and the information was used as part of an institutional program of quality improvement. Data from before implementation were compared to post implementation data with the intended endpoint being a decrease in mislabeled specimens. This indicator served as one metric in our performance improvement program, as all rejected specimens are tracked and grouped by reason for rejection, area of collection and manner in which they were collected. Process Improvement Once problem areas have been identified, the process should be scrutinized and improved. Interventions should be devised that address the problem and not just offer a quick fix. The specimen labeling process can be improved by eliminating steps that are subject to non-cognitive human error, such as batch printing labels prior to venipuncture and entering orders into the LIS when the specimen is received in the lab. These tasks can be automated through adoption of a handheld barcode reader system interfaced with computerized physician order entry. Use of an automated specimen process can further reduce error45. UCLA Medical Center did a thorough review of their pre-analytic process and instituted the following improvements: 32 Implementation of twenty-four hour phlebotomy team Use of online event reports Acquisition of an automated specimen processor Adoption of barcode label technology Adapted from Wagar et al., 200745 The combined effect of these interventions decreased specimen-labeling errors to less than 0.1%. Subsequent error tracking by the UCLA Medical Center found that the specimen identification process failed when staff members used workarounds instead of following proper protocol45. Implementation of a zero tolerance policy for all specimen types at the Childrens Hospitals and Clinics Laboratory in Minnesota was combined with a communication campaign that resulted in a 75% reduction in mislabeled specimens. A FMEA was conducted that identified the labeling phase of the pre-analytical process as the most error prone at their facility. The FMEA team focused on this process as a way to achieve error reduction and adopted a zero tolerance policy for all mislabeled specimens. 
A provision that allowed a physician to challenge the rejection decision in the case of irretrievable specimens was included in the redesigned process46.

After improvements have been made and benefits realized, the gains must be monitored and maintained. Continuous monitoring of mislabeled samples will pinpoint problem areas where action is needed. Timely feedback to affected departments can help staff members remember the incident more clearly and adjust their actions. A study conducted by a university hospital blood bank found that the incidence of wrong blood in tube (WBIT) was much higher in the emergency room than in other departments47. Incident reporting intervals were adjusted from quarterly to weekly, and this change alone reduced incidents of WBIT by 33%47.

Six Sigma and Lean are methodologies that use teams to collaborate in an effort to find ways to reduce waste in a process. Effective methods of healthcare process improvement have been adapted from these manufacturing models. Six Sigma sets a benchmark of 3.4 defects or adverse events per million opportunities36. The chosen process is then measured against this benchmark and improvements are made to decrease defects. When the Henry Ford Hospital laboratory applied process improvements such as organization, standardization and step reduction to streamline the total testing process, defects decreased from 55% to 12.5%20,48. When they used these methods to focus on improving the rate of misidentification, the rate dropped by 62% in the histology department20.

PROJECT OBJECTIVES

1. Create a comprehensive hospital-wide policy to standardize patient identification and specimen collection practices.

An HFMEA was undertaken to determine the practices for specimen identification and labeling existing throughout the various departments of CMC. Disparities were identified in the existing policies that could lead to failures in patient identification and specimen labeling. A unified hospital policy was created that complies with the existing regulations of TJC's NPSG, CAP and CLIA.

2. Implement the new policy S-01 throughout CMC.

The proposed policy S-01 was reviewed and revised by the members of the HFMEA team. Representatives from affected departments examined the logistics of implementation in their respective departments. The policy was reviewed and approved by administration. A plan to educate staff members was formulated.

3. Compare data from before and after implementation to determine if policy S-01 has an effect on the number of mislabeled specimens reported.

Data on mislabeled specimens were collected after policy S-01 implementation and compared to data collected prior to implementation in order to identify any improvement in the rate of mislabeled specimens. The data were analyzed for statistical significance using the two-tailed t-test.

METHODS

An HFMEA was used to study the patient identification and specimen labeling practices that existed at CMC. A team was formed in July 2013 and met from July through December 2013. Team members included representatives from the Emergency Department, Nursing, the Respiratory Department, the Operating Room, the Laboratory, Radiology, Education, Standards and Administration. During meetings, each area described its process for patient identification and specimen labeling. Results of the hazard analysis identified practices that could lead to system failure. The team found disparate policies existing within CMC that could lead to failure in the patient identification process.
Disparities were identified as each department reviewed its patient identification and specimen collection practices with the HFMEA team. They are listed by area and practice in Table 6.

Table 6: Disparate Identification Practices within CMC Identified by FMEA Team

Area — Practice
Emergency Department — Eight scenarios were identified with varying identification practices, which included collection of blood without appropriate labels, without proper orders and without the registration process completed.
Inpatient Unit, non-blood — Nurse collection of specimens included a practice to prelabel containers.
Inpatient Unit, blood — Nurse collection from a central line used prelabeled tubes.
MD collects specimen — Specimen may not be labeled.
MD collects specimen — Specimen may not have appropriate requisition, orders or label.
Laboratory collects specimen — Arterial Blood Gas specimen
Operating Room; Interventional Radiology; Pathology specimens — Prelabeling of slides, requisitions and containers identified.

Table 6: Disparities in patient identification and specimen collection identified during the HFMEA.

The team's recommendation to administration was to consolidate the disparate policies into one consistent policy for patient identification and specimen collection. This consolidation resulted in Policy S-01, which was drafted by the team and formally approved by administration in February 2014. Implementation of this policy started with education of staff members, which took place during March and April of 2014. Formal implementation of the policy occurred in May of 2014. The text of Policy S-01 is shown below and lists in detail the process to be followed for patient identification when a specimen is collected. Explanatory comments have been added within the body of the policy.

PURPOSE:
• To ensure the safety of patients and to ensure that all laboratory/pathology specimens have a primary label to comply with accrediting and licensing agencies (the American Association of Blood Banks, the Joint Commission, the College of American Pathologists, the New Jersey Department of Health and the Food and Drug Administration).

The practices identified during the HFMEA, such as the prelabeling of containers and specimen collection prior to orders being placed in the computer system, were not compliant with the standards and regulations of the organizations listed above.

POLICY:
All specimens collected for the purposes of Laboratory testing (blood, non-blood, and Pathological) must be labeled at the patient bedside at the time of collection by the individual responsible for collecting the specimen(s). The primary label to be used for blood specimens will be the Laboratory System-generated bar code label except when noted in section 6. For non-blood specimens and pathology, the primary label will be a chart label. Follow Computer Downtime Procedures when computers are unavailable.

Chart labels are hospital information system (HIS) labels that are generated when a patient is registered. They contain the same information as displayed on the patient wristband. When computer downtimes occur or a specimen is collected without a LIS label available, they are affixed to the specimen container.

QUALIFICATIONS:
Qualified Individuals include: Phlebotomists, Medical Technologists (CLS, MLS and MLT), Multi-techs (Patient care technicians or nurse aides), E.D. Technicians, Registered Nurses, Patient Care Technicians, Respiratory Therapists, Mid-Level Practitioners (Physician Assistants, midwives, and Nurse Practitioners) and Physicians trained in obtaining specimens.

PROCEDURE:
1. The patient must be positively identified by comparing the following information on the laboratory computer-generated label or chart label that is to be placed on the laboratory/pathology specimen with the information on the patient's identification bracelet:
a. The patient's full name; last name, first name, and middle initial if available
b. Medical Record Number, or date of birth for Outpatients
2. The primary label, whether it be a chart or laboratory computer-generated label, must contain the following information:
a. The patient's full name; last name, first name, and middle initial if available
b. The patient's medical record number, or date of birth for Outpatients
c. The Employee ID # of the person collecting the specimen
d. The date and time the specimen was obtained.
3. The primary label must be placed on the specimen at the patient's bedside, immediately after the specimen is collected.
4. No label should be placed over the primary label that was completed at the patient's bedside. If a HIS label is affixed to the specimen container, the LIS label must be placed on the container in a manner that does not obscure the HIS label. The staff members who process the specimen must be able to see the information on both labels.

This provision addresses the disparity in which HIS labels would be affixed to a specimen container until a LIS label was available; the LIS label would be attached in a way that obscured the information on the original HIS label.

5. The laboratory-generated computer label will serve as the primary label with the following EXCEPTIONS:
• Labeling of Cord Blood Specimens
The following information is to appear neatly and legibly on the cord blood specimens.
a. Labor and Delivery is to handwrite the newborn's last name and sex, along with the date, time and Employee ID# of the person collecting the specimen, on the label that is firmly attached to the tube.

When a baby is born, it is not yet a patient and has no medical record number.

b. The cord blood specimen is transported with the newborn to the Qualified individual.
c. The Qualified individual will then contact Admitting to register the newborn. The preprinted chart label is then applied to the tube without obscuring the handwritten information. This label will contain the newborn's last name, sex, hospital number, date, time, and Employee ID# of the person applying the label.
• Labeling of Blood Bank Specimens
At the time of the specimen collection (at the patient's bedside), the following information is to be handwritten neatly and legibly on the blood specimen tube:
1) Patient's full name (must be obtained from the patient's wristband)
2) Hospital/medical record number (must be obtained from the patient's wristband)
3) Date
4) Time
5) Employee ID#
6. In emergent situations where a team is involved in treatment, the qualified person obtaining the specimen may hand off the specimen to another qualified individual in the room for labeling. The assisting individual will be responsible for transferring the specimen into the appropriate container and labeling the specimen.
a. Specimen labels will be brought to the patient's bedside. The specimens are not to leave the patient's room until properly identified and labeled.
b. The patient will be identified by the qualified individuals reading aloud the patient's name and medical record number or D.O.B. as it appears on the armband and comparing it to the same information on the specimen labels.
c. The qualified individual placing the specimen in the container will record the Employee ID numbers on the specimen label. The first Employee ID number recorded is that of the person obtaining the specimen; the second is that of the person labeling the specimen.
d. Exception: in the case of a physician obtaining the specimen, the physician's initials will be recorded.
e. If the specimen label is unavailable or the patient has not yet been registered, the blood specimens must be placed in a biohazard bag. The biohazard bag will be sealed and labeled with the patient's name and D.O.B. at the patient's bedside. The sealed biohazard bag stays with the patient until the proper labels can be generated. After registration has occurred, the ID band has been placed on the patient, and primary labels are available, a qualified individual:
1) Removes the specimen(s) from the sealed bag.
2) Validates the laboratory bar-coded label with the ID band.
3) Completes the specimen label according to hospital policy.
4) Attaches the validated label to the specimen container.
5) Sends specimen(s) to the Laboratory.
6) Discards all unused blood specimen tubes.

In normal (non-emergent) situations, only the qualified individual collecting the specimen(s) must attach the validated label to each specimen at the patient bedside and place the specimen in the biohazard bag. If the blood specimen cannot be labeled by the qualified individual who drew the blood, the specimen must be discarded and new specimens must be drawn. Blood Bank specimens can only be labeled with a chart label when drawn by the RN or physician as a line draw. In all other cases the qualified person will hand write directly on the Blood Bank tube and send it to the lab with the primary chart label on the bag. The John/Jane Doe designation may be necessary. Blood drawn by the Paramedics cannot be sent to the Blood Bank. If an emergent patient needs blood before he/she is registered (no labels are available), the patient will receive uncrossmatched O negative blood.

Bacteriology: label the BACTEC Blood Culture bottles with the patient's Sunquest/Chart Label bar code label, placing it parallel to the BACTEC bottle bar code label. Avoid covering the barcode label on the bottle, as this is used to identify the bottle type in the BACTEC instrument.

Blood culture bottles are unique because the instrument in which they are placed for incubation requires the barcode that identifies the blood culture bottle as well as the barcode that contains patient identification information.

INFECTION CONTROL:
• Maintain Standard Precautions during specimen collection and handling
• All blood and body fluids must be placed in Biohazard Bags before delivery to the Laboratory

SAFETY
• Utilize sharps safety devices when appropriate

CROSS-REFERENCES
• Clinical Policy and Procedure B-10: Obtaining Blood Specimens by Venipuncture
• Clinical Policy and Procedure S-5.2: Blood Bank – Labeling and Identification

REFERENCES
• New Jersey State Department of Health CFR 42; 8:8-9.1
• College of American Pathologists TRM.40230, TRM.40235
• American Association of Blood Banks 5.11.2 – 5.11.2.3

This policy addressed the disparities described in Table 6 with the standardization of identification practices that must be followed when a specimen is collected.
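The bedside check required by the policy can be pictured with a short sketch. The example below is illustrative only; the record and function names are hypothetical and are not part of Policy S-01 or of any CMC system, but the sketch shows the kind of two-identifier comparison that a handheld barcode reader could automate.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Wristband:
    # Identifiers printed on the patient's identification bracelet.
    full_name: str               # last name, first name, middle initial
    medical_record_number: str
    date_of_birth: str

@dataclass
class SpecimenLabel:
    # Information the primary label must carry under the policy.
    full_name: str
    medical_record_number: Optional[str]   # or date of birth for outpatients
    date_of_birth: Optional[str]
    collector_employee_id: Optional[str]
    collected_at: Optional[datetime]

def label_matches_wristband(band: Wristband, label: SpecimenLabel) -> bool:
    """Accept a label only if the name and a second identifier agree with the
    wristband and the collector and collection time have been recorded."""
    names_match = band.full_name.strip().lower() == label.full_name.strip().lower()
    second_id_match = (
        label.medical_record_number == band.medical_record_number
        or (label.date_of_birth is not None
            and label.date_of_birth == band.date_of_birth)
    )
    collection_details_present = (
        label.collector_employee_id is not None and label.collected_at is not None
    )
    return names_match and second_id_match and collection_details_present
```

In practice, such a comparison would be driven by scanning the wristband and the LIS label rather than by manual data entry; the point of the sketch is simply that both the name and a second identifier must agree, and that the collector and collection time must be recorded, before the specimen leaves the bedside.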
The new policy provided concise guidelines in one document. The wristband must be compared to the label before the specimen is obtained, and the approved labels are those that are generated from the LIS and HIS. The specimen must be labeled at the bedside or in the presence of the outpatient. Employee numbers identify the person who collects the specimen.

The educational format recommended by the HFMEA team was a computer-based learning module, but the training was not implemented in that form. Instead, Policy S-01 was distributed to all nursing directors and the phlebotomy supervisor to review with their staff members. Staff members, including nurses, patient care technicians and phlebotomists, were required to read the policy and sign a sheet acknowledging the elements of the new policy for patient identification and specimen labeling. All staff members who collect specimens were required to acknowledge the new policy. The phlebotomy department achieved 100% compliance. It has not been possible to determine the degree of compliance that was achieved by nursing staff members, although inquiries were made. This is partly because the records were kept within the units and two years have elapsed. There have also been changes in leadership; the Operations Director of the Laboratory, the Director of Standards and the Director of Nursing have all moved on, as have individual Unit Directors.

Once education of staff members was completed, Policy S-01 was implemented. From May 2013 through April 2015, all mislabeled specimen events that occurred at CMC were logged into the Midas (Xerox, USA) reporting system. This is an online system that stores information on risk events from all facilities within the system. This database is used to produce reports on the occurrence of specific types of events that may produce adverse outcomes for patients. The Midas database is used by CMC to store data on risk events; mislabeled specimens are one type of risk event. Mislabel events are logged under the Lab Incident Form; screenshots are shown below in Figure 1.

Figure 1: Screenshots of the Midas Reporting System
Figure 1: The data entry form from the Midas reporting system in use at CMC is shown.

As shown above in Figure 1, the form contained fields to input information about the incident: patient name, medical record number, facility where the event occurred, event date, event type, paperwork pertaining to the incident, physician, department(s) involved, staff members involved, a description of the incident, its significance in relation to patient care, what examination or treatment was given and witnesses to the incident. For example, a mislabeled specimen report would contain the date and time the error occurred, when it was discovered, who was contacted in order to correct the error and how it was resolved. Usually, the specimen would be discarded, another specimen collected and, if results had been entered into the LIS, a corrected report issued. Occasionally, the specimen would not be recollected and the order would be canceled. A report was generated from Midas that provided the total number of mislabeled specimens each month. This report was used to compare the number of mislabeled specimens that occurred monthly before the implementation of Policy S-01 with the number that occurred afterward.
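To illustrate how such a monthly report feeds the quality indicator described earlier, the sketch below counts logged mislabel events by calendar month and expresses each count as a percentage of the specimens collected that month. The event records, volumes and column names are hypothetical; the actual figures in this study came from the Midas report and the LIS, not from a script of this kind.

```python
import pandas as pd

# Hypothetical mislabel events and monthly specimen volumes, for illustration only.
events = pd.DataFrame({
    "event_date": pd.to_datetime(["2013-05-03", "2013-05-17", "2013-06-09"]),
    "event_type": ["mislabeled specimen", "wrong specimen", "mislabeled specimen"],
})
monthly_volume = pd.Series({"2013-05": 23000, "2013-06": 21500}, name="total_specimens")

# Count events per calendar month.
monthly_events = (
    events.groupby(events["event_date"].dt.to_period("M"))
    .size()
    .rename("mislabels")
)
monthly_events.index = monthly_events.index.astype(str)

# Quality indicator: mislabeled specimens as a percentage of total specimens.
mislabel_rate_percent = monthly_events / monthly_volume * 100
print(mislabel_rate_percent)
```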
The period before implementation was May 2013 through April 2014, while the period after implementation was May 2014 through April 2015. The number of mislabeled specimens that occurred each month from May 2013 to April 2014 was compared to the post-policy-implementation number of mislabeled specimens that occurred from May 2014 through April 2015. The data were analyzed for statistical significance using the paired t-test. The paired t-test compared the variation in the number of mislabeled specimens occurring before the implementation of Policy S-01 to the number occurring after.

The percentage of the total number of specimens collected per month that were mislabeled was also calculated and analyzed using linear regression analysis. Linear regression analysis was used to investigate a trend toward decreased mislabeling incidents after Policy S-01 was implemented. Individual regression analyses were performed on the percentage of mislabeled specimens per month prior to policy implementation and the percentage of mislabeled specimens per month after policy implementation. A t-test was performed on the slopes of the regression lines produced from the regression analyses and analyzed for significance49. The slopes of the lines produced by regression analysis were compared to see if there was a significant difference between the two data collection periods.

RESULTS

The number of mislabeled specimens reported each month is shown in Table 7. These data were obtained from the MIDAS reporting system and are the sum of mislabeled specimens and wrong specimens. In the MIDAS system, mislabeled specimen refers to the wrong blood in the tube, while wrong specimen is a broader category that encompasses unlabeled specimens and specimen/transmittal mismatch. In the period May 2013 through April 2014, there were a total of 122 mislabeled specimens, as compared to 112 total mislabeled specimens from May 2014 to April 2015.

Table 7: Summary of the Number of Mislabeled Specimens from 2013 to 2015

Mislabeled Specimens 2013-2015
        Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
2013      -    -    -    -   20    7    4   18    7    7   11    6
2014     14    6    8   14   10   16   11    4   13    6   13    7
2015      7    8    6   11    -    -    -    -    -    -    -    -

Table 7: The number of mislabeled specimens during each month of the study. Values from May 2013 through April 2014 represent mislabel events before policy implementation, while values from May 2014 through April 2015 represent mislabel events after implementation of Policy S-01. (Community Medical Center MIDAS system)

Analysis of the number of mislabeled specimens per month was performed using the two-tailed t-test and a 95% confidence interval to compare the number of mislabeled specimens in the year before implementation to the number in the year after implementation of Policy S-01. Tcalc was less than Tcrit, and the P-value was 0.68. Tcalc is the calculated value for the t-test, while Tcrit is the value at which the test would achieve significance50. Thus, there was a failure to reject the null hypothesis (H0). The small annual decrease from 122 mislabeled specimens in the year prior to Policy S-01 implementation to 112 in the year after did not translate into a statistically significant decrease in the number of mislabeled specimens.

The percentage of mislabeled specimens out of the total number of specimens each month was calculated. These results are shown in the graph in Figure 2; the solid line represents the period before Policy S-01 was implemented, while the dotted line represents the period after implementation.
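A minimal sketch of this style of analysis is given below. The monthly percentages are invented placeholders and the original analysis was not performed with this script; the sketch only shows one standard way to run a paired t-test on the monthly values, fit a regression line to each period, and compare the two slopes with a t-test (using n1 + n2 - 4 degrees of freedom, a common formulation for comparing regression slopes).

```python
import numpy as np
from scipy import stats

# Hypothetical monthly mislabel percentages: twelve months before Policy S-01
# and twelve months after (placeholder values, not the CMC data).
pre = np.array([0.009, 0.007, 0.005, 0.012, 0.008, 0.007,
                0.010, 0.006, 0.011, 0.006, 0.008, 0.010])
post = np.array([0.008, 0.011, 0.009, 0.004, 0.010, 0.005,
                 0.009, 0.006, 0.006, 0.007, 0.005, 0.008])

# Paired t-test comparing the same calendar months before and after implementation.
t_stat, p_value = stats.ttest_rel(pre, post)

# Separate linear regressions of the monthly percentage against month number.
months = np.arange(1, 13)
reg_pre = stats.linregress(months, pre)
reg_post = stats.linregress(months, post)

# t-test on the difference between the two regression slopes, using the
# standard errors reported by linregress.
slope_diff = reg_pre.slope - reg_post.slope
se_diff = np.sqrt(reg_pre.stderr ** 2 + reg_post.stderr ** 2)
dof = len(pre) + len(post) - 4
p_slopes = 2 * stats.t.sf(abs(slope_diff / se_diff), dof)

print(f"paired t-test p = {p_value:.2f}")
print(f"R (pre) = {reg_pre.rvalue:.2f}, R (post) = {reg_post.rvalue:.2f}")
print(f"slope comparison p = {p_slopes:.2f}")
```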
Figure 2: Graph of % of Mislabeled Specimens from 2013 to 2015
Figure 2: The number of mislabeled specimens in each month of the study, May 2013 - April 2015, expressed as a percentage of total specimens. The yellow line represents the percentage of mislabeling events that occurred before Policy S-01 was implemented; the blue line represents the period after Policy S-01.

Two regression analyses were performed: one for the percentage of mislabeled specimens per month in the period before Policy S-01 was implemented and one for the percentage of mislabeled specimens per month in the period after Policy S-01 implementation. Both analyses were performed using a 95% confidence level. The R coefficient before implementation was 0.03 and after implementation it was 0.09. There was no significant linear relationship to be discerned among the values on the scatterplot for either time period. Neither R coefficient indicated that the variation in the percentage of mislabeled specimens had a significant correlation with the monthly time period50. The significance F calculated for the period before Policy S-01 was 0.61; it was 0.33 in the period after. Both statistics are far above the significance threshold of 0.05.

A t-test was performed on the slopes of the lines resulting from the regression analyses using a 95% confidence interval. A graph of the lines obtained from the regression analyses is shown in Figure 3. The diamonds represent the percentage of mislabeled specimens per month before Policy S-01 was implemented, while the squares represent the percentage of mislabeled specimens per month after Policy S-01 was implemented. The solid line represents the regression line from the pre-implementation period, while the dotted line represents the regression line calculated from the post-implementation period. Neither line denotes a good fit of the data. The slope of the dotted line appeared to be slightly lower; a t-test was performed on the slopes of the two lines to look for a significant difference. The t-test comparing the slopes of the lines from the regression analysis returned a P-value of 0.99, which is far above the threshold of 0.05 that would indicate a significant difference between the slopes of the lines. Therefore, there was a failure to reject the null hypothesis (H0). There was no significant change in the number of mislabeled specimens in the period before implementation of Policy S-01 when compared to the period after implementation of the policy.

Figure 3: Graph of Linear Regression Analyses of Pre and Post Policy S-01 Implementation Data
Figure 3: This graph shows a comparison of the results of the regression analysis performed on the pre- and post-implementation percentage of mislabeled specimens per month. The fitted lines were y = -9E-08x + 0.0038 (R² = 0.028) for the pre-implementation data and y = -1E-07x + 0.0049 (R² = 0.093) for the post-implementation data.

DISCUSSION

During the period of data collection at CMC, the specimen mislabel rate fluctuated randomly (Fig. 2). There was no statistically significant decrease in mislabeled specimens in the period after Policy S-01 was implemented when compared to the previous period.
The P-value was 0.68, far above the significance threshold of 0.05, so there was no statistical basis to reject the null hypothesis. Regression analysis also confirmed the failure to reject the null hypothesis. R-values were 0.03 for the pre-implementation data and 0.09 for the post-implementation data. Neither value indicates a linear relationship in the data. The t-test comparing the slopes of the regression lines did not achieve significance. The P-value was 0.99, far above the threshold of 0.05. Thus, there was no statistical evidence that the implementation of Policy S-01 had an effect on the number of mislabeled specimens.

Why did the policy fail to have an impact on the specimen identification practices of personnel within the medical center? One reason may be that Policy S-01 was implemented by reading the policy and signing in acknowledgment of its content. Changing routines once they have been established may require more intensive training than was given to staff members. A more active approach may have been able to induce the staff members to take ownership of this problem. It is also possible that the policy was read and staff members signed off on it but never changed their practices to comply with Policy S-01. The policy did not include corrective action for failure to comply, though individual departments have enacted their own remedial procedures.

Coincident with the data collection period, an effort was made to transition to routine collection of specimens by nursing personnel, beginning with the intensive care units. In an effort to reduce the size of the dedicated phlebotomy team, the phlebotomy supervisor trained the nursing staff on the intensive care units to collect routine specimens from their patients instead of having the phlebotomy team collect them. Upon implementation of this practice, a sharp increase in mislabeled specimens from these units was noted; the practice of having nurses collect routine blood specimens was discontinued shortly after it began. The enactment of this project during the data collection period could have had an adverse effect on the results by introducing an unquantified variable that increased the number of mislabeled specimens.

CMC has participated in the CAP Q-Tracks survey, QT-3: Laboratory Specimen Acceptability, since January 2013. This survey measures a wide range of performance indicators for specimen acceptability and gives CMC a statistical snapshot of how performance in this area compares to other institutions of similar size and complexity. A copy of the survey is included in Appendix B. One of the performance indicators is mislabeled specimens; this survey enables us to see how CMC's specimen rejection rate compares to its peers. Data submissions by the laboratory are measured against a peer group that is matched to institutional characteristics based on user-submitted data. In the first quarter of 2015, 74 institutions participated in the survey and 14 institutions were in CMC's best-match group. CMC's data are measured both against the entire group and against the matched institutions. The data are also broken down into subcategories of rejection reasons, which will help CMC focus performance improvement efforts. In the first quarter of 2015, CMC's overall specimen rejection rate was 0.18%, while the rejection rate of our best-match institution was 0.60%.
Mislabeled specimens comprised 1.2% of CMC's rejected specimens, while the average percentage of mislabeled specimens reported from all participating hospitals during this quarter was 1.6%. Comparison to a closely matched peer group shows that the CMC mislabel and rejection rates are below those of our peers.

Limitations

The failure to identify problem areas within CMC was due to the inability to retrieve and analyze data by location. Units were renamed; they were also opened and closed inconsistently during the data collection period. Factors that contributed to this practice were fluctuation of the census, an initiative that was undertaken to transition the facility to private rooms and an ongoing policy to consolidate units in order to thoroughly disinfect empty ones. Another limitation was that data collection depended on the manual input of mislabel events and could be subject to human error. Also, the ability to identify types of collections, such as line draws, nurse and physician draws, and lab versus non-lab draws, could have provided useful information if the data had been categorized in this manner. A computer-based learning module to train staff members about the new policy was planned but not implemented. Instead, the policy was given to nursing directors and the phlebotomy supervisor to introduce to their staff members. A more effective method of implementation may have produced better results.

Recommendations

In order to improve patient identification and specimen labeling practices in the future, the following recommendations may be effective. Greater staff engagement in this vital area should be pursued. If staff members are actively involved in the solutions that are enacted, it may induce them to take ownership of the problem. Successful efforts to decrease specimen mislabels have been reported by Maund et al. (2002)42; this initiative had staff members analyze the problem of patient identification, devise their own solutions and implement the solutions upon return to their units42,43. The Final Check was a simple initiative that instructed the staff member collecting the specimen to repeat out loud the last three digits of the medical record number as the wristband was matched to the label43. Ongoing initiatives like these, which make patient identification a priority at all times, may help staff members maintain awareness and adopt better practices. A modification of data collection to include the ability to identify problem areas could lead to more focused process improvement efforts.

CONCLUSION

In recognition of the failure of Policy S-01's implementation to decrease specimen labeling errors, and coinciding with an upgrade in the LIS, CMC will transition to the use of handheld barcode readers for specimen collection in 2016. Adoption of this technology will make compliance easier for staff members, provided the barcode readers are used as intended and not subjected to workarounds. Analytical error in the clinical laboratory has been reduced to levels that approach five sigma30. This progress has been accomplished using barcode technology for positive patient identification, along with improvement in analyzer accuracy. Errors that occur within the analytical process that are not due to analyzer error have their origin in the pre-analytical phase. Efforts to reduce these errors have focused on the standardization of indicators32.
The IFCC has proposed quality indicators to standardize the measurement of all phases of laboratory testing; as these measures are adopted and data are collected, standardization can occur and benchmarks can be established. These benchmarks can provide focus for process improvement activities. It is important for institutions to follow a practice of continuous improvement, especially in high-risk areas such as specimen labeling.

The focus on patient safety in the healthcare industry has intensified. Initiatives that link quality healthcare to reimbursement have become a cornerstone of CMS policy. Beginning in FY2013, the CMS Hospital Value-Based Purchasing plan linked quality of care to reimbursement rates51. Under this plan, payment is linked to performance based on quality measures that are applied to physicians' offices, ambulatory care facilities, hospitals, nursing homes, home health agencies and dialysis facilities52. The areas that hospitals report are process-of-care measures, readmission rates, medical imaging use and patient experiences53. In these categories, measures include timeliness and effectiveness of care, rates of hospital-acquired infections, complications and mortality53. Hospitals will be rewarded for quality performance based on these measures.

Healthcare facilities are being held accountable for providing quality care to their patients. Quality care is appropriate care in a safe environment, and accurate specimen labeling is integral to patient safety; regulatory agencies are unanimously clear on this point. It does not matter how well an institution performs in other areas if this essential component of providing patient care is missing. Implementation of standardized systems needs to take place within and among organizations in order to reduce variability. Assessing facility systems and making comparisons with those that are models of accuracy can lead to improvement.

APPENDICES

APPENDIX A: Results of CMC's Hazard Analysis

Each hazard analysis worksheet summarized below scores the failure modes and potential causes for one step of the specimen collection process, with columns for Severity, Probability, Hazard Score, Single Point Weakness?, Existing Control Measure?, Detectability and Proceed?, together with the HFMEA Step 5 columns for actions or rationale for stopping, outcome measures, person responsible and management concurrence.

Table 8: Step One of CMC's Hazard Analysis
Table 8: Step one of CMC's hazard analysis examines the risk in obtaining an order for a specimen.
!"#$"%&$'$"%'$"()($#7occasional9non/an/a*"#$"%+,-&.$/0%-##%*&%1$/1/1frequent4'$"()2%1,$%3"*&.%*"#$"1occasional3acknowledgement of order - Step2Person ResponsibleManagement ConcurrenceAcknowledgement of order - Step2HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome Measure Table 9: Step two of CMCs hazard analysis examines the risk areas of acknowledgment of an order. 58 Table 10: Step Three of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"#$%&!'()*+,)-%!.$&!').+//!.($+')'+()010$/02/!1remote13,+'4)!"#$%&!'()+2(0$'!51uncommon23,+'4)/02!/-)+2(0$'!57frequent12yesyesn/an/a'+)/02!/-1occasional3Obtain needed equipment for specimen collection (container; labels; ect.)HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureObtain needed equipment for specimen collection (container; labels; ect.)- step 3Person ResponsibleManagement Concurrence Table 10: Step three of CMCÕs hazard analysis examines the risk encountered when obtaining equipment for specimen collection. 59 Table 11: Step Four of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"#$%"&'()#)*7frequent12yesyesn/an/a+,!-$.,-/$#"$%"&',0)$(,1)(/$#"$"0*)0/2$*)')!*/$"!$/')%3&)!4occasional6&4(#3'()$(,1)(/$"!$&4(#3'()$',#3)!#/$3!$/,&)$,0),$5678$9$/#)'$:7frequent12yesyesn/an/aCompare patient labels to specimen order - step 4HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureCompare patient labels to specimen order step 4Person ResponsibleManagement ConcurrenceTable 11: Step four of CMCs hazard analysis examines the risk in comparing patient labels to the specimen order. 60 Table 12: Step Five of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"#$%#&'!(#"!#)'*+,!*10uncommon8yesyesn/an/a-."!/#$%#&'!(#"!#)'*+,!*10uncommon8yesyesn/an/a)'*+,!*#0!'&1,#*"#)'.*+2+)'*,#+!#$%#)."2,331frequent4$%#)"1+24#!"*#5"11"-,(10frequent16yesnonoyesComplete patient ID by comparing patient ID to labels (done at patient bedside)-step 5Person ResponsibleManagement ConcurrenceComplete patient ID by comparing patient ID to labels (done at patient bedside) - step5HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 12: Step five of CMCs hazard examines the risk when patient ID is compared to patient labels at bedside. 61 Table 13: Step Six of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? 
!"#$%&'()*+$)10occasional12yesyesn/an/a,"#$%&)+-)1uncommon2$#)&.#$+1uncommon2Explain specimen collection needed to patient - step 6Person ResponsibleManagement ConcurrenceExplain specimen collection needed to patient - step 6HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 13: Step six of CMCs Hazard Analysis examines risk in explanation of specimen collection to patient. 62 Table 14: Step Seven of CMCs Hazard Analysis Table 14: Step seven of CMCs hazard analysis examines the risk in prelabeling a specimen. 63 Table 15: Step Eight of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"#"$%&'()*+",4occasional6-'()*+",%.$&/%0$&+1%2)(*"+(10uncommon8yesnonoyes3$&+1%42"5*/"+%&'()*+",10uncommon8yesyes; weakness in lab controln/an/aObtain specimen - step 8Person ResponsibleManagement ConcurrenceObtain specimen - step 8HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 15: Step eight of CMCs hazard analysis examines the risk in obtaining a specimen from a patient. 64 Table 16: Step Nine of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"#$%&'(%()$&#$&%%4frequent8yesyesn/an/a!"#$%&'(%()$&#$'()*+)(,$%&'(%()$"-#*+)($".$/&#+(0#$&1(&21""37frequent12yesnonoyes41"05$%&'(%7frequent12yesnonoyesLabel obtained specimen at bedside - step 9Person ResponsibleManagement ConcurrenceLabel obtained specimen at bedside - step 9HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 16: Step nine of CMCÕs hazard analysis examines the risk when specimen is obtained and labeled at the bedside. 65 Table 17: Step Ten of CMCÕs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? !"##"$%&'('!'$)#&*$&(+,'(4frequent8yesnonoyes("$'&-.+/&$*)&!+.0'-&1*..'1)(21frequent4$*)&3+$-&(+,'('-4uncommon44.*$%&5+)"'$)10frequent16yesnonoyesLabel contains: date; time; initials (employee ID #);Line draw marked; Bld Bank hand labeled with BD; MR; Date; time; and initials or employee ID #) - step 10Person ResponsibleManagement ConcurrenceLabel contains: date; time; initials (employee ID #);Line draw marked; Bld Bank hand labeled with BD; MR; Date; time; and HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 17: Step ten of CMCs hazard analysis examines the risk in the obtaining specimens by line draw or hand label for blood bank. 66 Table 18: Step Eleven of CMCs Hazard Analysis SeverityProbabilityHaz ScoreSingle Point Weakness? Existing Control Measure ?DetectabilityProceed? 
!"#$%&'()*$+,$-'.$"/$%&'()*$+,$0/",.$-'.1remote10/",.$/)12+3+#)$%&'()*$0+#4$3%)(+5),$+,$-'.7occasional9yesyesn/an/a&)'6+,.$"/$(",#'5+,'#)*$3%)(+5),4occasional6Place in Bio-hazard bag - step 11Person ResponsibleManagement ConcurrencePlace in Bio-hazard bag - step 11HFMEA Step 4 - Hazard AnalysisHFMEA Step 5 - Identify Actions and OutcomesFailure Mode: First Evaluate failure mode before determining potential causesPotential CausesScoringDecision Tree AnalysisAction Type (Control, Accept, Eliminate)Actions or Rationale for StoppingOutcome MeasureTable 18: Step eleven of CMCs hazard analysis examines the risk in placing a specimen into a biohazard bag. 67 APPENDIX B: A Q-tracks Survey Report from the College of American Pathologists for Community Medical Center Figure 4: Q-Tracks Survey Quality Management Report Figure 4: An example of Q-trackÕs quarterly quality management report. College of American PathologistsQ-TRACKS Quality Management Report: January-March, 2015MonitorMailingYour Result!All Institutions DistributionThe bar graph ranges from the 10th to 90th percentiles.The thick vertical line represents the median.QT3: Laboratory Specimen Acceptability15A0.18!14D0.18!14C0.15!14B0.18!14A0.13!13D0.15!13C0.16!13B0.18!0.00.30.71.01.3Specimen Rejection Rate (%)Quality Management ReportCAP Number: 12151-01-01 68 Figure 5: Q-Tracks Executive Summary Figure 5: An example of Q-Tracks quarterly Executive Summary. Q-TRACKS 2015Executive Summary: January-March, 2015Community Medical Center99 Route 37 WToms River NJ 08755-6423QT3 - Laboratory Specimen Acceptability!Your specimen rejection rate is 0.18 for this quarter.!There are no questionable data reported this quarter.!There are no out-of-control points for this quarter detected on your control chart.!There are no significant trends (six or more successive increasing or decreasing values endingin this quarter) on your Trend Analysis Report.QT3 - Laboratory Specimen AcceptabilityExecutive SummaryCAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015Customer Service 800-323-4040, option #1For your institution's useReviewed by:Date:Comments/Actions: 69 Figure 6: Q-Tracks Individual Data Summary Figure 6: An example of Q-Tracks quarterly data summary. Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityIndividual Data Summary: January-March, 2015Input Items andPerformance IndicatorsPreviousQuarterCurrentQuarter%ChangeCumulativeQuarters *Input ItemsNumber of rejected specimensTotal number of specimensRejection Reasons:Specimen lost/not receivedUnlabeled specimenMislabeled specimenIncomp. labeled spec./inadeq. filled-out formRequisition does not match specimenWrong date or time collection errorWrong collection containerAge of specimen (too old)Specimen hemolyzedLipemia or icteric specimenSpecimen clottedContaminated specimen (IV fluid dilution)Insufficient specimen quantityUnacceptable variance (delta check)Wrong temperatureOther reason464262,584 19 0 6 5 0 3 4 0161 0120 0 34111 1 0499276,090 2 0 6 4 0 0 1 0128 0131 0 59168 0 0NA-100.0 51.4 73.5NA 9.2NA -20.5NA -75.0-100.0NA -20.0 0.0NA -89.5 5.1 7.51,8921,097,277 39 0 30 15 0 6 13 0542 0480 0228537 2 0Performance IndicatorsSpecimen rejection rate (%)Rejection Reasons Breakdown (%):Specimen lost/not receivedUnlabeled specimenMislabeled specimenIncomp. labeled spec./inadeq. 
filled-out formRequisition does not match specimenWrong date or time collection errorWrong collection containerAge of specimen (too old)Specimen hemolyzedLipemia or icteric specimenSpecimen clottedContaminated specimen (IV fluid dilution)Insufficient specimen quantityUnacceptable variance (delta check)Wrong temperatureOther reason0.18 4.1 0.0 1.3 1.1 0.0NA 0.9NA34.7NA25.9NA 7.323.9 0.2 0.00.18 0.4 0.0 1.2 0.8 0.0 0.0 0.2 0.025.7 0.026.3 0.011.833.7 0.0 0.0NA-100.0 41.0 61.6NA 1.5NA -25.9NA -77.8NANA -27.3 -7.7NA -90.2 0.0>25%>25%>25%>25%>25%>25%>25%0.17 2.1 0.0 1.6 0.8 0.0 0.3 0.7 0.028.6 0.025.4 0.012.128.4 0.1 0.0* The cumulative quarters period is April 2014 - March 2015. You have submitted data for all four quarters.Individual Data SummaryCAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015 70 Figure 7: Q-tracks External Comparison Report Figure 7: An example of Q-Tracks quarterly comparison report. Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityExternal Comparison Report: January-March, 2015Current Quarter - Specimen Rejection Rate (%)External Comparison Report - Page 1CAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015!!!"#0.00.30.60.91.1Specimen Rejection Rate (%)FingerprintCluster (N=14)Customer-definedGroup (N=7)All Institutions(N=74)10th %tile(left edgeof box)25th%tileMedian(vertical line)75th%tile90th %tile(right edgeof box)0.080.220.420.651.040.140.240.390.510.580.110.240.400.560.59!Your Data0.18"Best Match0.60#2nd Best Match0.61Report KeyPeer Comparisons:All Institutions - This group includes all participants who have submitted quarterly data.Customer-defined group - Participants matching your customer-defined selections: $Hospital Complexity - General acute care$Fourth selection not used$% billable procedures from outpatient/outreach sites$Fifth selection not used$Inpatient Phlebotomy - Primarily performed by lab staffFingerprint Cluster - The 14 participants who most closely match your operational characteristics.Best Matches - Participants in your fingerprint cluster who most closely match your institution's characteristics.Performance:The bar graph ranges from the 10th to 90th percentile. The thick vertical line represents the median value.Lower percentiles (shaded area and lower) represent better relative performance.Current Quarter - Breakdown of Specimen Rejection ReasonsYour Data (%)Specimen Rejection ReasonsAggregate Percent*Unacceptable variance (delta check)Specimen clottedSpecimen hemolyzedInsufficient specimen quantityMislabeled specimenIncomp. labeled spec./inadeq. filled-out formSpecimen lost/not receivedWrong collection containerOther reasonWrong temperatureContaminated specimen (IV fluid dilution)Lipemia or icteric specimenAge of specimen (too old)Wrong date or time collection errorRequisition does not match specimenUnlabeled specimen33.726.325.711.8 1.2 0.8 0.4 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.723.334.615.8 1.6 1.3 4.8 2.6 5.2 0.5 2.9 2.2 1.2 0.4 0.2 1.0* This percent is a breakdown of the 54,438 rejected specimens for this quarter. 71 Figure 8: Q-Tracks Cumulative External Comparison Report Figure 8: An example of Q-Tracks quarterly cumulative external comparison report. 
Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityExternal Comparison Report: January-March, 2015Cumulative Quarters: April 2014 - March 2015Specimen Rejection Rate (%)External Comparison Report - Page 2CAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015!0.00.30.61.01.3Specimen Rejection Rate (%)All Institutions(N=96)10th %tile(left edgeof box)25th%tileMedian(vertical line)75th%tile90th %tile(right edgeof box)0.070.180.390.631.19!Your Data0.17Report KeyPeer Comparisons:All Institutions - This group includes participants who have submitted data for the most recent four quarters.Performance:The bar graph ranges from the 10th to 90th percentile. The thick vertical line represents the median value.Lower percentiles (shaded area and lower) represent better relative performance.Cumulative Quarters: April 2014 - March 2015Breakdown of Specimen Rejection ReasonsYour Data (%)Specimen Rejection ReasonsAggregate Percent*Specimen hemolyzedUnacceptable variance (delta check)Specimen clottedInsufficient specimen quantitySpecimen lost/not receivedMislabeled specimenIncomp. labeled spec./inadeq. filled-out formWrong collection containerWrong date or time collection errorWrong temperatureOther reasonContaminated specimen (IV fluid dilution)Lipemia or icteric specimenAge of specimen (too old)Requisition does not match specimenUnlabeled specimen28.628.425.412.1 2.1 1.6 0.8 0.7 0.3 0.1 0.0 0.0 0.0 0.0 0.0 0.033.4 2.423.315.2 7.7 1.9 1.0 2.5 0.4 0.4 4.9 2.9 1.4 1.3 0.3 1.1* This percent is a breakdown of the 201,274 rejected specimens for these quarters. 72 Figure 9: Q-Tracks Trend Analysis Report Figure 9: An example of Q-Tracks quarterly trend analysis report. Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityTrend Analysis Report: January-March, 2015!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!0.00.050.100.150.200.250.300.3504/0704/1404/2104/2805/0505/1205/1905/2606/0206/0906/1606/2306/3007/0707/1407/2107/2808/0408/1108/1808/2509/0109/0809/1509/2209/2910/0610/1310/2010/2711/0311/1011/1711/2412/0112/0812/1512/2212/2901/0501/1201/1901/2602/0202/0902/1602/2303/0203/0903/1603/2303/30!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Overall Average = 0.17Lower Control Limit (LCL) = 0.03Upper Control Limit (UCL) = 0.31Specimen Rejection Rate (%)Spec hemolyzedUnaccept varianceSpec clottedInsuff quantitySpeclost/ not receivedMislabel specIncomplbl or reqWrong containerWrongdateor timeIncorrect tempOther reasonContam-inatedLipemiaor icteric specimenAgeRequis.notmatchedUnlabel spec051015202530Percentages of Rejection ReasonsTrend Analysis ReportCAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015 73 Figure 10: Q-Tracks Peer Characteristics Report Figure 10: An example of Q-Tracks quarterly peer group characteristics report. 
Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityPeer Group Characteristics Report: January-March, 2015Demographics InformationPercent Match with Your Best Matching ParticipantPercent Match with Your Second Best Matching ParticipantPercent Match within Your Fingerprint ClusterInstitution AffiliationInstitution LocationTeaching HospitalResidents TrainingCAP InspectionsJCAHO InspectionsOccupied Bed Size100.0100.0 0.0100.0100.0100.0100.0 0.0100.0100.0100.0100.0100.0100.0 85.7 50.0 50.0100.0 85.7 92.9 42.9Items Chosen from Customer-Defined Group Master List (Ordered from Most to Least Important)Percent Match within Your Customer-Defined Comparison GroupHospital Complexity - General acute care% billable procedures from outpatient/outreach sitesInpatient Blood Specimens - Primary performed by lab staffTeaching Hospital - NoInstitution Location - Suburban100.0100.0100.0Selection not usedSelection not usedFingerprint Cluster List*12059-01-0518803-01-0123949-01-02Best Match: 12059-01-052nd Best Match: Not listed in Peer Directory* This list contains only those participants included in the 2015 Q-TRACKS Peer Directory.Peer Group Characteristics ReportCAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015 74 Figure 11: Q-Tracks Quarterly Historical Data Report Figure 11: An example of Q-Tracks quarterly historical data summary report. Q-TRACKS 2015: QT3 - Laboratory Specimen AcceptabilityHistorical Data Report: January-March, 2015!!!!!!!!!0.120.130.140.150.160.170.1813-A13-B13-C13-D14-A14-B14-C14-D15-ASpecimen Rejection Rate (%)Quarter"Better PerformanceSpecimen Rejection Rate (%)Performance Indicator SummaryAverage rate:Maximum rate:Minimum rate:# of qtrs data submitted:90.130.180.16Historical Data ReportCAP Number: 12151-01-01Kit #: 28146040Report Date: 05/19/2015 75 Figure 12: Q-Tracks Quarterly Summary Report Figure 12: An example of Q-Tracks quarterly summary report. 
The quarterly summary report also states the performance indicator calculation used throughout the survey: Specimen Rejection Rate (%) = number of rejected specimens ÷ total number of specimens (rejected and not rejected) × 100.

APPENDIX C: Glossary of Terms

Accession – to assign a unique number to a specimen
Approved accrediting organization – an organization approved by CMS and state health departments to perform inspections
Adverse events – unexpected and/or improper occurrences during a medical procedure
Clinical stage – the part of the total testing process that occurs before the specimen is obtained and after the results are reported, and that happens outside the laboratory
Deemed status – status of an organization that has been approved to conduct surveys in lieu of CMS and state health departments, in recognition that its standards are at least as rigorous as those of CMS or state health departments
Delta failure – a difference between the present and past laboratory result that exceeds a predefined limit
Endpoint – the intended outcome of a clinical trial or experiment
HFMEA – Healthcare Failure Mode Effects Analysis; a proactive method to assess risk in a process
Hazard Analysis – the first step in the identification of risk in a process used by HFMEA
Histology block – part of the preparation of tissue for analysis by a pathologist, during which the tissue to be examined is embedded in paraffin
Identifier – a unique attribute used to identify a patient
Irretrievable specimens – samples that are difficult to obtain, such as bone marrow or surgical specimens
Laterality – pertaining to the left or right side of the body
Multidisciplinary approach – drawing from different areas in order to reach a solution
Null hypothesis – a statement that assumes no statistical significance in a set of observations
P-value – a calculation using observed experimental results to test a hypothesis
Peer inspection – an inspection conducted by colleagues within the same discipline
Percentile – a statistical measure that indicates where a distribution falls on a scale of one hundred
Q-probes – short-term studies sponsored by CAP that assess quality improvement efforts in laboratories
Q-Tracks – part of a CAP program to monitor laboratory indicators that includes peer comparison
Relative error – the amount of inaccuracy in a measurement expressed as a percentage
Relative frequency – the number of measured events divided by the total number of events
Scoring Matrix – the process of assigning a numeric value to a failure mode based on occurrence, severity and probability of detection of the failure
Sigma metrics – types of measurements used in Six Sigma methodology that assess the quality of a product
Six Sigma – a process improvement strategy that seeks to reduce defects to no more than 3.4 per million opportunities
Tcalc – a calculation performed to see if the null hypothesis is supported
Tcrit – the statistical point at which the null hypothesis is rejected
Two-tailed t-test – a statistical test used to determine if two sets of data are statistically different

WORKS CITED

1. Kohn, L. T., Corrigan, J. M. & Donaldson, M. S. To Err Is Human: Building a Safer Health System. (2002).
2. Clancy, C. M. Ten years after To Err is Human. Am J Med Qual 24, 525–528 (2009).
3. Agency for Healthcare Research and Quality (AHRQ). 2014 National Healthcare Quality and Disparities Report. 1–40 (2015).
4. Rivers, P. A., Dobalian, A. & Germinario, F. A. A review and analysis of the clinical laboratory improvement amendment of 1988: compliance plans and enforcement policy. Health Care Manage Rev 30, 93–102 (2005).
5. Code of Federal Regulations. Code of Federal Regulations at
6. History of The Joint Commission | Joint Commission. jointcommission.org at
7. Stern, C. 2015 National Patient Safety Goals. Nursing Made Incredibly Easy (2015).
8. Milam, J. D. & Carson, J. G. A brief history of the formation and transformation of the College of American Pathologists House of Delegates. Arch Pathol 132, 1936–1939 (2008).
9. international_cap_fact_sheet.pdf. cap.org at
10. Hamlin, W. B. Requirements for accreditation by the College of American Pathologists Laboratory Accreditation Program. Arch Pathol 123, 465–467 (1999).
11. About Us - College of American Pathologists. cap.org at
12. College of American Pathologists. Laboratory General Checklist.
13. College of American Pathologists - About CAP. cap.org at
14. Roback, J. D. Technical Manual. (American Association of Blood Banks (AABB), 2011).
15. Howanitz, P. J., Renner, S. W. & Walsh, M. K. Continuous wristband monitoring over 2 years decreases identification errors: a College of American Pathologists Q-Tracks Study. Arch Pathol 126, 809–815 (2002).
16. Grimm, E. et al. Blood bank safety practices: mislabeled samples and wrong blood in tube--a Q-Probes analysis of 122 clinical laboratories. Arch Pathol 134, 1108–1115 (2010).
17. Maskens, C. et al. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety. Transfusion (2013). doi:10.1111/trf.12240
18. Bixenstine, P. J. et al. Developing and pilot testing practical measures of preanalytic surgical specimen identification defects. Am J Med Qual 28, 308–314 (2013).
19. Nakhleh, R. E., Idowu, M. O., Souers, R. J., Meier, F. A. & Bekeris, L. G. Mislabeling of cases, specimens, blocks, and slides: a College of American Pathologists study of 136 institutions. Arch Pathol 135, 969–974 (2011).
20. Zarbo, R. J. et al. The Henry Ford Production System: reduction of surgical pathology in-process misidentification defects by bar code-specified work process standardization. Am. J. Clin. Pathol. 131, 468–477 (2009).
21. When a rose is not a rose. cap.org at