The Pan African medical journal | 2018 | Gumustas S, Tosun HB, Isyar M, Serbest S
[Indexed for MEDLINE]

Conflict of interest statement: The authors declare no competing interests.

12. BMC Musculoskelet Disord. 2022 Jan 25;22(Suppl 2):1063. doi: 10.1186/s12891-022-05007-3.

Femoral neck fracture: the reliability of radiologic classifications.
Cazzato G(1)(2), Oliva MS(2), Masci G(2)(3), Vitiello R(2), Smimmo A(4), Matrangolo MR(2), Palmacci O(2), D'Adamio S(1)(2), Ziranu A(2).

Author information:
(1)RomaPRO Center for Hip and Knee Arthroplasty, Polo Sanitario San Feliciano, Rome, Italy.
(2)Department of Orthopaedics and Traumatology, Fondazione Policlinico Universitario A. Gemelli IRCCS - Università Cattolica del Sacro Cuore, Rome, Italy.
(3)UOC Orthopaedic and Traumatology, Children's Hospital Bambino Gesù, Rome, Italy.
(4)Department of Orthopaedics and Traumatology, Fondazione Policlinico Universitario A. Gemelli IRCCS - Università Cattolica del Sacro Cuore, Rome, Italy. alessandro.smimmo01@icatt.it.

BACKGROUND: Femoral neck fractures (FNF) are among the most common injuries in the elderly. A valid radiographic classification system is essential for choosing the correct treatment and for facilitating communication among surgeons. This study aims to evaluate the reliability of the 2018 AO/OTA classification, the simplified AO/OTA classification, and the Garden classification.

METHODS: Six orthopaedic surgeons, divided into three groups by trauma experience, evaluated 150 blinded anteroposterior and laterolateral radiographs of FNF using the Garden, 2018 AO/OTA, and simplified AO/OTA classifications. One month later, the radiographs were renumbered and each observer performed a second evaluation. Kappa statistics were used to determine the reliability of the classifications: Cohen's kappa for intra- and interobserver reliability, and Fleiss' kappa for multi-rater agreement.
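The Cohen's kappa used in the methods corrects raw rater agreement for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). As a minimal sketch (not the authors' analysis code; the rater lists are invented for illustration), two raters' labels for the same set of radiographs could be compared like this:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance, computed
    from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b), "raters must label the same items"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the marginal frequency of each label.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Garden grades assigned by two observers to five fractures:
k = cohen_kappa(["I", "I", "III", "IV", "IV"],
                ["I", "II", "III", "IV", "IV"])
print(f"kappa = {k:.2f}")
```

Fleiss' kappa, used for the multi-rater agreement in the study, generalises the same chance-correction idea to more than two raters.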
RESULTS: Interobserver reliability (kappa) for the Garden classification ranged from 0.28 to 0.73, with an average of 0.49. The 2018 AO/OTA classification showed reliability from 0.2 to 0.42, with an average of 0.30. The simplified AO/OTA classification showed reliability from 0.38 to 0.58, with an average of 0.48. Intraobserver reliability for the Garden classification ranged from 0.48 to 0.79, with an average of 0.63. The 2018 AO/OTA classification showed reliability from 0.2 to 0.64, with an average of 0.5. The simplified AO/OTA classification showed reliability from 0.4 to 0.75, with an average of 0.61.

CONCLUSION: The revised 2018 AO/OTA classification simplified the previous classification of intracapsular fractures but remains unreliable, with only fair interobserver reliability. The simplified AO/OTA classification shows reliability similar to the Garden classification, with moderate interobserver agreement. Surgeon experience did not appear to improve reliability. No classification proved superior in terms of reliability.

© 2022. The Author(s). DOI: 10.1186/s12891-022-05007-3 PMCID: PMC8787877
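The "fair" and "moderate" labels in the conclusion follow the conventional Landis & Koch (1977) bands for kappa. A small sketch applying those bands to the average interobserver values reported above (the band edges are the standard published thresholds, not from this article):

```python
def landis_koch(kappa):
    """Map a kappa value to its Landis & Koch (1977) agreement category."""
    if kappa < 0:
        return "poor"
    # (lower bound of band, category name), in ascending order.
    bands = [(0.00, "slight"), (0.21, "fair"), (0.41, "moderate"),
             (0.61, "substantial"), (0.81, "almost perfect")]
    label = bands[0][1]
    for lower, name in bands:
        if kappa >= lower:
            label = name
    return label

# Average interobserver kappas reported in the RESULTS section:
for name, k in [("Garden", 0.49),
                ("2018 AO/OTA", 0.30),
                ("Simplified AO/OTA", 0.48)]:
    print(f"{name}: kappa = {k:.2f} -> {landis_koch(k)}")
```

Running this reproduces the conclusion's wording: the 2018 AO/OTA average (0.30) falls in the "fair" band, while the Garden (0.49) and simplified AO/OTA (0.48) averages are "moderate".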