Acta orthopaedica | 2017 | Kleinlugtenbelt YV, Groen SR, Ham SJ, Kloen P
J Orthop Sci. 2024 Jan;29(1):133-137. doi: 10.1016/j.jos.2022.11.010. Epub 2022 Nov 29.

Distal radius fractures: Classifications concordance among orthopedic residents on a teaching hospital.

Peña-Martínez VM(1), Villanueva-Guerra E(1), Tamez-Mata Y(1), Simental-Mendía M(1), Gallardo-Madrid A(1), Blázquez-Saldaña J(1), Acosta-Olivo C(2).

Author information:
(1) Universidad Autonoma de Nuevo Leon, Orthopedic Trauma Service, University Hospital "Dr. José Eleuterio González", School of Medicine, Monterrey, Mexico.
(2) Universidad Autonoma de Nuevo Leon, Orthopedic Trauma Service, University Hospital "Dr. José Eleuterio González", School of Medicine, Monterrey, Mexico. Electronic address: dr.carlosacosta@gmail.com.

BACKGROUND: Several classification systems have been developed to support orthopedic surgeons in the diagnosis, treatment, or prognosis of distal radius fracture (DRF). However, the best classification system for this fracture remains controversial. We aimed to determine the reliability of three different DRF classifications among orthopedists in training (medical residents).

METHODS: Orthopedic residents (n = 22) evaluated thirty cases of DRF in anteroposterior and lateral projections at three different time points (0, 6, and 12 months). Each radiograph was classified with three different systems: Frykman, AO/OTA, and Jupiter-Fernandez. All assessments were blinded to the investigators. Inter- and intra-observer reliability was evaluated using Cohen's kappa coefficient. An additional analysis was performed for simpler sub-classifications of the AO/OTA system (27, 9, or 3 groups).

RESULTS: Inter-observer agreement for the AO/OTA, Frykman, and Jupiter-Fernandez classifications was slight (k = 0.15), fair (k = 0.31), and fair (k = 0.30), respectively. Intra-observer agreement showed similar results: AO/OTA, k = 0.14; Frykman, k = 0.28; and Jupiter-Fernandez, k = 0.28. When the AO/OTA classification was simplified (9 or 3 descriptions), inter-observer agreement improved from slight (k = 0.16) to fair (k = 0.21 and k = 0.30, respectively). A similar improvement from slight (k = 0.14) to fair (k = 0.32 and k = 0.21) was detected for intra-observer agreement.

CONCLUSIONS: The more complex the DRF classification system, the more difficult it is to reach reliable inter- and intra-observer agreement among orthopedic trainees. Senior residents did not necessarily show greater kappa values in DRF classifications.

Copyright © 2022 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

DOI: 10.1016/j.jos.2022.11.010
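The reliability figures above are Cohen's kappa values, which correct raw percent agreement for the agreement expected by chance: k = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the chance agreement implied by each rater's category frequencies. As a minimal sketch (the ratings below are hypothetical, not the study's data):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters classifying the same items.

    p_o: fraction of items on which the raters agree.
    p_e: chance agreement, i.e. the sum over categories of the
         product of each rater's marginal frequency for that category.
    """
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in c1.keys() | c2.keys())
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical residents classifying five fractures into types A/B/C.
resident1 = ["A", "A", "B", "B", "C"]
resident2 = ["A", "B", "B", "B", "C"]
kappa = cohen_kappa(resident1, resident2)  # 0.6875
```

The verbal labels in the abstract follow the conventional Landis and Koch bands: 0.00-0.20 "slight", 0.21-0.40 "fair", 0.41-0.60 "moderate", 0.61-0.80 "substantial", and 0.81-1.00 "almost perfect" agreement.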