Orthonotes
by the.bonestories
v3.0 Fusion

Langenskiöld Classification for Blount Disease: Is It Reliable?

Indian Journal of Orthopaedics | 2019 | Erkus S, Turgut A, Kalenderer O


Source
PubMed
Type
Original Article
Evidence
Unclassified

Abstract

Conflict of interest statement: There are no conflicts of interest.

J Pediatr Orthop B. 2020 Jul;29(4):311-316. doi: 10.1097/BPB.0000000000000692. Assessment of the reliability and reproducibility of the Langenskiöld classification in Blount's disease. du Plessis J(1), Firth GB(2), Robertson A(3). Author information: (1) University of the Witwatersrand, Johannesburg. (2) Chris Hani Baragwanath Academic Hospital, Soweto. (3) Charlotte Maxeke Johannesburg Academic Hospital, Johannesburg, South Africa.

The Langenskiöld classification is the most commonly used classification system for the radiological features of Blount's disease. Although only a single study on its interobserver variability was found, and none on its intraobserver variability, it is commonly used for prognostication and for guiding management decisions. The aim of this study was to determine the reliability and reproducibility of the Langenskiöld classification. A retrospective review was done of radiographs of patients treated for infantile and juvenile Blount's disease at Chris Hani Baragwanath Academic Hospital from 2006 to 2016. Seventy radiographs of acceptable quality were reviewed and staged on two occasions according to the Langenskiöld classification by three orthopaedic consultants and three senior orthopaedic surgery residents. Pearson correlation coefficients, percentage agreement, and κ statistics were used to evaluate both reliability and reproducibility. Of the 70 images staged, only two (2.9%) were staged identically by all six observers, and 20 (28.6%) differed by a single stage. The consultants staged 17 (24.3%) images identically, whereas the residents staged 12 (17.1%) identically. The overall κ for all six observers showed fair agreement at 0.24; the consultants again had a slightly higher κ than the residents (0.25 vs. 0.24). Reproducibility amongst all observers was fair, with a κ of 0.38; the consultants had a higher mean score (0.48) than the residents (0.26). Overall reliability and reproducibility amongst the six observers were only fair. We recommend that the Langenskiöld classification be used with caution for prognostication and management planning, and when interpreting any research relying on this classification.

Level of evidence: Level III, diagnostic study. DOI: 10.1097/BPB.0000000000000692
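The study's headline numbers are multi-rater κ statistics. As a rough illustration of how such an agreement coefficient is computed, below is a minimal sketch of Fleiss' kappa in Python; the `fleiss_kappa` function and the example stage counts are illustrative assumptions, not the authors' code or data.

```python
# Sketch of Fleiss' kappa for agreement among multiple raters assigning
# categorical stages (e.g. Langenskiöld stages). Illustrative only; the
# example counts below are invented, not the study's data.

def fleiss_kappa(ratings, n_categories):
    """ratings: one row per subject, giving how many raters chose each
    category; every row must sum to the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # Overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]
    # Observed pairwise agreement for each subject
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    P_bar = sum(P_i) / n_subjects      # mean observed agreement
    P_e = sum(p * p for p in p_j)      # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Example: 5 radiographs, 6 raters, 3 possible stages
# (each row holds per-stage rater counts summing to 6).
example = [
    [6, 0, 0],
    [3, 3, 0],
    [0, 4, 2],
    [0, 6, 0],
    [2, 2, 2],
]
print(round(fleiss_kappa(example, 3), 3))  # → 0.353
```

Values near 0.2-0.4 are conventionally read as "fair" agreement, which is the band the study's overall κ of 0.24 falls into.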
