Automated Meningioma Segmentation in Multiparametric MRI: Comparable Effectiveness of a Deep Learning Model and Manual Segmentation.
2020
Abstract
Volumetric assessment of meningiomas represents a valuable tool for treatment planning and evaluation of tumor growth, as it enables a more precise assessment of tumor size than conventional diameter methods. This study established a dedicated meningioma deep learning model based on routine magnetic resonance imaging (MRI) data and evaluated its performance for automated tumor segmentation.

The MRI datasets included T1-weighted, T2-weighted, T1-weighted contrast-enhanced (T1CE), and FLAIR sequences of 126 patients with intracranial meningiomas (grade I: 97, grade II: 29). For automated segmentation, an established deep learning model architecture (3D deep convolutional neural network, DeepMedic, BioMedIA) operating on all four MR sequences was used. Segmentation comprised the following two components: (i) contrast-enhancing tumor volume in T1CE and (ii) total lesion volume (union of the lesion volume in T1CE and FLAIR, including solid tumor parts and surrounding edema). Preprocessing of the imaging data included registration, skull stripping, resampling, and normalization. After training the deep learning model on manual segmentations by 2 independent readers from 70 patients (training group), the algorithm was evaluated on 56 patients (validation group) by comparing automated segmentations to ground-truth manual segmentations performed by 2 experienced readers in consensus.

Of the 56 meningiomas in the validation group, 55 were detected by the deep learning model. In these patients, comparison of the deep learning model and manual segmentations revealed average Dice coefficients of 0.91 ± 0.08 for contrast-enhancing tumor volume and 0.82 ± 0.12 for total lesion volume. In the training group, interreader variabilities of the 2 manual readers were 0.92 ± 0.07 for contrast-enhancing tumor and 0.88 ± 0.05 for total lesion volume.

Deep learning-based automated segmentation yielded high segmentation accuracy, comparable to manual interreader variability.
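The abstract's evaluation metric, the Dice coefficient, and its "total lesion volume" definition (the union of the T1CE and FLAIR lesion masks) can be sketched in a few lines. This is a minimal illustration, not the study's code; the function names and mask arrays are hypothetical.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

def total_lesion_mask(t1ce_mask: np.ndarray, flair_mask: np.ndarray) -> np.ndarray:
    """Total lesion volume as defined in the study: the voxel-wise union
    of the T1CE lesion mask and the FLAIR lesion mask."""
    return np.logical_or(t1ce_mask.astype(bool), flair_mask.astype(bool))
```

A Dice coefficient of 1.0 indicates perfect voxel-wise overlap between automated and manual segmentations; values such as the reported 0.91 ± 0.08 for contrast-enhancing tumor indicate agreement on the order of interreader variability.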
| Reference Key | laukamp2020automatedclinical |
|---|---|
| Authors | Laukamp, Kai Roman; Pennig, Lenhard; Thiele, Frank; Reimer, Robert; Görtz, Lukas; Shakirin, Georgy; Zopfs, David; Timmer, Marco; Perkuhn, Michael; Borggrefe, Jan |
| Journal | Clinical Neuroradiology |
| Year | 2020 |
| DOI | 10.1007/s00062-020-00884-4 |
| URL | |
| Keywords | |