Super-Resolution PET Imaging Using Convolutional Neural Networks.
2020
Abstract
Positron emission tomography (PET) suffers from severe resolution limitations, which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both the simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner, a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected the relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins, both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise ratio, structural similarity index, and contrast-to-noise ratio).
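The exact network designs are given in the paper; as a rough illustration only, a very deep (20-layer) CNN that accepts the four inputs named in the abstract (LR PET, HR MR, radial and axial location maps) might be sketched in PyTorch as below. The 64-channel layer width, 3x3 kernels, and residual output are assumptions typical of VDSR-style networks, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class VeryDeepSRNet(nn.Module):
    """Illustrative 20-layer CNN for PET super-resolution.

    The 4 input channels are assumed to be: LR PET, HR MR, and
    radial/axial location maps that encode the spatially variant
    blur. All hyperparameters here are placeholders.
    """
    def __init__(self, in_channels=4, features=64, depth=20):
        super().__init__()
        layers = [nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, 1, 3, padding=1)]  # single-channel HR PET estimate
        self.net = nn.Sequential(*layers)

    def forward(self, lr_pet, mr, radial, axial):
        # Stack all inputs along the channel dimension.
        x = torch.cat([lr_pet, mr, radial, axial], dim=1)
        # Residual learning (an assumption): predict the HR-minus-LR
        # difference and add it back to the LR PET input.
        return lr_pet + self.net(x)

# Example usage on a single 128x128 patch
model = VeryDeepSRNet()
patch = torch.randn(1, 1, 128, 128)
hr_estimate = model(patch, patch.clone(), patch.clone(), patch.clone())
```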
| Reference Key | song2020superresolutionieee |
|---|---|
| Authors | Song, Tzu-An; Chowdhury, Samadrita Roy; Yang, Fan; Dutta, Joyita |
| Journal | IEEE Transactions on Computational Imaging |
| Year | 2020 |
| DOI | 10.1109/tci.2020.2964229 |
| URL | |
| Keywords | |

Use this key to autocite the article in a manuscript managed with SciMatic Manuscript Manager or Thesis Manager.