Semantic Segmentation with Context Encoding and Multi-Path Decoding.

Abstract
Semantic image segmentation aims to classify every pixel of a scene image into one of many classes. It implicitly involves object recognition, localization, and boundary delineation. In this paper, we propose a segmentation network called CGBNet to enhance the parsing results by context encoding and multi-path decoding. We first propose a context encoding module that generates context-contrasted local features to make use of both the informative context and the discriminative local information. This context encoding module greatly improves the segmentation performance, especially for inconspicuous objects. Furthermore, we propose a scale-selection scheme to selectively fuse the parsing results from different scales of features at every spatial position. It adaptively selects appropriate score maps from rich scales of features. To improve the parsing results near boundaries, we further propose a boundary delineation module that encourages the location-specific very-low-level features near the boundaries to take part in the final prediction and suppresses those far from the boundaries. Without bells and whistles, the proposed segmentation network achieves very competitive performance in terms of all three evaluation metrics consistently on four popular scene segmentation datasets: Pascal Context, SUN-RGBD, SIFT Flow, and COCO Stuff.
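The scale-selection scheme described above can be illustrated with a minimal NumPy sketch: given class score maps predicted from features at several scales (assumed already resized to a common resolution), a per-position weight over the scale axis selects which scale's prediction dominates at each pixel. The gating signal used here (the maximum class score at each position) is purely illustrative; in the paper the selection weights are learned.

```python
import numpy as np

def scale_selection_fusion(score_maps):
    """Fuse per-scale class score maps with per-position soft selection.

    score_maps: list of arrays, each of shape (C, H, W), giving class
    scores predicted from features at one scale (already resized to a
    common H x W resolution).
    """
    stacked = np.stack(score_maps, axis=0)            # (S, C, H, W)
    # Illustrative gating signal per scale and position: the maximum
    # class score at that position (a stand-in for learned weights).
    gate = stacked.max(axis=1)                        # (S, H, W)
    # Softmax over the scale axis yields selection weights per position.
    w = np.exp(gate - gate.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)                 # (S, H, W)
    # Weighted fusion of the score maps at every spatial position.
    fused = (stacked * w[:, None, :, :]).sum(axis=0)  # (C, H, W)
    return fused
```

At positions where one scale produces a confidently high score, its weight approaches 1 and its prediction dominates the fused map; elsewhere the scales blend smoothly.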
Reference Key: ding2020semanticieee
Authors: Ding, Henghui; Jiang, Xudong; Shuai, Bing; Liu, Ai Qun; Wang, Gang
Journal: IEEE Transactions on Image Processing
Year: 2020
DOI: 10.1109/TIP.2019.2962685