Please use this identifier to cite or link to this item: https://idr.l4.nitk.ac.in/jspui/handle/123456789/16504
Title: O-SegNet: Robust Encoder and Decoder Architecture for Objects Segmentation From Aerial Imagery Data
Authors: Eerapu K.K.
Lal S.
Narasimhadhan A.V.
Issue Date: 2021
Citation: IEEE Transactions on Emerging Topics in Computational Intelligence
Abstract: The segmentation of diverse roads and buildings from high-resolution aerial images is essential for applications such as urban planning, disaster assessment, traffic congestion management, and up-to-date road maps. A major challenge in this task is segmenting small, diversely shaped roads and buildings in scenes dominated by background. To address this challenge, we introduce O-SegNet, a robust encoder and decoder architecture for object segmentation from high-resolution aerial imagery data. The proposed O-SegNet architecture contains Guided-Attention (GA) blocks in the encoder and decoder that focus on salient features by representing the spatial dependencies between features of different scales. Further, the GA blocks guide the successive stages of the encoder and decoder by interrelating pixels of the same class. To emphasize relevant context, an attention mechanism is placed between the encoder and decoder after aggregating global context via an 8-level Pyramid Pooling Network (PPN). The qualitative and quantitative results of the proposed and existing semantic segmentation architectures are evaluated on the dataset provided by Kaiser et al. Further, we show that the proposed O-SegNet architecture outperforms state-of-the-art techniques by accurately preserving road connectivity and the structure of buildings.
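
The following is a minimal, illustrative PyTorch sketch of the kind of architecture the abstract describes: an encoder-decoder with attention-gated skip connections ("guided attention") and a pyramid-pooling bottleneck that aggregates global context before the decoder. All module names, channel widths, pooling bin sizes, and the specific attention formulation are assumptions made for illustration; this is not the authors' published O-SegNet implementation.

# Illustrative sketch only -- not the published O-SegNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAttention(nn.Module):
    # Gates skip-connection features with a coarser "guide" signal
    # (additive attention); an assumption about how GA blocks might work.
    def __init__(self, feat_ch, guide_ch, mid_ch):
        super().__init__()
        self.w_f = nn.Conv2d(feat_ch, mid_ch, 1)
        self.w_g = nn.Conv2d(guide_ch, mid_ch, 1)
        self.psi = nn.Conv2d(mid_ch, 1, 1)

    def forward(self, feat, guide):
        guide = F.interpolate(guide, size=feat.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.w_f(feat) + self.w_g(guide))))
        return feat * attn

class PyramidPooling(nn.Module):
    # Aggregates global context at several grid scales and fuses it back.
    # Eight levels to echo the abstract's "8 Level PPN"; bin sizes are assumed.
    def __init__(self, in_ch, levels=(1, 2, 3, 4, 5, 6, 7, 8)):
        super().__init__()
        self.levels = levels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, in_ch // len(levels), 1) for _ in levels)
        self.fuse = nn.Conv2d(in_ch * 2, in_ch, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        outs = [x]
        for lvl, conv in zip(self.levels, self.branches):
            p = F.adaptive_avg_pool2d(x, lvl)
            outs.append(F.interpolate(conv(p), size=(h, w), mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(outs, dim=1))

class OSegNetSketch(nn.Module):
    # Two-level encoder-decoder with GA-gated skips and a PPN bottleneck.
    def __init__(self, num_classes=2):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1, self.enc2, self.enc3 = block(3, 64), block(64, 128), block(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.ppn = PyramidPooling(256)
        self.ga2 = GuidedAttention(128, 256, 64)
        self.ga1 = GuidedAttention(64, 128, 32)
        self.dec2 = block(256 + 128, 128)
        self.dec1 = block(128 + 64, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.ppn(e3)  # global context at the bottleneck
        d2 = F.interpolate(b, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.dec2(torch.cat([d2, self.ga2(e2, b)], dim=1))
        d1 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([d1, self.ga1(e1, d2)], dim=1))
        return self.head(d1)  # per-pixel class logits

if __name__ == "__main__":
    logits = OSegNetSketch(num_classes=2)(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])

The sketch keeps only the structural ideas from the abstract: skip features are gated by attention computed from a deeper stage, and global context from the pyramid-pooling module conditions the decoder before upsampling.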
URI: https://doi.org/10.1109/TETCI.2020.3045485
http://idr.nitk.ac.in/jspui/handle/123456789/16504
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.
