Joint motion boundary detection and CNN-based feature visualization for video object segmentation

Kamranian Z, Nilchi ARN, Sadeghian H, Tombari F, Navab N (2020)


Publication Type: Journal article

Publication year: 2020

Journal: Neural Computing & Applications

Journal Volume: 32

Pages Range: 4073-4091

Journal Issue: 8

DOI: 10.1007/s00521-019-04448-7

Abstract

This paper presents a video object segmentation method that jointly uses motion boundary and convolutional neural network (CNN)-based class-level maps to carry out co-segmentation of the frames. The key characteristic of the proposed approach is the combination of these two sources of information to create initial object and background regions, which are then employed within the co-segmentation energy function. The motion boundary map detects the areas containing object movement, while the CNN-based class saliency map identifies the regions with the greatest impact on the network's correct classification. The proposed approach can be applied to unconstrained natural videos that include changes in an object's appearance, a rapidly moving background, non-rigid object deformation, rapid camera motion and even the presence of a static object. Experimental results on two challenging datasets (DAVIS and SegTrack v2) demonstrate the competitive performance of the proposed method compared with state-of-the-art approaches.
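
As a rough illustration of the two cues named in the abstract (not the authors' implementation), the Python sketch below computes a gradient-based class saliency map from a CNN and a motion boundary map from an optical-flow field, then fuses them into coarse object and background seed regions. The ResNet-50 backbone, the random stand-in inputs and the thresholds are illustrative assumptions.

import numpy as np
import torch
import torchvision.models as models

def class_saliency_map(frame_tensor, model):
    # Gradient of the top class score w.r.t. the input frame (shape 1 x 3 x H x W),
    # in the spirit of gradient-based CNN feature visualization.
    frame_tensor = frame_tensor.clone().requires_grad_(True)
    scores = model(frame_tensor)
    scores[0, scores.argmax()].backward()
    # Max over colour channels gives one saliency value per pixel.
    return frame_tensor.grad.abs().max(dim=1)[0].squeeze(0).numpy()

def motion_boundary_map(flow):
    # Magnitude of the spatial gradients of an optical-flow field (shape H x W x 2);
    # high values indicate motion boundaries.
    du_y, du_x = np.gradient(flow[..., 0])
    dv_y, dv_x = np.gradient(flow[..., 1])
    return np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)

def initial_regions(saliency, boundaries, s_th=0.5, m_th=0.5):
    # Illustrative fusion rule: pixels that are both salient and near motion
    # boundaries seed the object; pixels scoring low on both seed the background.
    s = saliency / (saliency.max() + 1e-8)
    m = boundaries / (boundaries.max() + 1e-8)
    object_seed = (s > s_th) & (m > m_th)
    background_seed = (s < 0.1) & (m < 0.1)
    return object_seed, background_seed

if __name__ == "__main__":
    # weights=None keeps the example self-contained; a pretrained classifier
    # would be used in practice.
    model = models.resnet50(weights=None).eval()
    frame = torch.rand(1, 3, 224, 224)      # stand-in for a video frame
    flow = np.random.randn(224, 224, 2)     # stand-in for estimated optical flow
    obj, bg = initial_regions(class_saliency_map(frame, model),
                              motion_boundary_map(flow))
    print("object seeds:", int(obj.sum()), "background seeds:", int(bg.sum()))

In the paper these initial regions subsequently enter a co-segmentation energy function defined across frames; the sketch stops at seed extraction.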

How to cite

APA:

Kamranian, Z., Nilchi, A.R.N., Sadeghian, H., Tombari, F., & Navab, N. (2020). Joint motion boundary detection and CNN-based feature visualization for video object segmentation. Neural Computing & Applications, 32(8), 4073-4091. https://doi.org/10.1007/s00521-019-04448-7

MLA:

Kamranian, Zahra, et al. "Joint motion boundary detection and CNN-based feature visualization for video object segmentation." Neural Computing & Applications 32.8 (2020): 4073-4091.
