Detecting Closely Spaced and Occluded Pedestrians Using Specialized Deep Models for Counting

Ghosh S, Amon P, Hutter A, Kaup A (2017)


Publication Language: English

Publication Type: Conference Contribution

Publication year: 2017

Event location: St. Petersburg, Florida, US

ISBN: 978-1-5386-0462-5

DOI: 10.1109/VCIP.2017.8305064

Abstract

Pedestrian detection is an important task in surveillance applications and becomes particularly challenging when pedestrians are close together or occlude one another. This paper presents a novel approach to detecting pedestrians in such challenging scenarios. A deep convolutional neural network trained for counting is specialized to count a single pedestrian; the feature extractor learned in this way is then exploited to detect one pedestrian at a time iteratively. Neither the base counting model nor the specialization requires extensive annotation effort, since only a single number at the image level is used as supervision. On pedestrian datasets with occlusion, our method improved average miss rates compared to other methods for handling occlusion.
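The "one pedestrian at a time" inference loop described in the abstract can be illustrated with a minimal sketch. Note the assumptions here are ours, not the paper's: `response_map` stands in for an activation map produced by the counting network's feature extractor, `predicted_count` for the output of the specialized counting head, and the greedy peak-pick-and-suppress loop is only one plausible way to realize iterative single-pedestrian detection.

```python
def detect_iteratively(response_map, predicted_count, suppress_radius=1):
    """Hypothetical sketch: pick the strongest response, suppress its
    neighborhood, and repeat once per counted pedestrian."""
    h, w = len(response_map), len(response_map[0])
    # Work on a copy so the caller's map is untouched.
    m = [row[:] for row in response_map]
    detections = []
    for _ in range(predicted_count):
        # Locate the current global maximum (one pedestrian).
        _, y, x = max((m[yy][xx], yy, xx) for yy in range(h) for xx in range(w))
        detections.append((y, x))
        # Suppress a neighborhood so the next iteration finds a new peak.
        for dy in range(-suppress_radius, suppress_radius + 1):
            for dx in range(-suppress_radius, suppress_radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    m[yy][xx] = float("-inf")
    return detections

# Toy 3x4 response map with two well-separated peaks.
toy_map = [
    [0.1, 0.9, 0.1, 0.0],
    [0.1, 0.2, 0.1, 0.8],
    [0.0, 0.1, 0.0, 0.1],
]
print(detect_iteratively(toy_map, 2))  # -> [(0, 1), (1, 3)]
```

The loop count comes from the counting model rather than from box-level annotations, which is consistent with the abstract's point that only image-level count labels are needed for training.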


How to cite

APA:

Ghosh, S., Amon, P., Hutter, A., & Kaup, A. (2017). Detecting Closely Spaced and Occluded Pedestrians Using Specialized Deep Models for Counting. In Proceedings of the IEEE Visual Communications and Image Processing (VCIP). St. Petersburg, Florida, US.

MLA:

Ghosh, Sanjukta, et al. "Detecting Closely Spaced and Occluded Pedestrians Using Specialized Deep Models for Counting." Proceedings of the IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, Florida, 2017.
