Please use this identifier to cite or link to this item: https://idr.l4.nitk.ac.in/jspui/handle/123456789/10676
Full metadata record
dc.contributor.author: Narasimhan, M.G.
dc.contributor.author: Sowmya, Kamath S.
dc.date.accessioned: 2020-03-31T08:22:53Z
dc.date.available: 2020-03-31T08:22:53Z
dc.date.issued: 2018
dc.identifier.citation: Multimedia Tools and Applications, 2018, Vol. 77, No. 11, pp. 13173-13195 (en_US)
dc.identifier.uri: https://idr.nitk.ac.in/jspui/handle/123456789/10676
dc.description.abstract: The emergence of novel techniques for automatic anomaly detection in surveillance videos has significantly reduced the burden of manually processing large, continuous video streams. However, existing anomaly detection systems suffer from high false-positive rates and do not operate in real time, which makes them impractical to deploy. Furthermore, their predefined feature selection techniques limit their application to specific cases. To overcome these shortcomings, a dynamic anomaly detection and localization system is proposed, which uses deep learning to automatically learn relevant features. In this technique, each video is represented as a group of cubic patches for identifying local and global anomalies. A unique sparse denoising autoencoder architecture is used, which significantly reduces the computation time and cuts the number of false positives in frame-level anomaly detection by more than 2.5%. Experimental analysis on two benchmark datasets, the UMN dataset and the UCSD Pedestrian dataset, shows that our algorithm outperforms state-of-the-art models in terms of false-positive rate, while also showing a significant reduction in computation time. © 2017, Springer Science+Business Media, LLC. (en_US)
dc.title: Dynamic video anomaly detection and localization using sparse denoising autoencoders (en_US)
dc.type: Article (en_US)
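
Note: the following is a minimal illustrative sketch of the approach named in the abstract (a sparse denoising autoencoder scored on spatio-temporal cubic patches), not the authors' implementation. The patch dimensions, hidden size, noise level, sparsity weight, and thresholding scheme are all assumptions chosen for illustration.

```python
# Sketch: sparse denoising autoencoder over flattened cubic patches (PyTorch).
# NOT the paper's code; all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SparseDenoisingAE(nn.Module):
    def __init__(self, in_dim=10 * 10 * 5, hidden_dim=256):
        # in_dim: flattened 10x10x5 spatio-temporal patch (assumed size)
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))  # hidden code, pushed toward sparsity
        return self.decoder(h), h

def train_step(model, optimizer, patches, noise_std=0.1, sparsity_weight=1e-3):
    """One training step: reconstruct clean patches from noise-corrupted
    inputs, with an L1 penalty on hidden activations to encourage sparsity."""
    noisy = patches + noise_std * torch.randn_like(patches)  # denoising corruption
    recon, h = model(noisy)
    loss = nn.functional.mse_loss(recon, patches) + sparsity_weight * h.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def anomaly_score(model, patches):
    """Per-patch reconstruction error; patches scoring above a threshold
    tuned on normal-only training data would be flagged as anomalous."""
    with torch.no_grad():
        recon, _ = model(patches)
        return ((recon - patches) ** 2).mean(dim=1)
```

The intuition is standard for reconstruction-based anomaly detection: the autoencoder is trained only on normal activity, so anomalous patches reconstruct poorly and yield high error scores, which can be thresholded per frame for detection and per patch for localization.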
Appears in Collections: 1. Journal Articles

Files in This Item:
File: 4 Dynamic video anomaly.pdf (2.88 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.