This paper examines a new problem in large-scale streaming data: abnormality detection localised to the segments of a data stream. Unlike traditional abnormality detection methods, which typically build one unified model across the entire data stream, we propose that building multiple detection models, each focused on a different coherent section of the video stream, results in better detection performance. One key challenge is to segment the data into coherent sections when the number of segments is not known in advance and can vary greatly across cameras, so a principled approach is required. To this end, we first employ the recently proposed infinite hidden Markov model with collapsed Gibbs inference to automatically infer the data segmentation, and then construct abnormality detection models localised to each segment. Importantly, the use of this non-parametric method means that the number of models does not need to be specified a priori and can grow with the data. We demonstrate the superior performance of the proposed framework on real-world surveillance camera data collected over 14 days.
[ bib .pdf ]
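The pipeline described in the abstract (nonparametric segmentation followed by per-segment anomaly models) can be illustrated with a much simpler stand-in. The sketch below substitutes DP-means for the paper's infinite-HMM collapsed Gibbs sampler: like the iHMM, it does not fix the number of segments in advance and lets it grow with the data. The parameter `lam` and all names here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def dp_means_segment(x, lam):
    """Assign each 1-D observation to a cluster, creating a new cluster
    when no existing centre is within sqrt(lam). A crude stand-in for
    the paper's infinite-HMM segmentation: the number of clusters is
    not specified a priori and grows with the data."""
    centres, counts, labels = [], [], []
    for v in x:
        if centres:
            d = [(v - c) ** 2 for c in centres]
            k = int(np.argmin(d))
        if not centres or d[k] > lam:
            centres.append(float(v))          # open a new segment model
            counts.append(1)
            labels.append(len(centres) - 1)
        else:
            counts[k] += 1
            centres[k] += (v - centres[k]) / counts[k]  # running mean
            labels.append(k)
    return np.array(labels), np.array(centres)

def local_anomaly_scores(x, labels, centres):
    """Score each point by its squared distance to its own segment's
    centre: one localised model per segment, not one global model."""
    return (x - centres[labels]) ** 2

# Toy stream with two regimes and one injected anomaly.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(5.0, 0.1, 50)])
stream[75] = 8.0  # anomalous reading inside the second regime
labels, centres = dp_means_segment(stream, lam=10.0)
scores = local_anomaly_scores(stream, labels, centres)
```

Because the second regime gets its own model, the anomaly at index 75 stands out against that segment's centre; a single global model fit to the whole stream would have to explain both regimes at once and would blur this contrast.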