Fast Moving Object Recognition

Abstract

Brain-Like Artificial Intelligence (BLAI) was pioneered by Prof. Nikola Kasabov; here it is applied to a specific application: the recognition of fast moving objects.

Moving object recognition is a challenging problem in computational intelligence. A fast moving object is one that cannot easily be captured by conventional cameras in real time; typical examples include fast moving cars, flying rockets, bouncing ping-pong balls, tennis balls, and balancing pencils. Such moving objects cannot be recognised without a suitable algorithm and an effective software system capable of learning and recognising patterns in complex Spatio- and Spectro-Temporal Data (SSTD). Deep learning has advanced machine learning in computer vision from end to end. In this paper, we propose a new methodology for deep learning of video data and for accurate classification of the moving objects captured in the data using an evolving spiking neural network (eSNN). Taking video footage containing moving objects as input, we first convolve each video frame with a Gaussian filter as the initial step of deep learning; adaptive downsampling then shrinks each frame in both width and height; spike encoding of these features is subsequently applied over time to identify the changes in each image block. Finally, the spikes of each 10×10 block of the video frames are used as features and imported into NeuCube for training and testing, with a dynamic evolving spiking neural network as the classifier of object movement. Compared with other deep neural networks and other machine learning techniques, our NeuCube model has performed better across various scenes for fast moving object recognition on spatio- and spectro-temporal video data, achieving an accuracy of about 90%. It can be used for both high- and low-resolution videos, and it can be further trained on new data in an online, incremental mode.
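The per-frame preprocessing described above (Gaussian filtering followed by downsampling) can be outlined in a few lines of code. The following is only a minimal sketch, assuming greyscale frames stored as NumPy arrays; the function name, the smoothing sigma, and the block-averaging downsampler are illustrative assumptions rather than the exact implementation used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_frame(frame, sigma=1.0, target_size=(72, 128)):
    """Gaussian-smooth a frame, then shrink it in both width and height."""
    smoothed = gaussian_filter(frame.astype(np.float32), sigma=sigma)
    # Simple block-averaging downsample; the paper's adaptive downsampling
    # may differ, this is only a stand-in.
    h, w = smoothed.shape
    th, tw = target_size
    bh, bw = h // th, w // tw
    cropped = smoothed[:th * bh, :tw * bw]
    return cropped.reshape(th, bh, tw, bw).mean(axis=(1, 3))
```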

FIGURE 1. Proposed research structure in Stage 1: videos with moving objects are captured, and spike-event-based frames are generated and saved into CSV files. These CSV files are then used as data sets for NeuCube.

FIGURE 2. Video footage with and without motion: (a) no motion detected; (b) spike changes caused by motion, generating 'on' and 'off' events.
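As a rough illustration of the 'on' and 'off' events in Figure 2, the sketch below derives spikes from the intensity difference between two consecutive preprocessed frames: pixels that brighten beyond a threshold fire an 'on' event, and pixels that darken fire an 'off' event. The threshold value is an assumption chosen for illustration, not a parameter taken from the paper.

```python
import numpy as np

def encode_spikes(prev_frame, curr_frame, threshold=10.0):
    """Return boolean 'on' and 'off' event maps for one frame transition."""
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    on_events = diff > threshold     # brightness increased: 'on' spike
    off_events = diff < -threshold   # brightness decreased: 'off' spike
    return on_events, off_events
```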

FIGURE 3. Video frames with moving cars: (a) original video frame with a size of 1280×720; (b) the corresponding frame from the DVS simulator.

FIGURE 4. The spike-event-based frames are divided into 100 blocks of 128×72 pixels each; a mean filter is applied to evaluate the pixel output of each block.
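The block aggregation shown in Figures 1 and 4 can be sketched as follows: each 1280×720 spike-event frame is split into a 10×10 grid of 128×72-pixel blocks, a mean filter summarises each block, and one 100-value row per frame is written to a CSV file that can be loaded into NeuCube. The file name and helper functions here are hypothetical, and the exact CSV layout expected by NeuCube may differ.

```python
import csv
import numpy as np

def block_features(event_frame, grid=(10, 10)):
    """Mean spike activity of each block, flattened to a 100-value feature vector."""
    h, w = event_frame.shape                  # e.g. 720 x 1280
    gh, gw = grid
    bh, bw = h // gh, w // gw                 # e.g. 72 x 128 pixels per block
    blocks = event_frame[:gh * bh, :gw * bw].reshape(gh, bh, gw, bw)
    return blocks.mean(axis=(1, 3)).ravel()

def write_sample(event_frames, path="sample.csv"):
    """Write one row of block features per frame (one CSV file per video sample)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for event_frame in event_frames:
            writer.writerow(block_features(np.asarray(event_frame, dtype=np.float32)))
```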

Related Papers and Benchmarking

The proposed methods and systems, when compared with traditional statistical and machine learning methods, showed superior results in the following aspects:

  1. Better data analysis and classification/regression accuracy (by 10 to 40%);
  2. Better visualisation of the created models, with a possible use of VR;
  3. Better understanding of the data and the processes that are measured;
  4. Enabling new information and knowledge discovery through meaningful interpretation of the models.

See also some of the related papers:

Kasabov, N. K. (2014). NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks, 52, 62-76.

Kasabov, N., Scott, N. M., Tu, E., Marks, S., Sengupta, N., Capecci, E., Othman, M., Gholami Doborjeh, M., Murli, N., Hartono, R., Espinosa-Ramos, J. I., Zhou, L., Alvi, F., Wang, G., Taylor, D., Feigin, V., Gulyaev, S., Mahmoud, M., Hou, Z. G., Yang, J. (2016). Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: design methodology and selected applications. Neural Networks, 78, 1-14.


R&D System

For this project, an R&D system has been developed based on NeuCube. The system can be obtained for R&D purposes subject to a licensing agreement.


Developers

The developers of this project are:

Associate Professor Wei Qi Yan

Wei Cui