Background Subtraction for Time of Flight Imaging

Authors

  • Javier Giacomantone, Institute of Research in Computer Science, School of Computer Science, University of La Plata, Argentina
  • María Lucía Violini, Institute of Research in Computer Science, School of Computer Science, University of La Plata, Argentina
  • Luciano Lorenti, Institute of Research in Computer Science, School of Computer Science, University of La Plata, Argentina

DOI:

https://doi.org/10.24215/16666038.17.e18

Keywords:

industrial TOF cameras, machine vision, pattern recognition, support vector machines

Abstract

A time-of-flight camera provides two types of images simultaneously: depth and intensity. In this paper, a computational method for background subtraction that combines both images, or fast sequences of images, is proposed. The background model is based on unbalanced or semi-supervised classifiers, in particular one-class support vector machines. A brief review of one-class support vector machines is first given. A method that combines the range and intensity data in two operational modes is then described. Finally, experimental results are presented and discussed.
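The abstract outlines the core idea: model the background with a one-class support vector machine trained on joint range and intensity measurements, and label pixels the model rejects as foreground. As a rough illustration only, the Python sketch below uses scikit-learn's OneClassSVM on per-pixel (depth, intensity) features; the single global model, the helper function names, and the nu and gamma values are assumptions made for this example, not the authors' implementation or parameters.

    # Illustrative one-class SVM background model for ToF data (not the paper's exact method).
    import numpy as np
    from sklearn.svm import OneClassSVM

    def stack_features(depth, intensity):
        # Pair each pixel's range and intensity value into an (N, 2) feature matrix.
        return np.column_stack((depth.ravel(), intensity.ravel()))

    def train_background_model(depth_frames, intensity_frames, nu=0.05, gamma=0.5):
        # Fit the one-class SVM on background-only frames; nu and gamma are illustrative.
        features = np.vstack([stack_features(d, i)
                              for d, i in zip(depth_frames, intensity_frames)])
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
        model.fit(features)
        return model

    def segment_foreground(model, depth, intensity):
        # OneClassSVM.predict returns +1 for inliers (background) and -1 for outliers.
        labels = model.predict(stack_features(depth, intensity))
        return (labels == -1).reshape(depth.shape)

In this simplified setting, the classifier is fit on background-only frames and pixels classified as outliers at prediction time are taken as foreground.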

Published

2017-10-01

How to Cite

Giacomantone, J., Violini, M. L., & Lorenti, L. (2017). Background Subtraction for Time of Flight Imaging. Journal of Computer Science and Technology, 17(02), e18. https://doi.org/10.24215/16666038.17.e18

Issue

Vol. 17 No. 02 (2017)
Section

Invited Articles