Handshape recognition for Argentinian Sign Language using ProbSom
Automatic sign language recognition is an important topic within the areas of human-computer interaction and machine learning. On the one hand, it poses a complex challenge that requires the intervention of several knowledge areas, such as video processing, image processing, intelligent systems and linguistics. On the other hand, robust recognition of sign language could assist in the translation process and in the integration of hearing-impaired people. This paper offers two main contributions: first, the creation of a handshape database for Argentinian Sign Language (LSA), a topic that has barely been addressed so far; second, a technique for image processing, descriptor extraction and subsequent handshape classification using a supervised adaptation of self-organizing maps called ProbSom. This technique is compared to state-of-the-art approaches such as Support Vector Machines (SVM), Random Forests, and Neural Networks. The database contains 800 images covering 16 LSA handshape configurations, and is a first step towards building a comprehensive database of Argentinian signs. The ProbSom-based neural classifier, using the proposed descriptor, achieved an accuracy rate above 90%.
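To make the classification idea concrete, the following is a minimal, hypothetical sketch of a supervised SOM classifier in the spirit of ProbSom: an ordinary self-organizing map is first trained without labels on the descriptor vectors, then each neuron accumulates a class histogram from the labelled training data, normalized into a probability distribution, and a new descriptor is classified by the distribution stored at its best-matching unit. All function names, grid sizes, and schedules here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=4, cols=4, epochs=20, lr0=0.5, sigma0=2.0):
    """Unsupervised step: fit a small SOM to descriptor vectors.
    Learning rate and neighborhood radius decay linearly (an assumed schedule)."""
    n_features = data.shape[1]
    weights = rng.normal(size=(rows * cols, n_features))
    # Fixed 2-D grid coordinates used to compute neighborhood distances.
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1.0 - t)
            sigma = sigma0 * (1.0 - t) + 0.5
            # Best-matching unit: neuron closest to the input in feature space.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.linalg.norm(grid - grid[bmu], axis=1)
            h = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def label_neurons(weights, data, labels, n_classes):
    """Supervised step: each neuron accumulates a class histogram from the
    training samples it wins, normalized into a probability distribution."""
    counts = np.zeros((len(weights), n_classes))
    for x, y in zip(data, labels):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        counts[bmu, y] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1  # avoid division by zero for unused neurons
    return counts / totals

def classify(weights, neuron_probs, x):
    """Predict the most probable class stored at the best-matching unit."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    return int(np.argmax(neuron_probs[bmu]))
```

In a real pipeline the input vectors would be the handshape descriptors extracted from the segmented images; here any fixed-length feature vectors work, which is what makes the SOM stage independent of the descriptor design.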