Journal of Computer Science and Technology https://journal.info.unlp.edu.ar/JCST <p>The Journal of Computer Science and Technology (JCS&amp;T) is a semiannual, open access, peer-reviewed international journal promoting the dissemination of original research and technological implementation experiences in the areas of Computer Science, Engineering, and Information Systems. JCS&amp;T is aimed at the general public interested in research, development and applications in the aforementioned areas, providing a common forum to encourage interaction among the members of these communities.</p> <p>Specific topics of interest include: Intelligent Systems; Artificial Intelligence; Semantic Web; Algorithms; Cluster, Grid, Cloud &amp; Accelerator Computing; Fault-Tolerant Systems; Parallel Architectures; Computer Graphics; Virtual Reality; Human-Computer Interfaces; Image Processing; Technology &amp; Education; E-Learning; M-Learning; Software Engineering; Quality and Software Metrics; Real-Time Systems; Signal Processing; Data Bases; Data Mining; Big Data; Operating Systems; Network Architecture and Configuration; Security; Industrial Systems; Robotics; E-Government; Modelling &amp; Simulation and Computer Science Applications.</p> en-US <div class="entry-content"> <h4><strong>Copyright and Licensing</strong></h4> <p>Articles accepted for publication will be licensed under the&nbsp;<a href="https://creativecommons.org/licenses/by-nc/4.0/" target="_blank" rel="noopener">Creative Commons BY-NC</a>. Authors must sign a non-exclusive distribution&nbsp;<a href="http://journal.info.unlp.edu.ar/public/journals/2/JCST-Agreement.pdf">agreement</a>&nbsp;after article acceptance.</p> </div> journal@lidi.info.unlp.edu.ar (Editorial Team) journal@lidi.info.unlp.edu.ar (Editorial Team) Thu, 29 Oct 2020 20:45:22 +0000 OJS 3.2.1.1 http://blogs.law.harvard.edu/tech/rss 60 Experimental Framework to Simulate Rescue Operations after a Natural Disaster https://journal.info.unlp.edu.ar/JCST/article/view/1432 <p style="margin-bottom: 0cm; line-height: 100%;">Computational simulation is a powerful tool for the performance evaluation of computational systems. It is useful for capacity planning of data center clusters, for obtaining profiling reports of software applications, and for detecting bottlenecks. It has been used in research areas such as large-scale Web search engines, natural disaster evacuations, computational biology, and human behavior and trends, among many others. However, properly tuning the parameters of the simulators, defining the scenarios to be simulated, and collecting the data traces is not an easy task. It is an incremental process that requires constantly comparing the estimated metrics and the flow of simulated actions against real data. In this work, we present an experimental framework designed for the development of large-scale simulations of two applications used when a natural disaster strikes. The first one is a social application aimed at registering volunteers and managing emergency campaigns and tasks. The second one is a benchmark application for a data repository, MongoDB. The applications are deployed on a distributed platform that combines different technologies: a proxy, a container orchestrator, containers, and a NoSQL database. We simulate both applications and the platform architecture. 
We validate our simulators using real traces collected during drills simulating emergency situations.</p> Luis Veas Castillo, Gabriel Ovando-Leon, Gabriel Astudillo, Veronica Gil-Costa, Mauricio Marín Copyright (c) 2020 Luis Veas Castillo, Juan Ovando, Gabriel Astudillo, Veronica Gil-Costa, Mauricio Marín https://journal.info.unlp.edu.ar/JCST/article/view/1432 Thu, 29 Oct 2020 00:00:00 +0000 Data Science & Engineering into Food Science: A novel Big Data Platform for Low Molecular Weight Gelators’ Behavioral Analysis https://journal.info.unlp.edu.ar/JCST/article/view/1435 <p>The objective of this article is to introduce a comprehensive end-to-end solution aimed at enabling the application of state-of-the-art Data Science and Analytics methodologies to a food science related problem. The problem concerns the automation of loading, homogenization, complex processing and real-time accessibility of low molecular-weight gelator (LMWG) data to gain insights into their assembly behavior, i.e. whether a gel can be formed with an appropriate solvent or not. Most of the work within the field of Colloidal and Food Science in relation to LMWGs has centered on identifying adequate solvents that can generate stable gels and on evaluating how the LMWG characteristics can affect gelation. As a result, extensive databases have been methodically and manually compiled, storing results from different laboratory experiments. The complexity of those databases, and the errors caused by manual data entry, can interfere with the analysis and visualization of relations and patterns, limiting the utility of the experimental work. For these reasons, we propose a scalable and flexible Big Data solution to enable the unification, homogenization and availability of the data through the application of appropriate tools and methodologies. This approach contributes to optimizing data acquisition during LMWG research and to reducing redundant data processing and analysis, while also enabling researchers to explore a wider range of testing conditions and push forward the frontier of Food Science research.</p> Verónica Cuello, Gonzalo Zarza, Maria Corradini, Michael Rogers Copyright (c) 2020 Verónica Cuello, Gonzalo Zarza, Maria Corradini, Michael Rogers https://journal.info.unlp.edu.ar/JCST/article/view/1435 Thu, 29 Oct 2020 00:00:00 +0000 First steps towards a dynamical model for forest fire behaviour in Argentinian landscapes https://journal.info.unlp.edu.ar/JCST/article/view/1358 <p>We developed a Reaction Diffusion Convection (RDC) model for forest fire propagation coupled to a visualization platform with several functionalities requested by local firefighters. The dynamical model aims to understand the key mechanisms driving fire propagation in the Patagonian region. In this work we show the first tests, considering combustion and diffusion in artificial landscapes. The simulator, developed in CUDA/OpenGL, integrates several layers including topography, weather, and fuel data. It allows the user to visualize the fire propagation and to interact with the simulation at run time. <br>The Fire Weather Index (FWI), extensively used in Argentina to support operative preventive measures for forest fire management, was also coupled to our visualization platform. 
This additional functionality allows the user to visualize on the landscape the fire risks, which are closely related to the FWI, for Northwest Patagonian forests in Argentina.</p> Monica Denham, Karina Laneri, Viviana Zimmerman, Sigfrido Waidelich Copyright (c) 2020 Monica Denham, Karina Laneri, Viviana Zimmerman, Sigfrido Waidelich https://journal.info.unlp.edu.ar/JCST/article/view/1358 Thu, 29 Oct 2020 00:00:00 +0000 Intelligent data analysis of the influence of COVID-19 on the stock market using Case Based Reasoning https://journal.info.unlp.edu.ar/JCST/article/view/1433 <p>Starting with the differences between forecasting and prediction and going deeper into prediction, a knowledge-based model is presented. The evolution of the stock markets is analyzed, as well as how previous epidemics and pandemics have affected them and how they are currently being affected by COVID-19. The defined model is applied to a use case using Case-Based Reasoning (CBR): it draws an analogy between the 2008 crisis and the COVID-19 crisis of 2020 to predict whether the stock markets will take more or less time to recover.</p> Antonio Lorenzo Sánchez, Jose Olivas Copyright (c) 2020 Antonio Lorenzo Sánchez https://journal.info.unlp.edu.ar/JCST/article/view/1433 Thu, 29 Oct 2020 00:00:00 +0000 An analysis of k-mer frequency features with SVM and CNN for viral subtyping classification https://journal.info.unlp.edu.ar/JCST/article/view/1434 <p>Viral subtyping classification is very relevant for the appropriate diagnosis and treatment of illnesses. The most widely used tools are based on alignment methods; nevertheless, they are becoming too slow with the increase of genomic data. For that reason, alignment-free methods have emerged as an alternative. In this work, we analyzed four alignment-free algorithms: two methods use k-mer frequencies (Kameris and Castor-KRFE); the third uses a frequency chaos game representation of DNA with CNNs; and the last one processes DNA sequences as a digital signal (ML-DSP). In the comparison, Kameris and Castor-KRFE outperformed the rest, followed by the method based on CNNs.</p> Vicente Enrique Machaca Arceda Copyright (c) 2020 Vicente Enrique Machaca Arceda https://journal.info.unlp.edu.ar/JCST/article/view/1434 Thu, 29 Oct 2020 00:00:00 +0000 Analysis, Deployment and Integration of Platforms for Fog Computing https://journal.info.unlp.edu.ar/JCST/article/view/1436 <p>In IoT applications, data capture in a sensor network can generate a large flow of information between the nodes and the cloud, affecting response times and device complexity but, above all, increasing costs. Fog computing refers to the use of pre-processing tools to improve local data management and communication with the cloud. This work presents an analysis of the features that platforms implementing fog computing solutions should have. 
Additionally, we present an experimental integration of two specific platforms used for controlling devices in a sensor network, processing the generated data, and communicating with the cloud.</p> Joaquín de Antueno, Santiago Medina, Laura De Giusti, Armando De Giusti Copyright (c) 2020 Joaquín de Antueno, Santiago Medina, Laura De Giusti, Armando De Giusti https://journal.info.unlp.edu.ar/JCST/article/view/1436 Thu, 29 Oct 2020 00:00:00 +0000 Sketching enactive interactions https://journal.info.unlp.edu.ar/JCST/article/view/1445 <p>The continuous development of interactive technologies and the growing understanding of the body’s importance in cognitive processes have driven HCI research, specifically interaction design, to address the user’s relationship with a multitude of beyond-desktop devices. This has opened new challenges in developing the processes, methods and tools needed to achieve appropriate user experiences. Insofar as new devices and systems involve the body and the social aspects of the human being, it becomes more relevant to consider paradigms, theories and support models that go beyond the selection of navigation nodes and the appropriate visual organization of widgets and screens. Interaction design must take care not only to build the product right but also to build the right product. This thesis sits at the crossroads of three themes: the design of interactive systems with one foot in the digital and one in the physical, the theories of embodied and enactive cognition, and the creative practices supported by sketching, in particular the processes of generating, evaluating and communicating interaction design ideas. This work includes contributions of different kinds. An in-depth study of the theories of embodied and enactive cognition, the design of interaction with digital devices, and sketching as a basic tool of creative design is carried out. Based on this analysis of the existing literature and on a characterization, grounded in ethnomethodological studies, of the practice of sketching enactive interactions, a framework is proposed to conceptually organize this practice, together with a support tool for that activity conceived as a creative composition. The contributions are discussed, and possible lines of future work are considered.</p> Andres Rodriguez Copyright (c) 2020 Andres Rodriguez https://journal.info.unlp.edu.ar/JCST/article/view/1445 Thu, 29 Oct 2020 00:00:00 +0000 SEDAR: Soft Error Detection and Automatic Recovery in High Performance Computing Systems https://journal.info.unlp.edu.ar/JCST/article/view/1453 <p class="JCST-Abstract" style="margin-bottom: 6.0pt; text-align: justify;"><span lang="EN-US">Reliability and fault tolerance have become aspects of growing relevance in the field of HPC, due to the increased probability that faults of different kinds will occur in these systems. This is fundamentally due to the increasing complexity of processors in the search for improved performance, which leads to a rise in the scale of integration and in the number of components that work near their technological limits, making them increasingly prone to failures. 
Another contributing factor is the growth in the size of parallel systems to obtain greater computational power, in terms of the number of cores and processing nodes.</span></p> <p class="JCST-Abstract" style="margin-bottom: 6.0pt; text-align: justify;"><span lang="EN-US">As applications demand longer uninterrupted computation times, the impact of faults grows, due to the cost of relaunching an execution that was aborted by a fault or that concluded with erroneous results. Consequently, it is necessary to run these applications on highly available and reliable systems, requiring strategies capable of providing detection, protection and recovery against faults.</span></p> <p class="JCST-Abstract" style="margin-bottom: 6.0pt; text-align: justify;"><span lang="EN-US">In the coming years, exascale systems are expected: supercomputers with millions of processing cores, capable of performing on the order of 10<sup>18</sup> operations per second. This is a great window of opportunity for HPC applications, but it also increases the risk that they will not complete their executions. Recent studies show that, as systems continue to include more processors, the Mean Time Between Errors decreases, resulting in higher failure rates and an increased risk of corrupted results; large parallel applications are expected to deal with errors that occur every few minutes, requiring external help to progress efficiently. Silent Data Corruptions are the most dangerous errors that can occur, since they can generate incorrect results in programs that appear to execute correctly. Scientific applications and large-scale simulations are the most affected, making silent error handling the main challenge towards resilience in HPC. In message passing applications, a silent error affecting a single task can produce a pattern of corruption that spreads to all communicating processes; in the worst-case scenario, the erroneous final results cannot be detected at the end of the execution and will be taken as correct.</span></p> <p class="JCST-Abstract" style="margin-bottom: 6.0pt; text-align: justify;"><span lang="EN-US">Since scientific applications have execution times on the order of hours or even days, it is essential to find strategies that allow applications to reach correct solutions in a bounded time, despite the underlying failures. These strategies also prevent energy consumption from skyrocketing, since, if they are not used, the executions would have to be relaunched from the beginning. However, the most popular parallel programming models used in supercomputers lack support for fault tolerance.</span></p> Diego Montezanti Copyright (c) 2020 Diego Montezanti https://journal.info.unlp.edu.ar/JCST/article/view/1453 Thu, 29 Oct 2020 00:00:00 +0000
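The SEDAR abstract above motivates catching silent data corruptions before they spread through message passing. As a minimal sketch of one common detection idea in this area, duplicated execution with comparison of results before any data leaves the process, the following Python fragment may help; it is not SEDAR's actual mechanism (the abstract does not detail it), and the names compute_block and send_to_neighbors are hypothetical stand-ins for an application kernel and an MPI-style send.

```python
# Hedged sketch: detect a silent data corruption (SDC) by running the same
# computation twice and comparing the replicas before results are communicated.
# compute_block and send_to_neighbors are illustrative placeholders only.
from typing import Callable, List, Sequence


class SilentErrorDetected(RuntimeError):
    """Raised when two replicas of the same computation disagree."""


def detect_and_send(compute_block: Callable[[Sequence[float]], List[float]],
                    data: Sequence[float],
                    send_to_neighbors: Callable[[List[float]], None],
                    tol: float = 0.0) -> List[float]:
    # Duplicated execution: run the same block twice on the same input.
    first = compute_block(data)
    second = compute_block(data)

    # Compare replica outputs; a mismatch beyond the tolerance is treated
    # as a possible silent data corruption in one of the replicas.
    if len(first) != len(second) or any(abs(a - b) > tol
                                        for a, b in zip(first, second)):
        raise SilentErrorDetected("replica outputs differ; possible SDC")

    # Only validated results leave the process, so a corruption in one
    # replica cannot propagate to communicating peers.
    send_to_neighbors(first)
    return first


if __name__ == "__main__":
    # Toy usage: a trivial compute block and a print stand-in for message passing.
    detect_and_send(lambda xs: [x * x for x in xs],
                    [1.0, 2.0, 3.0],
                    lambda msg: print("sending", msg))
```

Comparing before sending is what bounds the spread of corruption described in the abstract; the obvious trade-off, roughly doubled computation per validated block, is why such detection is usually combined with recovery mechanisms such as checkpointing.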