DECSI - Articles published in journals
Browsing by date of publication; showing items 1 - 20 of 71.
Item Uso de redes neurais artificiais na predição de valores genéticos para peso aos 205 dias em bovinos da raça Tabapuã. (2012)
Ventura, Ricardo Vieira; Silva, Martinho de Almeida e; Medeiros, Talles Henrique de; Dionello, Nelson Jose Laurino; Madalena, Fernando Enrique; Fridrich, A. B.; Valente, Bruno Dourado; Santos, Glaucyana Gouvêa dos; Freitas, Luciana Salles de; Wenceslau, Raphael Rocha; Felipe, Vivian Paula Silva; Corrêa, Gerusa da Silva Salles
Data from 19,240 Tabapuã animals born between 1976 and 1995, from 152 farms located in several Brazilian states, were used to predict the breeding value for weight at 205 days of age (VG_P205) by means of artificial neural networks (ANNs), using the Levenberg-Marquardt (LM) algorithm to train on the input data. Since the network relies on supervised learning, the breeding values predicted by BLUP for the P205 trait were used as the desired output. The P205 breeding values obtained by the ANN (VG_P205_RNA) and those predicted by BLUP were highly correlated. The rankings produced by the ANN values and by the BLUP predictions differed, however, indicating risks in using ANNs for the genetic evaluation of this trait. Moreover, the insertion of new animals requires retraining the network, which remains dependent on BLUP.
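For readers unfamiliar with the approach, the sketch below illustrates the general idea on synthetic data. It is not the authors' code: the inputs and targets are hypothetical stand-ins for the animals' records and the BLUP-predicted breeding values, and it fits a one-hidden-layer network by Levenberg-Marquardt least squares as SciPy exposes it through method='lm'.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                 # stand-in inputs (one row per animal)
    y = X @ np.array([0.5, -1.0, 0.3, 0.8]) + 0.1 * rng.normal(size=200)  # stand-in BLUP targets

    H = 5                                         # hidden units

    def unpack(p):
        W1 = p[:4 * H].reshape(H, 4)
        b1 = p[4 * H:5 * H]
        w2 = p[5 * H:6 * H]
        b2 = p[6 * H]
        return W1, b1, w2, b2

    def residuals(p):                             # network output minus BLUP target
        W1, b1, w2, b2 = unpack(p)
        hidden = np.tanh(X @ W1.T + b1)
        return hidden @ w2 + b2 - y

    p0 = 0.1 * rng.normal(size=6 * H + 1)
    fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt solver
    pred = residuals(fit.x) + y                   # recover the network predictions
    print(np.corrcoef(pred, y)[0, 1])             # the paper reports a high ANN-BLUP correlation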
Item Ambiguity and context-dependent overloading. (2013)
Ribeiro, Rodrigo Geraldo; Figueiredo, Carlos Camarão de
This paper discusses ambiguity in the context of languages that support context-dependent overloading, such as Haskell. A type system for a Haskell-like programming language that supports context-dependent overloading and follows the Hindley-Milner approach of providing context-free type instantiation allows distinct derivations of the same type for ambiguous expressions. Such expressions are usually rejected by the type inference algorithm, which is thus not complete with respect to the type system. Also, Haskell's open world approach adopts a definition of ambiguity that does not conform to the existence of two or more distinct type system derivations for the same type. The article presents an alternative approach, in which the standard definition of ambiguity is followed. A type system is presented that allows only context-dependent type instantiation, enabling only one type to be derivable for each expression in a given typing context: the type of an expression can be instantiated only if required by the program context where the expression occurs. We define a notion of greatest instance type for each occurrence of an expression, which is used in the definition of a standard dictionary-passing semantics for core Haskell based on type system derivations, for which coherence is trivial. Type soundness is obtained as a result of disallowing all ambiguous expressions and all expressions involving unsatisfiability in the use of overloaded names. Following the standard definition of ambiguity, satisfiability is tested (i.e., "the world is closed") if and only if overloading is (or should have been) resolved, that is, if and only if there exist unreachable variables in the constraints on the types of expressions. Nowadays, satisfiability is tested in Haskell, in the presence of multiparameter type classes, only upon the presence of functional dependencies or some alternative mechanism that specifies conditions for closing the world, which may happen whether or not there exist unreachable type variables in constraints. In our approach the satisfiability trigger condition is given automatically, by the existence of unreachable variables in constraints, and does not need to be specified by programmers through an extra mechanism.

Item Classifying unlabeled short texts using a fuzzy declarative approach. (2013)
Romero, Francisco P.; Iranzo, Pascual Julián; Soto, Andrés; Satler, Mateus Ferreira; Casero, Juan Gallardo
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets, etc.). This has motivated an increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word frequency statistical analysis and have proved inadequate for the classification of short texts, where word occurrence is too small. On the other hand, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance. However, labeled documents might not be available, whereas unlabeled documents can easily be collected. This paper presents an approach to text categorisation that does not need a pre-classified set of training documents. The proposed method only requires the category names as user input. Each one of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency, but depends strongly on the definition of each category and on how well the text fits that definition. The proposed approach is therefore appropriate for short text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process relies on the ability of an extension of the standard Prolog language, named Bousi*Prolog, for flexible matching and knowledge representation. This declarative approach yields a text classifier that is quick and easy to build and a classification process that is easy for the user to understand. The results of experiments showed that the proposed method achieved a reasonably useful performance.

Item Temporal synchronization in mobile sensor networks using image sequence analysis. (2014)
Brito, Darlan Nunes de; Pádua, Flávio Luis Cardeal; Pereira, Guilherme A. S.
This paper addresses the problem of estimating the temporal synchronization in mobile sensor networks by using image sequence analysis of their corresponding scene dynamics. Existing methods are frequently based on adaptations of techniques originally designed for wired networks with static topologies, or on solutions specially designed for ad hoc wireless sensor networks, which have high energy consumption and low scalability with respect to the number of sensors. Unlike those, this work proposes a novel approach that reduces the problem of synchronizing a general number N of sensors to the robust estimation of a single line in R^(N+1). This line captures all temporal relations between the sensors and can be computed without any prior knowledge of these relations. It is assumed that (1) the network's mobile sensors cross the field of view of a stationary calibrated camera that operates with a constant frame rate and (2) the sensors' trajectories are estimated with limited error at a constant sampling rate, both in the world coordinate system and in the camera's image plane. Experimental results with real-world and synthetic scenarios demonstrate that our method can be successfully used to determine the temporal alignment in mobile sensor networks.
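The reduction to line estimation admits a compact illustration. The sketch below is not the authors' implementation and uses hypothetical synthetic timestamps: it fits a single line to points in R^(N+1) with RANSAC, where each point stacks roughly corresponding time instants of the N sensors and the camera, so the recovered line encodes the pairwise temporal offsets.

    import numpy as np

    def ransac_line(points, n_iters=500, tol=1.0, rng=np.random.default_rng(0)):
        best_inliers = None
        for _ in range(n_iters):
            a, b = points[rng.choice(len(points), size=2, replace=False)]
            d = b - a
            norm = np.linalg.norm(d)
            if norm < 1e-9:
                continue
            d = d / norm
            # perpendicular distance of every point to the candidate line
            diff = points - a
            dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
            inliers = dist < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # refit a least-squares (PCA) line on the inliers
        p = points[best_inliers]
        centroid = p.mean(axis=0)
        direction = np.linalg.svd(p - centroid)[2][0]
        return centroid, direction

    # synthetic example: three sensors with offsets (10, -5, 3) relative to the camera clock
    t = np.linspace(0, 100, 80)
    noise = np.random.default_rng(1).normal(scale=0.3, size=(80, 4))
    pts = np.column_stack([t, t + 10, t - 5, t + 3]) + noise
    print(ransac_line(pts))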
Item The optimal design of HTS devices. (2014)
Das, Rajeev; Oliveira, Fernando Bernardes de; Guimarães, Frederico Gadelha; Lowther, David A.
In the design of High Temperature Superconductor (HTS) based electromagnetic devices, some of the major challenges include AC loss reduction, minimization of heat leakage, and reduction of the amount of HTS material used in order to decrease cost. This paper considers a computer model of HTS based leads involving a multiphysics scenario that takes into account the electromagnetic and thermal behavior of the system. The work provides an optimum solution by applying an approach based on Multi-Objective Optimization (MOO). The proposed framework provides a technique to effectively optimize HTS leads that not only deals with the non-linear aspect of HTS materials but also includes a multiphysics environment.

Item Workload characterization of a location-based social network. (2014)
Lins, Theo Silva; Pereira, Adriano César Machado; Souza, Fabrício Benevenuto de
Recently, there has been a large popularization of location-based social networks, such as Foursquare and Apontador, in which users can share their current locations, upload tips, and make comments about places. Part of this popularity is due to easy access to the Internet through mobile devices with GPS. Despite the various efforts towards understanding the characteristics of these systems, little is known about the access patterns of their users. Providers of this kind of service face different challenges that could benefit from such an understanding, such as content storage, server performance and scalability, personalization, and service differentiation for users. This article aims at characterizing and modeling the patterns of requests that reach a server of a location-based social network. To do that, we use a dataset obtained from Apontador, a Brazilian system with characteristics similar to Foursquare and Gowalla, where users share information about their locations and can navigate through the locations existing in the system. As a result, we identified models that describe unique characteristics of user sessions in this kind of system, the patterns in which requests arrive at the server, and the access profiles of users in the system.

Item An integer programming approach to the multimode resource-constrained multiproject scheduling problem. (2015)
Toffolo, Túlio Ângelo Machado; Santos, Haroldo Gambini; Carvalho, Marco Antonio Moreira de; Araujo, Janniele Aparecida Soares
The project scheduling problem (PSP) is the subject of several studies in computer science, mathematics, and operations research because of the hardness of solving it and its practical importance. This work tackles an extended version of the problem known as the multimode resource-constrained multiproject scheduling problem. A solution to this problem consists of a schedule of jobs from various projects, such that the job allocations do not exceed the stipulated limits of renewable and nonrenewable resources. To accomplish this, a set of execution modes for the jobs must be chosen, as the jobs' durations and amounts of needed resources vary depending on the mode selected. Finally, the schedule must also respect precedence constraints between jobs. This work proposes heuristic methods based on integer programming to solve the PSP considered in the Multidisciplinary International Scheduling Conference: Theory and Applications (MISTA) 2013 Challenge. The developed solver was ranked third in the competition, being able to find feasible and competitive solutions for all instances and improving the best known solutions for some problems.
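To give a flavour of the integer programming side, here is a minimal time-indexed sketch with hypothetical data, built with the open-source python-mip package. It is not the authors' formulation: it covers only single-mode jobs, one renewable resource, and precedence constraints, a small fragment of what the challenge problem requires.

    from mip import Model, xsum, minimize, BINARY

    dur = [3, 2, 4, 2]                 # job durations
    use = [2, 1, 2, 1]                 # renewable resource units used per period
    cap = 3                            # resource capacity
    prec = [(0, 2), (1, 3)]            # job a must finish before job b starts
    T = sum(dur)                       # crude scheduling horizon

    m = Model()
    x = [[m.add_var(var_type=BINARY) for t in range(T)] for j in range(len(dur))]

    for j in range(len(dur)):          # each job starts exactly once, early enough to finish
        m += xsum(x[j][t] for t in range(T - dur[j] + 1)) == 1
        m += xsum(x[j][t] for t in range(T - dur[j] + 1, T)) == 0
    for a, b in prec:                  # precedence: start(b) >= start(a) + dur(a)
        m += xsum(t * x[b][t] for t in range(T)) >= xsum(t * x[a][t] for t in range(T)) + dur[a]
    for t in range(T):                 # resource capacity in every period
        m += xsum(use[j] * x[j][s]
                  for j in range(len(dur))
                  for s in range(max(0, t - dur[j] + 1), t + 1)) <= cap

    # minimize the sum of start times as a rough makespan proxy
    m.objective = minimize(xsum(t * x[j][t] for j in range(len(dur)) for t in range(T)))
    m.optimize()
    print([next(t for t in range(T) if x[j][t].x >= 0.99) for j in range(len(dur))])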
Item Efficiently computing the drainage network on massive terrains using external memory flooding process. (2015)
Gomes, Thiago Luange; Magalhães, Salles Viana Gomes de; Andrade, Marcus Vinícius Alvim; Franklin, W. Randolph; Pena, Guilherme de Castro
We present EMFlow, a very efficient algorithm and its implementation, to compute the drainage network (i.e. the flow direction and flow accumulation) on huge terrains stored in external memory. Its utility lies in processing the large volume of high-resolution terrestrial data newly available, which internal memory algorithms cannot handle efficiently. The flow direction is computed using an adaptation of our previous method RWFlood, which uses a flooding process to quickly remove internal depressions or basins. Flooding, proceeding inward from the outside of the terrain, works oppositely to the common method of computing downhill flow from the peaks. To reduce the number of I/O operations, EMFlow adopts a new strategy of subdividing the terrain into islands that are processed separately. The terrain cells are grouped into blocks that are stored in a special data structure managed as a cache memory. EMFlow's execution time was compared against the two most recent and most efficient published methods: TerraFlow and r.watershed.seg. It was, on average, 25 and 110 times faster than TerraFlow and r.watershed.seg, respectively, and EMFlow could process larger datasets. Processing a 50000 × 50000 terrain on a machine with 2 GB of internal memory took about 4500 seconds, compared to 87000 seconds for TerraFlow, while r.watershed.seg failed on terrains larger than 15000 × 15000. On very small terrains, say 1000 × 1000, EMFlow takes under a second, compared to 6 and 20 seconds for r.watershed.seg and TerraFlow, respectively. EMFlow could thus be a component of a future interactive system in which a user could modify the terrain and immediately see the new hydrography.
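The flooding idea can be sketched in a few lines. The fragment below is only an in-memory illustration of a priority-flood sweep, not the released implementation; EMFlow's contribution lies in doing this I/O-efficiently on external-memory terrains, which the sketch ignores. The terrain is flooded from its border, and each cell drains toward the neighbor through which the flood reached it, so internal depressions are filled implicitly.

    import heapq
    import numpy as np

    def flood_flow_directions(elev):
        rows, cols = elev.shape
        visited = np.zeros((rows, cols), dtype=bool)
        flow_to = np.full((rows, cols, 2), -1)        # downstream neighbor; (-1,-1) = off-terrain
        heap = []
        for r in range(rows):                          # seed the flood with the border cells
            for c in range(cols):
                if r in (0, rows - 1) or c in (0, cols - 1):
                    heapq.heappush(heap, (elev[r, c], r, c))
                    visited[r, c] = True
        while heap:
            z, r, c = heapq.heappop(heap)              # always expand the lowest front cell
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                    visited[nr, nc] = True
                    flow_to[nr, nc] = (r, c)           # water drains toward the flood front
                    # the flood level never drops, which fills pits as a side effect
                    heapq.heappush(heap, (max(z, elev[nr, nc]), nr, nc))
        return flow_to

    demo = np.array([[5., 5, 5, 5], [5, 1, 2, 5], [5, 2, 1, 5], [5, 5, 5, 5]])
    print(flood_flow_directions(demo)[1, 1])           # where the inner pit cell drains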
Item An on-the-fly grammar modification mechanism for composing and defining extensible languages. (2015)
Reis, Leonardo Vieira dos Santos; Iorio, Vladimir Oliveira Di; Bigonha, Roberto da Silva
Adaptable Parsing Expression Grammar (APEG) is a formal method for defining the syntax of programming languages. It provides an on-the-fly mechanism to perform modifications of the syntax of the language during parsing time. The primary goal of this dynamic mechanism is the formal specification and automatic parser generation for extensible languages. In this paper, we show how APEG can be used for the definition of the extensible languages SugarJ and Fortress, clarifying many aspects of the syntax of these languages. We also show that the mechanism for on-the-fly modification of syntax rules can be useful for defining grammars in a modular way, implementing almost all types of language composition in the context of the specification of extensible languages.

Item Análise comparativa de detectores e descritores de características locais em imagens no âmbito do problema de autocalibração de câmeras. (2016)
Brito, Darlan Nunes de; Pádua, Flávio Luis Cardeal; Lopes, Aldo Peres Campos e; Dalip, Daniel Hasan
This work presents a comparative analysis of different state-of-the-art methods for the detection and description of local image features, with the goal of solving the camera self-calibration problem robustly and efficiently. To achieve this goal, effective detector and descriptor methods are essential, since the robust matching of features across a set of successive images, subject to a wide variety of affine distortions and changes in the 3D viewpoint of the scene, is crucial for the accuracy of the camera parameter computations. Although several detectors and descriptors have been proposed in the literature, their impact on the camera self-calibration process has not yet been properly studied. In this comparative analysis, the epipolar, reprojection, and reconstruction errors, as well as the running times of the methods, are used as quality criteria for the self-calibration. The experimental results show that binary feature detectors and descriptors (ORB, BRISK and FREAK) and floating-point ones (SIFT and SURF) present equivalent reprojection and reconstruction errors. Considering the lower computational cost of the binary methods, however, their use in solving camera self-calibration problems is strongly recommended.
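As an illustration of the kind of pipeline being compared (assuming OpenCV is installed and using a hypothetical pair of input images, frame1.png and frame2.png; this is not the evaluation code from the paper), the fragment below detects and matches ORB features. SIFT, BRISK, and the other methods can be swapped in through the corresponding cv2 factory functions, with NORM_L2 in place of NORM_HAMMING for floating-point descriptors.

    import cv2

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming norm suits binary descriptors; crossCheck keeps only mutual best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} matches; best distance {matches[0].distance}")

The matched point pairs are what the self-calibration stage consumes, so the matcher's robustness directly bounds the quality of the estimated camera parameters.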
Item Ambiguity and constrained polymorphism. (2016)
Figueiredo, Carlos Camarão de; Figueiredo, Lucília Camarão de; Ribeiro, Rodrigo Geraldo
This paper considers the problem of ambiguity in Haskell-like languages. Overloading resolution is characterized in the context of constrained polymorphism by the presence of unreachable variables in constraints on the type of the expression. A new definition of ambiguity is presented, in which the existence of more than one instance for the constraints on an expression type is considered only after overloading resolution. This introduces a clear distinction between ambiguity and overloading resolution, makes ambiguity more intuitive and independent of extra concepts such as functional dependencies, and enables more programs to type-check, as fewer ambiguities arise. The paper presents a type system and a type inference algorithm that include: a constraint-set satisfiability function, which determines whether a given set of constraints is entailed or not in a given context, focusing on issues related to decidability; a constraint-set improvement function, for filtering out constraints for which overloading has been resolved; and a context-reduction function, for reducing constraint sets according to matching instances. A standard dictionary-style semantics for core Haskell is also presented.

Item Late acceptance hill-climbing for high school timetabling. (2016)
Fonseca, George Henrique Godim da; Santos, Haroldo Gambini; Carrano, Eduardo Gontijo
The application of Late Acceptance Hill-Climbing (LAHC) to solve the High School Timetabling Problem is the subject of this manuscript. The original algorithm and two variants proposed here are tested jointly with other state-of-the-art methods on the instances proposed in the Third International Timetabling Competition. Following the same rules as the competition, the LAHC-based algorithms noticeably outperformed the winning methods. These results, and reports from the literature, suggest that LAHC is a reliable method that can compete with the most widely employed local search algorithms.
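The core of the original LAHC is only a few lines. The generic sketch below is a toy illustration, not the paper's timetabling-specific variants; it shows the method's single distinctive mechanism: a candidate is accepted when it is no worse than the cost recorded a fixed number of iterations earlier, rather than only the current cost.

    import random

    def lahc(initial, cost, neighbor, history_len=50, max_iters=100000):
        current, f_current = initial, cost(initial)
        best, f_best = current, f_current
        history = [f_current] * history_len      # the "late acceptance" cost list
        for i in range(max_iters):
            cand = neighbor(current)
            f_cand = cost(cand)
            v = i % history_len
            # accept if no worse than the cost history_len iterations ago, or than now
            if f_cand <= history[v] or f_cand <= f_current:
                current, f_current = cand, f_cand
                if f_cand < f_best:
                    best, f_best = cand, f_cand
            history[v] = f_current
        return best, f_best

    # toy usage: minimize (x - 42)^2 over the integers with unit moves
    sol, val = lahc(0, cost=lambda x: (x - 42) ** 2,
                    neighbor=lambda x: x + random.choice((-1, 1)))
    print(sol, val)

The single parameter history_len controls how greedy the search is: length 1 reduces LAHC to ordinary hill-climbing, while longer lists let the search temporarily accept worse solutions.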
Item Evaluation of interest point matching methods for projective reconstruction of 3D scenes. (2016)
Brito, Darlan Nunes de; Nunes, Cristiano Fraga Guimarães; Pádua, Flávio Luis Cardeal; Lacerda, Anisio Mendes
This work evaluates the application of different state-of-the-art methods for interest point matching, aiming at the robust and efficient projective reconstruction of three-dimensional scenes. Projective reconstruction refers to the computation of the structure of a scene from images taken with uncalibrated cameras. To achieve this goal, the use of an effective point matching algorithm is essential. Even though several point matching methods have been proposed in the literature, their impact on the projective reconstruction task has not yet been carefully studied. Our evaluation uses as criteria the estimated epipolar, reprojection, and reconstruction errors, as well as the running times of the algorithms. Specifically, we compare five different techniques: SIFT, SURF, ORB, BRISK and FREAK. Our experiments show that binary algorithms such as ORB and BRISK are as accurate as floating-point algorithms like SIFT and SURF, yet have a smaller computational cost.

Item Watershed-ng: an extensible distributed stream processing framework. (2016)
Rocha, Rodrigo; Hott, Bruno; Dias, Vinícius; Ferreira, Renato; Meira Júnior, Wagner; Guedes Neto, Dorgival Olavo
Most high-performance data processing (a.k.a. big data) systems allow users to express their computation using abstractions (like MapReduce) that simplify the extraction of parallelism from applications. Most frameworks, however, do not allow users to specify how communication must take place: that element is deeply embedded into the run-time system abstractions, making changes hard to implement. In this work, we describe Watershed-ng, our re-engineering of the Watershed system, a framework based on the filter-stream paradigm and originally focused on continuous stream processing. Like other big-data environments, Watershed provided object-oriented abstractions to express computation (filters), but the implementation of streams was a run-time system element. By isolating stream functionality into appropriate classes, the combination of communication patterns and the reuse of common message handling functions (like compression and blocking) become possible. The new architecture even allows the design of new communication patterns, for example allowing users to choose MPI, TCP, or shared memory implementations of communication channels as their problem demands. Applications designed for the new interface showed reductions in code size on the order of 50% and above in some cases. The performance results also showed significant improvements, because some implementation bottlenecks were removed in the re-engineering process.

Item A survey on the geographic scope of textual documents. (2016)
Monteiro, Bruno Rabello; Davis Junior, Clodoveu Augusto; Fonseca, Fred
Recognizing references to places in texts is needed in many applications, such as search engines, location-based social media, and document classification. In this paper we present a survey of methods and techniques for the recognition and identification of places referenced in texts. We discuss concepts and terminology, and propose a classification of the solutions given in the literature. We introduce a definition of the Geographic Scope Resolution (GSR) problem, dividing it into three steps: geoparsing, reference resolution, and grounding references. Solutions to the first two steps are organized according to the method used, and solutions to the third step are organized according to the type of output produced. We found that it is difficult to compare existing solutions directly to one another, because they often create their own benchmarking data, targeted to their own problem.

Item DengueME: a tool for the modeling and simulation of dengue spatiotemporal dynamics. (2016)
Lima, Tiago França Melo de; Lana, Raquel Martins; Carneiro, Tiago Garcia de Senna; Codeço, Cláudia Torres; Machado, Gabriel Souza; Ferreira, Lucas Saraiva; Medeiros, Líliam César de Castro; Davis Junior, Clodoveu Augusto
The prevention and control of dengue are great public health challenges for many countries, particularly since 2015, as other arboviruses have been observed to interact significantly with dengue virus. Different approaches and methodologies have been proposed and discussed by the research community. An important and widely used tool is modeling and simulation, which helps us understand epidemic dynamics and create scenarios to support planning and decision making processes. With this aim, we proposed and developed DengueME, a collaborative open source platform to simulate dengue disease and the dynamics of its vector. It supports compartmental and individual-based models, implemented over a GIS database, that represent Aedes aegypti population dynamics, human demography, human mobility, urban landscape, and dengue transmission mediated by human and mosquito encounters. A user-friendly graphical interface was developed to facilitate model configuration and data input, and a library of models was developed to support teaching-learning activities. DengueME was applied in case studies and evaluated by specialists. Further improvements will be made in future work to enhance its extensibility and usability.
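As an illustration of the compartmental (non-spatial) kind of model that such a platform supports, the sketch below integrates a minimal human-SIR/vector-SI dengue system with SciPy. It is a hand-written sketch with purely illustrative parameter values, not a DengueME model.

    import numpy as np
    from scipy.integrate import odeint

    def dengue(y, t, beta_h, beta_v, gamma, mu_v):
        Sh, Ih, Rh, Sv, Iv = y
        Nh = Sh + Ih + Rh
        new_h = beta_h * Sh * Iv / Nh          # mosquito-to-human infections
        new_v = beta_v * Sv * Ih / Nh          # human-to-mosquito infections
        return [-new_h,                        # susceptible humans
                new_h - gamma * Ih,            # infected humans recover at rate gamma
                gamma * Ih,                    # recovered humans
                mu_v * (Sv + Iv) - new_v - mu_v * Sv,  # vector births balance deaths
                new_v - mu_v * Iv]             # infected vectors die at rate mu_v

    y0 = [9999, 1, 0, 20000, 0]                # one infected human seeds the outbreak
    t = np.linspace(0, 200, 1000)
    sol = odeint(dengue, y0, t, args=(0.30, 0.25, 1 / 7, 1 / 14))
    print("epidemic peak (infected humans):", sol[:, 1].max())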
Item Multi-objective decision in machine learning. (2016)
Medeiros, Talles Henrique de; Rocha, Honovan Paz; Torres, Frank Sill; Takahashi, Ricardo Hiroshi Caldeira; Braga, Antônio de Pádua
This work presents a novel approach to decision making for multi-objective binary classification problems. The purpose of the decision process is to select, within a set of Pareto-optimal solutions, the model that minimizes the structural risk (generalization error). This new approach utilizes a kind of prior knowledge that, if available, allows the selection of a model that better represents the problem in question. Prior knowledge about the imprecision of the collected data enables the identification of the region of equivalent solutions within the set of Pareto-optimal solutions. Results for binary classification problems with synthetic and real data sets indicate equal or better performance in terms of decision efficiency compared to similar approaches.

Item SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods. (2016)
Ribeiro, Filipe Nunes; Araújo, Matheus; Gonçalves, Pollyanna; Gonçalves, Marcos André; Souza, Fabrício Benevenuto de
In the last few years, thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged, and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiment, including lexical-based and supervised machine learning methods. Despite the vast interest in the theme and the wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need for a thorough apples-to-apples comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originating from different data sources. Such a comparison is key to understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we release the methods' codes and the datasets used in this article, deploying them in a benchmark system that provides an open API for accessing and comparing sentence-level sentiment analysis methods.

Item A cooperative coevolutionary algorithm for the multi-depot vehicle routing problem. (2016)
Oliveira, Fernando Bernardes de; Enayatifar, Rasul; Sadaei, Hossein Javedani; Guimarães, Frederico Gadelha; Potvin, Jean Yves
The Multi-Depot Vehicle Routing Problem (MDVRP) is an important variant of the classical Vehicle Routing Problem (VRP) in which the customers can be served from a number of depots. This paper introduces a cooperative coevolutionary algorithm to minimize the total route cost of the MDVRP. Coevolutionary algorithms are inspired by the simultaneous evolution of two or more species. In this approach, the problem is decomposed into smaller subproblems, and individuals from different populations are combined to create a complete solution to the original problem. This paper presents a problem decomposition approach for the MDVRP in which each subproblem becomes a single-depot VRP and evolves independently in its domain space. Customers are distributed among the depots based on their distance from the depots and the distance from their closest neighbor. A population is associated with each depot, and its individuals represent partial solutions to the problem, that is, sets of routes over the customers assigned to the corresponding depot. The fitness of a partial solution depends on its ability to cooperate with partial solutions from other populations to form a complete solution to the MDVRP. As the problem is decomposed and each part evolves separately, this approach is well suited to parallel environments. Therefore, a parallel evolution strategy environment with a variable-length genotype, coupled with local search operators, is proposed. A large number of experiments have been conducted to assess the performance of this approach. The results suggest that the proposed coevolutionary algorithm in a parallel environment is able to produce high-quality solutions to the MDVRP in low computational time.
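The decomposition step is easy to picture. The sketch below simplifies the paper's assignment rule (it ignores the closest-neighbor term) and uses random hypothetical coordinates: each customer goes to its nearest depot, and each resulting single-depot subproblem gets its own population.

    import numpy as np

    rng = np.random.default_rng(0)
    depots = rng.uniform(0, 100, size=(3, 2))          # hypothetical depot coordinates
    customers = rng.uniform(0, 100, size=(30, 2))      # hypothetical customer coordinates

    # distance from every customer to every depot, then nearest-depot assignment
    d = np.linalg.norm(customers[:, None, :] - depots[None, :, :], axis=2)
    assignment = d.argmin(axis=1)

    subproblems = {k: customers[assignment == k] for k in range(len(depots))}
    for k, cust in subproblems.items():
        print(f"depot {k}: {len(cust)} customers -> one population / one single-depot VRP")

In the paper, individuals from these per-depot populations are then recombined into full MDVRP solutions, and a partial solution's fitness reflects how well it cooperates in those recombinations.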
Item A comparison between cost optimality and return on investment for energy retrofit in buildings - a real options perspective. (2016)
Tadeu, Sérgio Fernando; Alexandre, Rafael Frederico; Tadeu, António J. B.; Antunes, Carlos Henggeler; Simões, Nuno A. V.; Silva, Patrícia Pereira da
European Union (EU) regulations aim to ensure that the energy performance of buildings meets the cost-optimality criteria for energy efficiency measures. The methodological framework proposed in EU Delegated Regulation 244 is addressed to national authorities (not investors); the cost-optimal level is calculated to develop regulations applicable at the domestic level. Despite the complexity and the large number of possible combinations of economically viable efficiency measures, the real options for improving energy performance available to decision makers in building retrofit can be established. Our study considers a multi-objective optimization approach to identify the minimum global cost and primary energy needs of 154,000 combinations of energy efficiency measures. The proposed model is solved by the NSGA-II multi-objective evolutionary algorithm. As a result, the cost-optimal levels and a return-on-investment approach are compared for a set of suitable solutions for a reference building. Eighteen combinations of retrofit measures are selected, and an analysis of the influence of real options on investments is proposed. We show that a sound methodological approach to determining the advantages of this type of investment should be offered, so that Member States can provide valuable information and ensure that the minimum requirements are profitable to most investors.
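To make the screening step concrete, the sketch below applies a plain non-dominated filter to hypothetical (global cost, primary energy) pairs. The paper itself runs NSGA-II over 154,000 real combinations of measures, so this shows only the underlying Pareto idea, with synthetic numbers.

    import numpy as np

    rng = np.random.default_rng(0)
    # columns: global cost (EUR/m2) and primary energy need (kWh/m2.year), synthetic
    objectives = rng.uniform([50, 40], [400, 200], size=(1000, 2))

    def pareto_front(F):
        keep = np.ones(len(F), dtype=bool)
        for i, f in enumerate(F):
            if keep[i]:
                # drop every point that f dominates (no better anywhere, worse somewhere)
                dominated = np.all(F >= f, axis=1) & np.any(F > f, axis=1)
                keep &= ~dominated
        return F[keep]

    front = pareto_front(objectives)
    print(len(front), "non-dominated combinations; cheapest on the front:",
          front[front[:, 0].argmin()])

The cost-optimal level corresponds to the cheapest point on this front, while the return-on-investment comparison in the paper looks at how the remaining front points trade extra cost for lower energy needs.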