DECOM - Works presented at events


Recent Submissions

Now showing 1 - 20 of 46
  • Item
    Scatter search based approach for the quadratic assignment problem
    (1997) Cung, Van Dat; Mautor, Thierry; Michelon, Philippe Yves Paul; Tavares, Andréa Iabrudi
    Scatter search is an evolutionary heuristic, proposed two decades ago, that uses linear combinations of a population subset to create new solutions. A special operator is used to ensure their feasibility and to improve their quality. In this paper, we propose a scatter search approach to the quadratic assignment problem (QAP). The basic method is extended with intensification and diversification stages, and we present a procedure to generate good scattered initial solutions.
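The recombination-plus-repair step described in this abstract can be illustrated for permutation-encoded QAP solutions. The sketch below is a minimal illustration, not the paper's operator: the combination keeps positions where two parents agree and the "repair" fills the remaining positions with the unused locations, guaranteeing a feasible permutation. All function names are hypothetical.

```python
import random

def qap_cost(perm, flow, dist):
    """Total QAP cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def combine(p1, p2):
    """Recombine two parent permutations: keep positions where the
    parents agree, then repair feasibility by filling the remaining
    positions with the unused locations in random order."""
    n = len(p1)
    child = [p1[i] if p1[i] == p2[i] else None for i in range(n)]
    unused = [v for v in range(n) if v not in child]
    random.shuffle(unused)
    return [c if c is not None else unused.pop() for c in child]
```

In a full scatter-search loop this combination would be applied to subsets of a reference set, followed by a local improvement phase.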
  • Item
    An embedded converter from RS232 to universal serial bus.
    (2001) Zuquim, Ana Luiza de Almeida Pereira; Coelho Júnior, Claudionor José Nunes; Fernandes, Antônio Otávio; Oliveira, Marcos Pêgo; Tavares, Andréa Iabrudi
    Universal Serial Bus (USB) is a new personal computer interconnection protocol, developed to make the connection of peripheral devices to a computer easier and more efficient. It reduces the cost for the end user, improves communication speed and supports simultaneous attachment of multiple devices (up to 127). RS232, on the other hand, was designed for single-device connection, but is one of the most widely used communication protocols. An embedded converter from RS232 to USB is very attractive, since it would allow serial-based devices to enjoy USB advantages without major changes. This work describes the specification and development of such a converter, and it is also a useful guide for implementing other USB devices. The converter specification was based on the serial communication requirements of Engetron UPSs, and its implementation uses a Cypress microcontroller with USB support.
  • Item
    An early warning system for space-time cluster detection.
    (2003) Assunção, Renato Martins; Tavares, Andréa Iabrudi; Kulldorff, Martin
    A new topic of great relevance and concern has been the design of efficient early warning systems to detect as soon as possible the emergence of spatial clusters. In particular, many applications involving spatial events recorded as they occur sequentially in time require this kind of analysis, such as fire spots in forest areas as in the Amazon, crimes occurring in urban centers, locations of new disease cases to prevent epidemics, etc. We propose a statistical method to test for the presence of space-time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. It is based on scanning the three-dimensional space with a score test statistic under the null hypothesis that the point process is an inhomogeneous Poisson point process with a space-time separable first-order intensity. We discuss an algorithm to carry out the test and we illustrate our method with space-time crime data from Belo Horizonte, a large Brazilian city.
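The scanning idea in this abstract can be sketched in a toy form. The code below is only an illustration of the cylinder scan, not the authors' score test: it centres space-time cylinders on observed events, counts events inside each cylinder, and reports the densest one; in the actual method a likelihood-based score statistic replaces the raw count.

```python
def scan_cylinders(events, radii, time_windows):
    """Toy space-time scan. events: list of (x, y, t) points. For each
    event taken as a cylinder centre, count events within a spatial
    radius r and a trailing time window w, and return the densest
    cylinder as (count, (cx, cy, ct, r, w))."""
    best = (0, None)
    for cx, cy, ct in events:
        for r in radii:
            for w in time_windows:
                count = sum(1 for x, y, t in events
                            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
                            and ct - w <= t <= ct)
                if count > best[0]:
                    best = (count, (cx, cy, ct, r, w))
    return best
```

Testing significance would then require comparing the best score against its distribution under the inhomogeneous-Poisson null, e.g. via Monte Carlo replication.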
  • Item
    Balancing coordination and synchronization cost in cooperative situated multi-agent systems with imperfect communication.
    (2004) Tavares, Andréa Iabrudi; Campos, Mário Fernando Montenegro
    We propose a new Markov team decision model for the decentralized control of cooperative multi-agent systems with imperfect communication. Informational classes capture the system's communication semantics and uncertainties about transmitted information, and stochastic transmission models, including delayed and lost messages, summarize characteristics of communication devices and protocols. This model provides a quantitative solution to the problem of balancing coordination and synchronization cost in cooperative domains, but its exact solution is computationally infeasible. We propose a generic heuristic approach, based on an off-line centralized team plan. Decentralized decision-making relies on Bayesian dynamic system estimators and decision-theoretic policy generators. These generators use the system estimators to express an agent's uncertainty about the system state and also to quantify the expected effects of communication on local and external knowledge. Probabilities of external team behavior, a byproduct of the policy generators, are used in the system estimators to infer state transitions. Experimental results concerning two previously proposed multi-agent tasks are presented, including limited communication range and reliability.
  • Item
    Efficient allocation of verification resources using revision history information.
    (2008) Nacif, José Augusto Miranda; Silva, Thiago; Tavares, Andréa Iabrudi; Fernandes, Antônio Otávio; Coelho Júnior, Claudionor José Nunes
    Verifying large industrial designs is getting harder each day. The current verification methodologies are not able to guarantee bug-free designs. Some recurrent questions during a design verification are: Which modules are most likely to contain undetected bugs? In which modules should the verification team concentrate its effort? This information is very useful, because it is better to start verifying the most bug-prone modules. In this work we present a novel approach to answer these questions. In order to identify these bug-prone modules, the revision history of the design is used. Using information from an academic experiment, we demonstrate that there is a close relationship between the bugs/changes history and future bugs. Our results show that allocating modules for verification based on bugs/changes led to the coverage of 91.67% of future bugs, while a random-based strategy covered only 37.5% of the future bugs. Previous work has mainly focused on software engineering techniques to predict bugs.
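The allocation heuristic the abstract describes (rank modules by their bug/change history, verify the most bug-prone first) can be sketched directly. This is a minimal illustration under the assumption that the revision history is available as (module, is_bug_fix) records; the function name and data shape are hypothetical, not from the paper.

```python
from collections import Counter

def rank_modules(revisions):
    """revisions: iterable of (module, is_bug_fix) pairs extracted from
    the revision history. Returns module names ordered most bug-prone
    first, scoring each module by its bug-fix count and breaking ties
    by its total change count."""
    bugs, changes = Counter(), Counter()
    for module, is_bug_fix in revisions:
        changes[module] += 1
        if is_bug_fix:
            bugs[module] += 1
    return sorted(changes, key=lambda m: (bugs[m], changes[m]), reverse=True)
```

A verification team would then walk this list from the top, spending effort on the modules history flags as riskiest.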
  • Item
    Projeto/Reprojeto de bancos de dados relacionais : a ferramenta DB-Tool.
    (1997) Ferreira, Anderson Almeida; Laender, Alberto Henrique Frade; Silva, Altigran Soares da
    This paper describes a tool that supports the design and redesign of relational databases. The tool produces optimized relational representations of entity-relationship (ER) schemas and is implemented using Informix as its target database management system (DBMS). The tool operates in two phases. In the first phase, it receives as input an ER schema and generates a list of commands to implement the corresponding Informix schema. In the second phase, it receives a list of redesign commands specifying changes to the ER schema and generates a redesign plan to restructure the database accordingly. An example illustrates the use of the tool.
  • Item
    SyGAR – A synthetic data generator for evaluating name disambiguation methods.
    (2009) Ferreira, Anderson Almeida; Gonçalves, Marcos André; Almeida, Jussara Marques de; Laender, Alberto Henrique Frade; Veloso, Adriano Alonso
    Name ambiguity in the context of bibliographic citations is one of the hardest problems currently faced by the digital library community. Several methods have been proposed in the literature, but none of them provides the perfect solution for the problem. More importantly, basically all of these methods were tested in limited and restricted scenarios, which raises concerns about their practical applicability. In this work, we deal with these limitations by proposing a synthetic generator of ambiguous authorship records called SyGAR. The generator was validated against a gold standard collection of disambiguated records, and applied to evaluate three disambiguation methods in a relevant scenario.
  • Item
    Syntactic similarity of web documents.
    (2003) Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
    This paper presents and compares two methods for evaluating the syntactic similarity between documents. The first method uses a Patricia tree, constructed from the original document; the similarity is computed by searching the text of each candidate document in the tree. The second method uses the shingle concept to obtain a similarity measure for every document pair: each shingle from the original document is inserted in a hash table, where the shingles of each candidate document are then searched. Given an original document and some candidates, the two methods find documents that have some similarity relationship with the original document. Experimental results were obtained by using a plagiarized-document generator system on 900 documents collected from the Web. Considering the arithmetic average of the absolute differences between the expected and obtained similarity, the algorithm that uses shingles achieved a performance of 4.13% and the algorithm that uses the Patricia tree a performance of 7.50%.
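The shingle-based method can be illustrated with a minimal sketch: decompose each document into word k-grams ("shingles") and compare the resulting sets. Here a Python set stands in for the paper's hash table, and Jaccard resemblance is used as the similarity measure; the exact measure in the paper may differ.

```python
def shingles(text, k=3):
    """Set of k-word shingles (word k-grams) of a document."""
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def resemblance(a, b, k=3):
    """Jaccard resemblance between the shingle sets of two documents:
    |intersection| / |union|, in [0, 1]."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

For plagiarism detection, the original document's shingles are built once; each candidate is then scored by how many of its shingles hit that set.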
  • Item
    Geração de impressão digital para recuperação de documentos similares na web
    (2004) Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
    This paper presents a mechanism for the generation of the “fingerprint” of a Web document. This mechanism is part of a system for detecting and retrieving documents from the Web with a similarity relation to a suspicious document. The process is composed of three stages: a) generation of a fingerprint of the suspicious document, b) gathering candidate documents from the Web and c) comparison of each candidate document and the suspicious document. In the first stage, the fingerprint of the suspicious document is used as its identification. The fingerprint is composed of representative sentences of the document. In the second stage, the sentences composing the fingerprint are used as queries submitted to a search engine. The documents identified by the URLs returned from the search engine are collected to form a set of similarity candidate documents. In the third stage, the candidate documents are compared “in place” with the suspicious document. The focus of this work is on the generation of the fingerprint of the suspicious document. Experiments were performed using a collection of plagiarized documents constructed specially for this work. For the best fingerprint evaluated, on average 87.06% of the source documents used in the composition of the plagiarized document were retrieved from the Web.
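The first stage (selecting representative sentences to serve as search-engine queries) can be sketched as follows. The selection criterion below, ranking sentences by the average rarity of their words within the document, is one plausible choice for illustration, not necessarily the criterion evaluated in the paper; the function name is hypothetical.

```python
import re
from collections import Counter

def fingerprint(document, num_sentences=2):
    """Pick the most 'representative' sentences of a document to be
    submitted as search-engine queries. Here 'representative' means the
    sentences whose words are rarest within the document itself."""
    sentences = [s for s in re.split(r"[.!?]\s*", document) if s]
    freq = Counter(w.lower() for s in sentences for w in s.split())
    def rarity(sentence):
        words = sentence.split()
        return sum(1.0 / freq[w.lower()] for w in words) / len(words)
    return sorted(sentences, key=rarity, reverse=True)[:num_sentences]
```

Each returned sentence would then be sent to a search engine, and the URLs in the answers collected as similarity candidates for the third stage.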
  • Item
    Um novo retrato da web brasileira.
    (2005) Modesto, Marco; Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio; Castilho, Carlos; Yates, Ricardo Baeza
    The objective of this paper is to evaluate quantitative and qualitative characteristics of the Brazilian Web, comparing current estimates with estimates obtained five years ago. A large part of the Web content is dynamic and volatile, which makes collecting it in its entirety infeasible. Hence, the evaluation was carried out on a sample of the Brazilian Web collected in March 2005. The results are estimated in a consistent way, using an effective methodology already employed in similar studies of the Webs of other countries. Among the main aspects observed in this work are the distribution of page languages, the use of open versus proprietary tools for generating dynamic pages, the distribution of document formats, the distribution of domain types, and the distribution of links to external Web sites.
  • Item
    WIM : an information mining model for the web.
    (2005) Yates, Ricardo Baeza; Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
    This paper presents a model to mine information in applications involving Web and graph analysis, referred to as the WIM – Web Information Mining – model. We demonstrate the model characteristics using a Web warehouse. The Web data in the warehouse is modeled as a graph, where nodes represent Web pages and edges represent hyperlinks. In the model, objects are always sets of nodes and belong to one class. We have physical objects containing attributes directly obtained from Web pages and links, such as the title of a Web page or the start and end pages of a link. Logical objects can be created by performing predefined operations on any existing object. In this paper we present the model components, propose a set of eleven operators and give examples of views. A view is a sequence of operations on objects, and it represents a way to mine information in the graph. As practical examples, we present views for clustering nodes and for identifying related item sets.
  • Item
    The evolution of web content and search engines.
    (2006) Yates, Ricardo Baeza; Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
    The Web grows at a fast pace and little is known about how new content is generated. The objective of this paper is to study the dynamics of content evolution in the Web, giving answers to questions like: How much new content has evolved from the Web's old content? How much of the Web content is biased by the ranking algorithms of search engines? We used four snapshots of the Chilean Web containing documents of all the Chilean primary domains, crawled in four distinct periods of time. If a page in a newer snapshot has content of a page in an older snapshot, we say that the source is a parent of the new page. Our hypothesis is that when pages have parents, for a portion of those pages there was a query that related the parents and made possible the creation of the new page. Thus, part of the Web content is biased by the ranking function of search engines. We also define a genealogical tree for the Web, where many pages are new and do not have parents and others have one or more parents. We present the Chilean Web genealogical tree and study its components. To the best of our knowledge this is the first paper that studies how old content is used to create new content, relating a search engine ranking algorithm with the creation of new pages.
  • Item
    Genealogical trees on the web : a search engine user perspective.
    (2008) Yates, Ricardo Baeza; Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
    This paper presents an extensive study about the evolution of textual content on the Web, which shows how some new pages are created from scratch while others are created using already existing content. We show that a significant fraction of the Web is a byproduct of the latter case. We introduce the concept of Web genealogical tree, in which every page in a Web snapshot is classified into a component. We study in detail these components, characterizing the copies and identifying the relation between a source of content and a search engine, by comparing page relevance measures, documents returned by real queries performed in the past, and click-through data. We observe that sources of copies are more frequently returned by queries and more clicked than other documents.
  • Item
    Entendendo a twitteresfera brasileira.
    (2011) Souza, Fabrício Benevenuto de; Silveira, Diego; Bombonato, Leonardo; Fortes, Reinaldo Silva; Pereira Junior, Álvaro Rodrigues
    Twitter has been steadily growing as an important system where users discuss everything, expressing opinions, political views, sexual orientation and even mood, such as happiness or sadness. Social networks are regarded as places where users influence and are influenced by others, and are therefore perfect environments for word-of-mouth marketing, advertising and political campaigns. In order to offer an understanding of the use of Twitter in Brazil, this work provides a broad characterization of Brazilian Twitter users and of the content posted by these users. We correlate Brazilian demographic data with location data of Twitter users to show that some Brazilian states are underrepresented in the system. In addition, we characterize the different linguistic patterns adopted, analyze the most propagated URLs, and identify the most influential Brazilian Twitter users in each Brazilian region.
  • Item
    IDEAL-TRAFFIC : a self-adaptive management framework for building monitoring applications with support to network topology changes.
    (2012) Silva, Saul Emanuel Delabrida; Oliveira, Ricardo Augusto Rabelo; Pereira Junior, Álvaro Rodrigues
  • Item
    System-level partitioning with uncertainty.
    (1999) Albuquerque, Jones; Coelho Júnior, Claudionor José Nunes; Cavalcanti, Carlos Frederico Marcelo da Cunha; Silva Júnior, Diógenes Cecílio da; Fernandes, Antônio Otávio
    Several models and algorithms have been proposed in the past to generate HW/SW components for system-level designs. However, they were focused on a single designer who had a thorough knowledge of the design. In other words, the decision trade-offs were simplified to a stand-alone developer who did not have to consider individual skills, concurrent development for portions of the design, risk analysis for time-to-market development, nor team load and assignment. In this paper, we propose a design management approach associated with a partitioning methodology to deal with the concurrent design problems of system-level specifications. This methodology allows one to incorporate the uncertainties related to development at the very early stages of the design, and to follow up during the development of a final product.
  • Item
    An architectural framework for providing QoS in IP differentiated services networks.
    (2001) Trimintzios, Panos; Andrikopoulos, Ilias; Pavlou, George; Cavalcanti, Carlos Frederico Marcelo da Cunha; Georgatsos, Panos; Griffin, David; Jacquenet, C.; Goderis, D.; T'Joens, Y.; Georgiadis, Leonidas; Egan, R.; Memenios, G.
    As the Internet evolves, a key consideration is support for services with guaranteed quality of service (QoS). The proposed differentiated services (DiffServ) framework, which supports aggregate traffic classes, is seen as the key technology to achieve this. DiffServ currently concentrates on control/data plane mechanisms to support QoS but also recognises the need for management plane aspects through the bandwidth broker (BB). In this paper we propose a model and architectural framework for supporting end-to-end QoS in the Internet through a combination of both management and control/data plane aspects. Within the network we consider control mechanisms for traffic engineering (TE) based both on explicitly routed paths and on pure node-by-node layer 3 routing. Management aspects include customer interfacing for service level specification (SLS) negotiation, network dimensioning, traffic forecasting and dynamic resource and routing management. All these are policy-driven in order to allow for the specification of high-level management directives. Many of the functional blocks of our architectural model are also features of BBs, the main difference being that a BB is seen as driven purely by customer requests whereas, in our approach, TE functions are continually aiming at optimising the network configuration and its performance. As such, we substantiate the notion of the BB and propose an integrated management and control architecture that will allow providers to offer both qualitative and quantitative QoS-based services while optimising the use of underlying network resources.
  • Item
    Simulação de roteamento em redes IP com QoS.
    (2004) Cavalcanti, Carlos Frederico Marcelo da Cunha; Nascimento, Ricardo Alonso dos Santos; Borges, Daniel Prata Leite
    The customer demand for multimedia and real-time applications is raising the Internet to a new level of QoS guarantees. This article explains the key steps to provide an effective scheme for implementing QoS routing based on DiffServ and MPLS, and explains how to simulate it using the classical NS-2 simulator.
  • Item
    Uma metodologia heurística baseada em GRASP, VND e VNS para a resolução do problema de dimensionamento em redes IP.
    (2004) Cavalcanti, Carlos Frederico Marcelo da Cunha; Souza, Marcone Jamilson Freitas; Souza, Fernanda Sumika Hojo de; Coelho, Viviane de Souza
    This work presents a formulation and implementation of algorithms based on the optimization techniques GRASP (Greedy Randomized Adaptive Search Procedure), VND (Variable Neighborhood Descent) and VNS (Variable Neighborhood Search) to serve the new generation of the Internet, which implements Quality of Service and Traffic Engineering. This context arose from the ongoing expansion of the Internet and the need to satisfy new requirements imposed by more complex applications, such as real-time transmissions, which demand that explicit paths between an ingress node of the network and one or more egress nodes be computed. This task is also called network dimensioning. Computational results are presented, showing that it is possible to improve network dimensioning through the proposed techniques.
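The GRASP-plus-local-descent structure named in this abstract can be sketched on a toy routing-flavoured problem. The code below is only an illustration of the metaheuristic skeleton, not the paper's dimensioning algorithm: it builds a tour with a greedy-randomized construction (a restricted candidate list controlled by alpha) and then applies a swap-based descent, as a single-neighborhood stand-in for VND/VNS.

```python
import random

def grasp_tsp(dist, iters=30, alpha=0.3, seed=0):
    """GRASP skeleton on a toy TSP over a distance matrix `dist`:
    greedy-randomized construction followed by first-improvement
    swap descent, keeping the best tour over `iters` restarts."""
    rng = random.Random(seed)
    n = len(dist)
    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    best = None
    for _ in range(iters):
        # Construction: pick the next city from a restricted
        # candidate list (RCL) of the alpha-fraction nearest cities.
        tour, left = [0], set(range(1, n))
        while left:
            cand = sorted(left, key=lambda c: dist[tour[-1]][c])
            rcl = cand[:max(1, int(alpha * len(cand)))]
            pick = rng.choice(rcl)
            tour.append(pick)
            left.remove(pick)
        # Local search: first-improvement city swaps until no swap helps.
        improved = True
        while improved:
            improved = False
            for i in range(1, n):
                for j in range(i + 1, n):
                    cand_t = tour[:]
                    cand_t[i], cand_t[j] = cand_t[j], cand_t[i]
                    if tour_len(cand_t) < tour_len(tour):
                        tour, improved = cand_t, True
        if best is None or tour_len(tour) < tour_len(best):
            best = tour
    return best, tour_len(best)
```

VND/VNS would extend the descent phase with several systematically alternated neighborhoods instead of the single swap move used here.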
  • Item
    Engineering the multi-service internet : MPLS and IP-based techniques.
    (2001) Trimintzios, Panos; Georgiadis, Leonidas; Pavlou, George; Griffin, David; Cavalcanti, Carlos Frederico Marcelo da Cunha; Georgatsos, Panos; Jacquenet, C.
    IP Differentiated Services (DiffServ) is seen as the framework to support quality of service (QoS) in the Internet in a scalable fashion, turning it into a global multiservice network. In this context, integrated service/network management and traffic control mechanisms are of paramount importance for service provisioning and network operation, aiming to satisfy the QoS requirements of contracted services while optimizing the use of underlying network resources. In this paper, after briefly introducing an architectural framework for integrated service/network management and control, we concentrate on its traffic engineering aspects, comparing and contrasting two different approaches: MPLS-based explicitly routed paths and IP-based hop-by-hop routing. We consider relatively long-term network dimensioning based on the requirements of contracted services and subsequent dynamic route and resource management that react in shorter time scales to statistical traffic fluctuations and varying network conditions.