Since its creation, the Web has been a central object of research in information management, studied primarily through classical paradigms. Since the early 2000s, however, we have been witnessing drastic changes in the area of Web data management. If we had to summarize them in one sentence, it would be: real distribution of big data.
In this new scenario, capturing the meaning of heterogeneous data and developing tools for processing it play a crucial role. The Semantic Web is an enormous initiative led by the World Wide Web Consortium whose main objective is to achieve these goals, thus transforming the current Web of documents into a Web of data, where human users and computer applications can take better advantage of the massive amount of information it stores. Some key steps have been taken toward these goals. However, we are still far from having techniques that take full advantage of the semantics and the logic behind Web data once its structure, scale and distribution, taken together, are considered as a full-fledged phenomenon.
The main goal of the Center for Semantic Web Research is to study how to effectively extract semantic data from the Web, and to develop the basic tools for such extraction. This initiative brings together professors, researchers and students from the Pontifical Catholic University of Chile, the University of Chile and the University of Talca, and is funded by the Iniciativa Científica Milenio.
Wikidata is a knowledge base overseen by the Wikimedia Foundation and collaboratively edited by a community of thousands of users. The goal of Wikidata is to provide a common, interoperable source of factual information for Wikimedia projects, foremost of which is Wikipedia. In this talk, we present the results of experiments comparing the efficiency of various database engines for querying the Wikidata knowledge base, which can be conceptualized as a directed edge-labelled graph where edges can be annotated with meta-information called qualifiers. We take two popular SPARQL databases (Virtuoso, Blazegraph), a popular relational database (PostgreSQL), and a popular graph database (Neo4J) for comparison, and discuss various options as to how Wikidata can be represented in the models of each engine. We design a set of experiments to test the relative query performance of these representations in the context of their respective engines. We first execute a large set of atomic lookups to establish a baseline performance for each test setting, and subsequently perform experiments on instances of more complex graph patterns based on real-world examples. We conclude with a summary of the strengths and limitations of the engines observed. The talk is based on a paper with Aidan Hogan, Cristian Riveros, Carlos Rojas and Enzo Zerega, to be presented at ISWC 2016.
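As a minimal sketch of the data model described above (not the schema or query engine used in the paper), each Wikidata statement can be viewed as a directed labelled edge that may carry qualifier key/value pairs; an "atomic lookup" then fixes some components of the edge and leaves the rest as wildcards. The entity and property names below are toy identifiers chosen for illustration.

```python
# Toy model: a statement is (subject, property, value, qualifiers),
# i.e. a directed labelled edge annotated with qualifier meta-information.
statements = [
    ("Q1", "capital",   "Q2", {"start_time": "1818"}),
    ("Q1", "continent", "Q3", {}),
]

def match(statements, subject=None, prop=None, value=None):
    """Return all statements matching an atomic lookup pattern,
    where None acts as a wildcard (like a variable in SPARQL)."""
    return [
        s for s in statements
        if (subject is None or s[0] == subject)
        and (prop is None or s[1] == prop)
        and (value is None or s[2] == value)
    ]

# Atomic lookup: what is the capital of Q1, and since when?
for subj, p, val, quals in match(statements, subject="Q1", prop="capital"):
    print(val, quals.get("start_time"))  # Q2 1818
```

Representing such qualified edges is exactly where the engines differ: a triple store must reify the statement to attach qualifiers, a relational store can keep them in a separate table, and a property graph can place them directly on the edge.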
Regular word transductions extend the robust family of regular word languages, preserving many of its characterisations and algorithmic properties. Finite state transducers are a standard model for representing word transductions, and can be seen as automata extended with outputs. However, unlike for automata, two-way transducers are strictly more expressive than one-way transducers. It has recently been shown how to decide whether a two-way functional transducer has an equivalent one-way transducer, but the complexity of that algorithm is non-elementary. We will present an alternative and simpler characterisation that is decidable in EXPSPACE. In the positive case, the characterisation can be used to construct an equivalent one-way transducer of (worst-case optimal) doubly exponential size. We will finally discuss a generalisation of the result that characterises k-pass sweeping definability of transductions, and relate this to minimisation problems for the registers of streaming transducers.
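To make the "automaton extended with outputs" view concrete, here is a small sketch (not from the talk) of a deterministic one-way transducer: transitions map a state and input symbol to a next state and an output string, and the machine reads the word once, left to right. The example transduction, which doubles every letter, is one-way definable; by contrast, a classic transduction such as word reversal requires a two-way transducer (read to the end, then emit while moving left) and cannot be computed in a single left-to-right pass.

```python
def run_one_way(transitions, start, word):
    """Run a deterministic one-way transducer on `word`,
    concatenating the output produced by each transition."""
    state, out = start, []
    for ch in word:
        state, produced = transitions[(state, ch)]
        out.append(produced)
    return "".join(out)

# Example over the alphabet {a, b}: double every letter.
# A single state suffices; each transition outputs the symbol twice.
doubling = {("q0", c): ("q0", c + c) for c in "ab"}

print(run_one_way(doubling, "q0", "abba"))  # aabbbbaa
```

The one-way definability question from the talk asks, given a two-way transducer, whether some machine of this restricted single-pass shape computes the same transduction.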
Former undergraduate student and current PhD student Pablo Muñoz, under the supervision of Pablo Barceló, obtained the "Vienna Center for Logic and Algorithms Outstanding Undergraduate Research Award". This award is given by one of the most important institutions in Computer Science in Europe, and Pablo has been invited to present his research work at the center this year.
The Council of Professors and Heads of Computing (CPHC), in conjunction with the British Computer Society (BCS) and the BCS Academy of Computing, has selected Dr. Juan Reutter's dissertation as the winner of the BCS Distinguished Dissertation Award, which annually selects for publication the best British PhD/DPhil dissertation in computer science.
The CIWS Millennium Nucleus researcher was featured in Beauchef Magazine for the initiatives he leads to promote computational thinking. You can read the article here: Descomponer Problemas para Construir el Futuro
The Chinese television network CCTV America produced a report on how Chile confronts earthquakes. Bárbara Poblete and Jazmine Maldonado were featured in the report for their system that detects earthquakes through Twitter. You can watch the report at the following link: Chile leading the way to decrease earthquake devastation