Since its creation, the Web has been a central object of research in information management, studied primarily through classical paradigms. Since the early 2000s, however, we have been witnessing drastic changes in the area of Web data management. If we had to summarize them in one sentence, it would be: real distribution of big data.
In this new scenario, capturing the meaning of heterogeneous data and developing tools for processing it play a crucial role. The Semantic Web is an enormous initiative led by the World Wide Web Consortium whose main objective is to achieve these goals, thus transforming the current Web of documents into a Web of data, where human users and computer applications can take better advantage of the massive amount of information stored in it. Some key steps have been taken toward these goals. However, we are still far from having techniques that take full advantage of the semantics and the logic behind Web data once its structure, scale, and distribution are considered together as a full-fledged phenomenon.
The main goal of the Center for Semantic Web Research is to study how to effectively extract semantic data from the Web, and to develop the basic tools for such extraction. The initiative brings together professors, researchers, and students from the Pontifical Catholic University of Chile, the University of Chile, and the University of Talca, and is funded by the Iniciativa Científica Milenio.
The peer-review system has been used in its current form for roughly one century. It has recently started to show its limits, from systemic failures such as the lack of reproducibility of scientific publications in various fields (economics, psychology, etc.), to isolated personal failures such as scientific fraud, and more generally to dangerous trends in the evaluation of academic careers, departments, and universities, which create long-term incentives potentially harmful to human ingenuity and to our ability to progress scientifically. We postulate that such failures are a symptom of the growth of academia, and that the pressure on its members is stretching the current implementation of the peer-review system beyond its limits. Accordingly, we list modern technologies, currently unused, that could support a new implementation of the peer-review process; we discuss the potential costs and advantages of each separately, and which ones can be combined, and how, to form a soft transition to a new, healthier peer-review system. Among these techniques, we focus in particular on the concept of "Bootstrapping Databases", where all agents take turns being author, referee, and reader, and where the quality of one's work as a referee at each tier is evaluated and used as a condition to be allowed to participate as an author and as a reader. Such systems have been tested to correct student essays in "Calibrated Peer Review", and to incentivize the collaborative development of a database of teaching material at the University of Waterloo. We will open the discussion about how to introduce such techniques to manage the peer reviewing of academic research via services such as arXiv and the Cryptology ePrint Archive (http://eprint.iacr.org/).
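As a minimal illustration of the gating rule behind such a bootstrapped system, the following Python sketch shows how authoring rights could be conditioned on one's track record as a referee. All names, scores, and thresholds here are our own illustrative assumptions, not part of any deployed system:

    # Hypothetical sketch of the gating rule in a bootstrapped peer-review
    # system. Scores and thresholds are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Participant:
        name: str
        # Quality of each past review, in [0, 1] (e.g., agreement with consensus).
        review_scores: list = field(default_factory=list)

        @property
        def referee_score(self) -> float:
            """Average quality of this participant's past reviews."""
            if not self.review_scores:
                return 0.0
            return sum(self.review_scores) / len(self.review_scores)

    def may_submit(p: Participant, threshold: float = 0.6, min_reviews: int = 3) -> bool:
        """A participant earns authoring rights by first refereeing well."""
        return len(p.review_scores) >= min_reviews and p.referee_score >= threshold

    # Example: Alice has refereed well enough to submit; Bob has not yet.
    alice = Participant("Alice", [0.9, 0.7, 0.8])
    bob = Participant("Bob", [0.4])
    print(may_submit(alice))  # True
    print(may_submit(bob))    # False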
For data-intensive analytic challenges, memory bandwidth, not processor speed, is the primary performance limitation. Graphics processing units (GPUs) provide superior bandwidth compared to main memory and can deliver significant speedups over CPUs. However, developing GPU-accelerated graph algorithms is not trivial: scaling applications onto such massively parallel architectures requires significant expertise, including intimate knowledge of the CPU and GPU memory systems, and detailed knowledge of a GPU programming framework such as OpenCL or CUDA. To enable analytics experts to implement complex graph applications that run efficiently on GPUs, we have developed a domain-specific language called DASL and a corresponding execution system that runs DASL programs by automatically translating them into program code optimized for GPUs. In the talk I will introduce DASL through a few simple examples, provide a technical overview of Blazegraph DASL, and discuss some aspects of the approach in more depth, including the translation of DASL statements and the integration of user-defined functions.
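As a rough illustration of the kind of program such a system targets, here is a hedged Python/NumPy sketch of a level-synchronous breadth-first search expressed as bulk array operations, the data-parallel shape that a DSL compiler can map onto GPU kernels. This is not DASL syntax (which the talk introduces); the function and variable names are our own:

    # Illustrative sketch: BFS as bulk array operations, the data-parallel
    # formulation a graph DSL can compile to GPU kernels. Plain NumPy stands
    # in for GPU arrays; names here are our own, not DASL's.
    import numpy as np

    def bfs_levels(indptr, indices, n, source):
        """Graph in CSR form (indptr, indices); returns each vertex's BFS level."""
        level = np.full(n, -1, dtype=np.int64)
        level[source] = 0
        frontier = np.array([source], dtype=np.int64)
        depth = 0
        while frontier.size > 0:
            # Gather all neighbors of the current frontier in one bulk step.
            starts, ends = indptr[frontier], indptr[frontier + 1]
            neighbors = np.concatenate([indices[s:e] for s, e in zip(starts, ends)])
            # Keep only unvisited vertices; this filter is one parallel primitive.
            frontier = np.unique(neighbors[level[neighbors] == -1])
            depth += 1
            level[frontier] = depth
        return level

    # Tiny example: the path graph 0 - 1 - 2.
    indptr = np.array([0, 1, 3, 4])
    indices = np.array([1, 0, 2, 1])
    print(bfs_levels(indptr, indices, 3, source=0))  # [0 1 2]

Each iteration touches the whole frontier with a handful of bulk primitives (gather, filter, deduplicate), which is why memory bandwidth, rather than processor speed, tends to dominate performance.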
Pablo Muñoz, a former undergraduate student and current PhD student supervised by Pablo Barceló, has obtained the "Vienna Center for Logic and Algorithms Outstanding Undergraduate Research Award". This award is given by one of the most important institutions in computer science in Europe, and Pablo has been invited to present his research work at the center this year.
The Council of Professors and Heads of Computing (CPHC), in conjunction with the British Computer Society (BCS) and the BCS Academy of Computing, has selected Dr. Juan Reutter’s dissertation as the winner of the BCS Distinguished Dissertation Award, which annually selects for publication the best British PhD/DPhil dissertation in computer science.
Barbara Poblete, researcher at the Núcleo Milenio CIWS, commented in the newspaper El Mercurio on the work of Australian researchers that scientifically shows that the number of messages or photos people send is directly related to the intensity of the damage that hurricanes or earthquakes can cause. To read the news item, click on the following link.
Barbara Poblete, CIWS researcher and academic at the Department of Computer Science of the Universidad de Chile, was featured in the newspaper El Mercurio for her work with the Centro Sismológico Nacional (CSN). The two institutions are working on a Web platform that uses Twitter to detect the intensity and epicenter of an earthquake. To read the news item, click on the following link: goo.gl/fYcosO