480 research outputs found

    Analysis of the Web Graph Aggregated by Host and Pay-Level Domain

    In this paper the web is analyzed as a graph aggregated by host and by pay-level domain (PLD). The publicly available web graph datasets were released by the Common Crawl Foundation and are based on a web crawl performed in May, June, and July 2017. The host graph has ~1.3 billion nodes and ~5.3 billion arcs. The PLD graph has ~91 million nodes and ~1.1 billion arcs. We study the distributions of degree and of the sizes of strongly/weakly connected components (SCC/WCC), focusing on power-law detection with statistical methods. The statistical plausibility of the power-law model is compared with that of several alternative distributions. While there is no evidence of power-law tails at the host level, they emerge under PLD aggregation for the indegree, SCC-size, and WCC-size distributions. Finally, we analyze distance-related features by studying the cumulative distributions of shortest-path lengths, and give an estimate of the diameters of the graphs.
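    Below is a minimal sketch, not the authors' code, of the kind of tail analysis the abstract describes: a maximum-likelihood power-law fit and a likelihood-ratio comparison against an alternative heavy-tailed model, using the open-source Python `powerlaw` package. The in-degree values are placeholders; in practice they would be loaded from the Common Crawl graph data.

```python
# Sketch of power-law tail detection with model comparison (illustrative only).
import powerlaw

# Placeholder for the observed in-degree sequence of the PLD graph.
indegrees = [1, 1, 2, 3, 5, 8, 40, 1000]  # illustrative values only

# Maximum-likelihood fit; xmin is chosen by minimizing the KS distance.
fit = powerlaw.Fit(indegrees, discrete=True)
print(f"alpha = {fit.power_law.alpha:.2f}, xmin = {fit.power_law.xmin}")

# Likelihood-ratio test against an alternative heavy-tailed distribution:
# R > 0 favors the power law; p indicates whether the sign of R is significant.
R, p = fit.distribution_compare("power_law", "lognormal", normalized_ratio=True)
print(f"power law vs lognormal: R = {R:.2f}, p = {p:.3f}")
```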

    CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user-data field of cloud APIs, when running CernVM in a cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user can define, store, and share CernVM contexts using CernVM Online and then apply them either in a cloud, by using CernVM Cloud Gateway, or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to multiple, different clouds (by location or type, private or public). The Cloud Gateway has so far been integrated with the OpenNebula, CloudStack, and EC2 tools interfaces. A user with access to a number of clouds can run CernVM cloud agents that communicate with these clouds through their interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other. Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
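    As an illustration of the cloud path described above, here is a minimal sketch, not CernVM Online or Cloud Gateway code, of passing a handwritten context through the user-data field of a cloud API via boto3/EC2. The region, image id, instance type, and the context body itself are illustrative assumptions, not official CernVM templates.

```python
# Sketch: contextualizing a VM by supplying user data at launch (assumptions noted below).
import boto3

context = """\
[cernvm]
organisations=MyExperiment
repositories=myexp.cern.ch
users=analyst:analysts:secret
"""  # hypothetical amiconfig-style context, not an official CernVM template

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is an assumption
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder CernVM image id
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    UserData=context,                 # the contextualization payload
)
```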

    Micro-CernVM: Slashing the Cost of Building and Deploying Virtual Machines

    The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot. Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
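    As a sketch of the snapshot mechanism mentioned in the last sentence: the CernVM-FS client can be pinned to a named repository snapshot through its client configuration. The fragment below assumes the client honors the CVMFS_REPOSITORY_TAG parameter and that `cvmfs_config reload` applies it; the repository name and tag are illustrative, and root privileges are required.

```python
# Sketch: pinning a CernVM-FS client to a named snapshot (assumptions noted below).
import subprocess
from pathlib import Path

REPO = "cernvm-prod.cern.ch"        # illustrative repository name
SNAPSHOT = "cernvm-system-3.1.1.2"  # hypothetical named snapshot (tag)

# Assumes the client honors CVMFS_REPOSITORY_TAG in its local configuration.
conf = Path(f"/etc/cvmfs/config.d/{REPO}.local")
conf.write_text(f'CVMFS_REPOSITORY_TAG="{SNAPSHOT}"\n')

# Ask the client to pick up the new configuration for this repository.
subprocess.run(["cvmfs_config", "reload", REPO], check=True)
```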

    Opportunities for Nuclear Astrophysics at FRANZ

    The "Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum" (FRANZ), which is currently under development, will be the strongest neutron source in the astrophysically interesting energy region in the world. It will be about three orders of magnitude more intense than the well-established neutron source at the Research Center Karlsruhe (FZK)

    Verification and Validation of Semantic Annotations

    In this paper, we propose a framework to perform verification and validation of semantically annotated data. The annotations, extracted from websites, are verified against the schema.org vocabulary and against Domain Specifications to ensure the syntactic correctness and completeness of the annotations. The Domain Specifications allow checking the compliance of annotations against corresponding domain-specific constraints. The validation mechanism detects errors and inconsistencies between the content of the analyzed schema.org annotations and the content of the web pages where the annotations were found. Comment: Accepted for the A.P. Ershov Informatics Conference 2019 (the PSI Conference Series, 12th edition) proceedings
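    A minimal sketch, not the authors' framework, of the verification step described above: extract schema.org JSON-LD annotations from a web page and check them against a hand-written, domain-specific list of required properties. The URL and the "Hotel" domain specification are illustrative assumptions.

```python
# Sketch: verify schema.org JSON-LD annotations against a simple domain specification.
import json

import requests
from bs4 import BeautifulSoup

# Hypothetical domain specification: required properties per schema.org type.
DOMAIN_SPEC = {"Hotel": {"name", "address", "telephone"}}

html = requests.get("https://example.org/some-hotel", timeout=10).text  # placeholder URL
soup = BeautifulSoup(html, "html.parser")

for tag in soup.find_all("script", type="application/ld+json"):
    try:
        annotation = json.loads(tag.string or "")
    except json.JSONDecodeError:
        print("syntactic error: malformed JSON-LD block")
        continue
    # Completeness check: every property the domain specification requires must be present.
    required = DOMAIN_SPEC.get(annotation.get("@type", ""), set())
    missing = required - annotation.keys()
    if missing:
        print(f"incomplete annotation: missing {sorted(missing)}")
```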

    Prototype Testing of the Frankfurt Gabor Lens at HOSTI


    The s Process: Nuclear Physics, Stellar Models, Observations

    Nucleosynthesis in the s process takes place in the He-burning layers of low-mass AGB stars and during the He- and C-burning phases of massive stars. The s process contributes about half of the element abundances between Cu and Bi in solar system material. Depending on stellar mass and metallicity, the resulting s-abundance patterns exhibit characteristic features, which provide comprehensive information for our understanding of the stellar life cycle and of the chemical evolution of galaxies. The rapidly growing body of detailed abundance observations, in particular for AGB and post-AGB stars, for objects in binary systems, and for the very faint metal-poor population, represents exciting challenges and constraints for stellar model calculations. Based on updated and improved nuclear physics data for the s-process reaction network, current models aim at an ab initio solution for the stellar physics related to convection and mixing processes. Progress in the intimately related areas of observations, nuclear and atomic physics, and stellar modeling is reviewed, and the corresponding interplay is illustrated by the general abundance patterns of the elements beyond iron and by the effect of sensitive branching points along the s-process path. The strong variation of the s-process efficiency with metallicity also bears interesting consequences for Galactic chemical evolution. Comment: 53 pages, 20 figures, 3 tables; Reviews of Modern Physics, accepted
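    For readers unfamiliar with the branching points mentioned above, the standard textbook relation (not a formula quoted from this review) is: at an unstable branch-point nucleus, neutron capture competes with beta decay, and the neutron-capture branching factor constrains the conditions at the s-process site.

```latex
% Standard s-process branching-point relation (textbook notation, not a
% formula quoted from this review): at a branch-point nucleus, neutron
% capture (rate \lambda_n) competes with beta decay (rate \lambda_\beta).
\[
  f_n = \frac{\lambda_n}{\lambda_n + \lambda_\beta},
  \qquad
  \lambda_n = n_n \, \langle \sigma v \rangle ,
\]
% The measured branching thus constrains the stellar neutron density n_n
% (and, through the temperature dependence of \lambda_\beta, the
% temperature) at the s-process site.
```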

    Data Integration for Open Data on the Web

    In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to the different data formats prevalent on the Web, namely standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how, or whether, OWL and RDFS reasoning on top of integrated open data could help.
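    As a small illustration of the first point, lifting tabular data into RDF so it can be integrated with graph data, here is a sketch using the Python rdflib library. The CSV layout, namespace URI, and predicate names are assumptions made for the example.

```python
# Sketch: lifting a small CSV table into RDF triples (layout and vocabulary assumed).
import csv
import io

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/ns#")  # hypothetical vocabulary
g = Graph()

# Stand-in for a CSV file published on an Open Data portal.
table = io.StringIO("id,name,population\n1,Vienna,1900000\n2,Graz,290000\n")

for row in csv.DictReader(table):
    city = URIRef(f"http://example.org/city/{row['id']}")
    g.add((city, RDF.type, EX.City))
    g.add((city, EX.name, Literal(row["name"])))
    g.add((city, EX.population, Literal(int(row["population"]))))

# Serialize to Turtle so the table can be merged with other RDF graph data.
print(g.serialize(format="turtle"))
```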