132 research outputs found

    Re-Source, a real-time archive for publication and production

    Lafayette Anticipation is a contemporary art foundation focused on the production of works and the support of artists. The Lafayette ReSource project, a real-time semantic archive, is intended to track all of the activities that contribute to bringing a work into being. It thereby aims to make the foundation's internal work easier to understand, to give the public handholds for appreciating works as they unfold (art in the making rather than art already made), and at the same time to offer artists new material for creation.

    The future of the web through the lens of the resource

    Chapter 10: The future of the web through the lens of the resource. More and more often, the web comes between us and the world. The web of documents and data augments our perception of reality while, at the same time, the web of applications and services extends our hold over it by multiplying the tasks we can accomplish. Having become indispensable in our daily activities, it is also hard to manage. On the web, a resource can be anything and, as the network grows, everything in our environment is liable to turn into a resource. We now speak of reality being "augmented" by the web, but as the web spreads it is worth noting how much reality augments the web in return, given the quantity and diversity of resources identified within it. This question was already latent in the "documentary web", the label under which it was readily presented in the 1990s. The documentary model initially used to account for it, in which each HTML page accessed was a document downloaded by a client program (the browser) from a server program (the web server), quickly proved far too narrow to cover special cases such as dynamic pages, content negotiation, or specific applications such as a directory. The computational aspect, which seemed to be an exception to the metaphor of the web as a "universal library", ultimately became the rule. Moreover, the semantic web and its metadata, which can potentially describe anything identifiable, raised new questions that can be summed up as follows: how can things residing outside the web be identified on the web? The REST architectural style suggests part of the answer, present since the web's birth, by providing an a posteriori reading grid that defines it as a "resource-centred (or resource-oriented) application". Recent evolutions of the web (AJAX, HTML5, Linked Data, etc.) and those envisioned for the future (web of things, ubiquitous web, etc.) call for a fresh look at its architecture. On examination, the resource may well be the unifying concept behind all its facets and the cornerstone that accounts for its architectural coherence. We therefore need to redefine it as closely as possible to its evolution in the standards, over time and through usage. On this precise definition and characterization depends our ability to take full advantage of resources by identifying, publishing, searching, filtering, combining, adapting, and describing them.

    Repeated injections of 131I-rituximab show patient-specific stable biodistribution and tissue kinetics

    Purpose: It is generally assumed that the biodistribution and pharmacokinetics of radiolabelled antibodies remain similar between dosimetric and therapeutic injections in radioimmunotherapy. However, circulation half-lives of unlabelled rituximab have been reported to increase progressively after the weekly injections of standard therapy doses. The aim of this study was to evaluate the evolution of the pharmacokinetics of repeated 131I-rituximab injections during treatment with unlabelled rituximab in patients with non-Hodgkin's lymphoma (NHL). Methods: Patients received standard weekly therapy with rituximab (375 mg/m2) for 4 weeks and a fifth injection at 7 or 8 weeks. Each patient had three additional injections of 185 MBq 131I-rituximab in either treatment weeks 1, 3 and 7 (two patients) or weeks 2, 4 and 8 (two patients). The 12 radiolabelled antibody injections were followed by three whole-body (WB) scintigraphic studies during 1 week and blood sampling on the same occasions. Additional WB scans were performed 2 and 4 weeks after 131I-rituximab injection, prior to the second and third injections, respectively. Results: A single-exponential radioactivity decrease for WB, liver, spleen, kidneys and heart was observed. Biodistribution and half-lives were patient specific, without significant change after the second or third injection compared with the first one. Blood T1/2β, calculated from the sequential blood samples and fitted to a bi-exponential curve, was similar to the T1/2 of heart and liver but shorter than that of WB and kidneys. The effective radiation dose calculated from attenuation-corrected WB scans and blood using Mirdose3.1 was 0.53 ± 0.05 mSv/MBq (range 0.48-0.59 mSv/MBq). The radiation dose was highest for spleen and kidneys, followed by heart and liver. Conclusion: These results show that the biodistribution and tissue kinetics of 131I-rituximab, while specific to each patient, remained constant during unlabelled antibody therapy. RIT radiation doses can therefore be reliably extrapolated from a preceding dosimetry study.
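    As a generic illustration of the kind of curve fitting behind the tissue half-lives quoted above, the sketch below fits a mono-exponential clearance model to whole-body activity measurements. The time points, activity values, and initial guesses are made up for illustration and are not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def mono_exp(t, a0, lam):
            # Mono-exponential clearance: activity remaining at time t (days).
            return a0 * np.exp(-lam * t)

        # Hypothetical whole-body measurements (% of injected activity) at the
        # scan time points; real values would come from attenuation-corrected WB scans.
        t_days = np.array([0.2, 1.0, 3.0, 7.0])
        wb_activity = np.array([100.0, 82.0, 55.0, 26.0])

        (a0, lam), _ = curve_fit(mono_exp, t_days, wb_activity, p0=(100.0, 0.2))
        t_half_eff = np.log(2) / lam  # effective whole-body half-life, in days
        print(f"Effective half-life ~ {t_half_eff:.1f} days")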

    Treat Different Negatives Differently: Enriching Loss Functions with Domain and Range Constraints for Link Prediction

    Knowledge graph embedding models (KGEMs) are used for various tasks related to knowledge graphs (KGs), including link prediction. They are trained with loss functions that are computed over a batch of scored triples and their corresponding labels. Traditional approaches consider the label of a triple to be either true or false. However, recent works suggest that not all negative triples should be valued equally. In line with this assumption, we posit that negative triples that are semantically valid w.r.t. domain and range constraints might be high-quality negatives, and that loss functions should therefore treat them differently from semantically invalid ones. To this end, we propose semantic-driven versions of the three main loss functions for link prediction. In an extensive and controlled experimental setting, we show that the proposed loss functions systematically provide satisfying results on three public benchmark KGs underpinned by different schemas, which demonstrates both the generality and the superiority of our approach. In particular, the proposed loss functions (1) lead to better MRR and Hits@10 values and (2) drive KGEMs towards better semantic awareness as measured by the Sem@K metric. This highlights that semantic information globally improves KGEMs and should thus be incorporated into loss functions. Since domains and ranges of relations are largely available in schema-defined KGs, our approach is both beneficial and widely usable in practice.
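    The abstract does not spell out the exact formulations; as a rough sketch of the underlying idea, the snippet below adapts a standard pairwise hinge loss so that negatives satisfying domain/range constraints are pushed away with a smaller margin than semantically invalid ones. Function names, margin values, and the convention that higher scores mean more plausible triples are assumptions, not the paper's definitions.

        import torch

        def semantic_margin_loss(pos_scores, neg_scores, sem_valid_mask,
                                 margin=1.0, sem_margin=0.5):
            # Pick a per-negative margin depending on its semantic validity:
            # semantically valid negatives (mask == True) get the smaller margin.
            margins = torch.where(sem_valid_mask,
                                  torch.full_like(neg_scores, sem_margin),
                                  torch.full_like(neg_scores, margin))
            # Pairwise hinge: each positive should outscore its negative by the chosen margin.
            return torch.relu(margins - pos_scores + neg_scores).mean()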

    Sem@K: Is my knowledge graph embedding model semantic-aware?

    Using knowledge graph embedding models (KGEMs) is a popular approach for predicting links in knowledge graphs (KGs). Traditionally, the performance of KGEMs for link prediction is assessed with rank-based metrics, which evaluate their ability to give high scores to ground-truth entities. However, the literature argues that the KGEM evaluation procedure would benefit from assessing supplementary dimensions. That is why, in this paper, we extend our previously introduced metric Sem@K, which measures the capability of models to predict valid entities w.r.t. domain and range constraints. In particular, we consider a broad range of KGs and take their respective characteristics into account to propose different versions of Sem@K. We also perform an extensive study to qualify the abilities of KGEMs as measured by our metric. Our experiments show that Sem@K provides a new perspective on KGEM quality: its joint analysis with rank-based metrics offers different conclusions on the predictive power of models. Regarding Sem@K, some KGEMs are inherently better than others, but this semantic superiority is not indicative of their performance w.r.t. rank-based metrics. In this work, we generalize conclusions about the relative performance of KGEMs w.r.t. rank-based and semantic-oriented metrics at the level of families of models. The joint analysis of the aforementioned metrics gives more insight into the peculiarities of each model. This work paves the way for a more comprehensive evaluation of KGEM adequacy for specific downstream tasks.
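    As an illustration of what the metric captures, the sketch below computes a Sem@K-style score for a single test triple: the share of the top-k ranked candidate entities whose type matches the relation's range (tail prediction) or domain (head prediction). The helper mappings are hypothetical stand-ins for schema lookups, and the published metric may handle details such as untyped entities or class hierarchies differently.

        def sem_at_k(ranked_entities, relation, k, domain_of, range_of, class_of,
                     position="tail"):
            # Expected class comes from the relation's range (tail) or domain (head).
            expected = range_of[relation] if position == "tail" else domain_of[relation]
            top_k = ranked_entities[:k]
            # Count candidates whose declared classes include the expected one.
            valid = sum(1 for e in top_k if expected in class_of.get(e, set()))
            return valid / k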

    PyGraft: Configurable Generation of Synthetic Schemas and Knowledge Graphs at Your Fingertips

    Knowledge graphs (KGs) have emerged as a prominent data representation and management paradigm. Being usually underpinned by a schema (e.g., an ontology), KGs capture not only factual information but also contextual knowledge. In some tasks, a few KGs have established themselves as standard benchmarks. However, recent works point out that relying on a limited collection of datasets is not sufficient to assess the generalization capability of an approach. In some data-sensitive fields such as education or medicine, access to public datasets is even more limited. To remedy these issues, we release PyGraft, a Python-based tool that generates highly customized, domain-agnostic schemas and KGs. The synthesized schemas encompass various RDFS and OWL constructs, while the synthesized KGs emulate the characteristics and scale of real-world KGs. Logical consistency of the generated resources is ultimately ensured by running a description logic (DL) reasoner. By providing a way of generating both a schema and a KG in a single pipeline, PyGraft aims to support the generation of a more diverse array of KGs for benchmarking novel approaches in areas such as graph-based machine learning (ML) or, more generally, KG processing. In graph-based ML in particular, this should foster a more holistic evaluation of model performance and generalization capability, thereby going beyond the limited collection of available benchmarks. PyGraft is available at: https://github.com/nicolas-hbt/pygraft
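    PyGraft's actual interface is documented in the linked repository; purely as a conceptual sketch of what a schema-then-KG pipeline involves (every name below is hypothetical and not PyGraft's API), generation can be pictured as drawing relations with domain/range constraints and then instantiating triples that respect them. A real tool additionally checks logical consistency with a DL reasoner, as the abstract notes.

        import random

        def generate_schema(n_classes=5, n_relations=4):
            # Toy schema: each relation gets a random domain and range class.
            classes = [f"Class{i}" for i in range(n_classes)]
            relations = {f"rel{i}": (random.choice(classes), random.choice(classes))
                         for i in range(n_relations)}
            return classes, relations

        def generate_kg(classes, relations, n_entities=50, n_triples=200):
            # Type each entity, then only emit triples whose head/tail types
            # match the relation's domain/range (keeping the KG schema-consistent).
            typing = {f"e{i}": random.choice(classes) for i in range(n_entities)}
            by_class = {c: [e for e, t in typing.items() if t == c] for c in classes}
            triples = set()
            for _ in range(20 * n_triples):  # bounded number of attempts
                if len(triples) >= n_triples:
                    break
                rel, (dom, rng) = random.choice(list(relations.items()))
                if by_class[dom] and by_class[rng]:
                    triples.add((random.choice(by_class[dom]), rel,
                                 random.choice(by_class[rng])))
            return typing, triples

        classes, relations = generate_schema()
        typing, triples = generate_kg(classes, relations)
        print(len(triples), "triples generated")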

    Schema First! Learn Versatile Knowledge Graph Embeddings by Capturing Semantics with MASCHInE

    Knowledge graph embedding models (KGEMs) have gained considerable traction in recent years. These models learn a vector representation of knowledge graph entities and relations, a.k.a. knowledge graph embeddings (KGEs). Learning versatile KGEs is desirable as it makes them useful for a broad range of tasks. However, KGEMs are usually trained for a specific task, which makes their embeddings task-dependent. In parallel, the widespread assumption that KGEMs actually create a semantic representation of the underlying entities and relations (e.g., project similar entities closer than dissimilar ones) has been challenged. In this work, we design heuristics for generating protographs -- small, modified versions of a KG that leverage schema-based information. The learnt protograph-based embeddings are meant to encapsulate the semantics of a KG, and can be leveraged in learning KGEs that, in turn, also better capture semantics. Extensive experiments on various evaluation benchmarks demonstrate the soundness of this approach, which we call Modular and Agnostic SCHema-based Integration of protograph Embeddings (MASCHInE). In particular, MASCHInE helps produce more versatile KGEs that yield substantially better performance for entity clustering and node classification tasks. For link prediction, using MASCHInE has little impact on rank-based performance but increases the number of semantically valid predictions.
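    The paper's exact heuristics are not given in the abstract; one plausible reading of a protograph, shown below as an assumption rather than MASCHInE's published construction, lifts each factual triple to class-level triples using the entities' declared classes. Embeddings trained on such a small class-level graph could then be used to initialize or enrich the entity embeddings of the full KG.

        def build_protograph(triples, class_of):
            # Lift (head, relation, tail) to (head_class, relation, tail_class)
            # for every declared class of the head and tail entities.
            proto = set()
            for h, r, t in triples:
                for ch in class_of.get(h, {"owl:Thing"}):
                    for ct in class_of.get(t, {"owl:Thing"}):
                        proto.add((ch, r, ct))
            return proto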

    CoReWeb: From Linked Documentary Resources to Linked Computational Resources

    urn:nbn:de:0074-859-9, http://ceur-ws.org/Vol-859/paper6.pdf
    The naive documentary model behind the Web (a single HTML Web page retrieved by a client from a server) soon appeared too narrow to account for dynamic pages, content negotiation, Web applications, etc. The Semantic Web raised another issue: how could we refer to things outside of the Web? Roy Fielding's REST style of architecture solved both problems by providing the Web with its post-hoc "theory", making it a resource-oriented application. Recent evolutions (AJAX, HTML5, Linked Data, etc.) and envisioned evolutions (Web of devices, ubiquitous Web, etc.) require a new take on this style of architecture. At the core of the Web architecture, acting as a unifying concept beneath all its facets, we find the notion of resource. The introduction of resources was very much needed for the Web to remain coherent; we now have to thoroughly redefine them to espouse its evolutions through time and usages. On the definition and characterization of resources depends our ability to efficiently leverage them: identify, publish, find, filter, combine, and customize them, augment their affordance, etc.
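    As a minimal, self-contained illustration of the resource/representation distinction and the content negotiation mentioned above (the URL is a placeholder, not from the paper): a single URI identifies the resource, while the Accept header selects which representation the server returns.

        import requests

        url = "https://example.org/people/alice"  # placeholder resource URI

        # Same resource, two negotiated representations.
        html = requests.get(url, headers={"Accept": "text/html"})
        rdf = requests.get(url, headers={"Accept": "application/rdf+xml"})

        print(html.headers.get("Content-Type"))  # e.g. text/html
        print(rdf.headers.get("Content-Type"))   # e.g. application/rdf+xml, if offered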

    Mitochondrial DNA parameters in blood of infants receiving lopinavir/ritonavir or lamivudine prophylaxis to prevent breastfeeding transmission of HIV-1

    Children who are human immunodeficiency virus (HIV)-exposed but uninfected (CHEU) accumulate maternal HIV and antiretroviral exposures through pregnancy, postnatal prophylaxis, and breastfeeding. Here, we compared the dynamics of mitochondrial DNA (mtDNA) parameters in African breastfed CHEU receiving lopinavir/ritonavir (LPV/r) or lamivudine (3TC) pre-exposure prophylaxis during the first year of life. The number of mtDNA copies per cell (MCN) and the proportion of deleted mtDNA (MDD) were assessed at day 7 and at week 50 post-delivery (PrEP group). mtDNA depletion was defined as a 50% or more decrease from the initial value, and mtDNA deletion was defined as the detection of mtDNA molecules with large DNA fragment loss. We also performed a sub-analysis with CHEU who did not receive a prophylactic treatment in South Africa (control group). From day 7 to week 50, MCN decreased by a median of 41.7% (interquartile range, IQR: 12.1; 64.4) in the PrEP group. The proportion of children with mtDNA depletion was not significantly different between the two prophylactic regimens. Poisson regressions showed that LPV/r and 3TC were associated with mtDNA depletion (reference: control group; LPV/r: PR = 1.75 (CI95%: 1.15–2.68), p < 0.01; 3TC: PR = 1.54 (CI95%: 1.00–2.37), p = 0.05). Moreover, the proportion of children with MDD was unexpectedly high before randomisation in both groups. Long-term health impacts of these mitochondrial DNA parameters should be investigated further for both CHEU and HIV-infected children receiving LPV/r- or 3TC-based regimens.