1,106 research outputs found

    Structured lexical similarity via convolution kernels on dependency trees

    A central topic in natural language processing is the design of lexical and syntactic features suitable for the target application. In this paper, we study convolution dependency tree kernels for automatic engineering of syntactic and semantic patterns exploiting lexical similarities. We define efficient and powerful kernels for measuring the similarity between dependency structures whose lexical nodes have partly or completely different surface forms. The experiments with such kernels for question classification show unprecedented results, e.g. 41% error reduction over the former state of the art. Additionally, semantic role classification confirms the benefit of semantic smoothing for dependency kernels.
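    As a rough illustration of the idea behind such kernels (a toy sketch, not the paper's formulation): a lexically smoothed tree kernel lets two dependency structures match even when their words differ, by replacing exact lexical matching with a graded similarity.

```python
# Toy sketch of a lexically smoothed dependency-tree kernel.  Node pairs
# contribute a graded lexical similarity instead of a 0/1 match, and
# matching substructure is scored recursively with a decay factor.  The
# similarity table, decay value, and greedy child matching are all
# illustrative assumptions, not the paper's actual kernel.

LAMBDA = 0.5  # decay: deeper matched fragments contribute less

# Hypothetical lexical-similarity lookup (in practice derived from a
# thesaurus or distributional similarity).
LEX_SIM = {("buy", "purchase"): 0.9, ("car", "automobile"): 0.8}

def lex_sim(a, b):
    if a == b:
        return 1.0
    return LEX_SIM.get((a, b), LEX_SIM.get((b, a), 0.0))

def tree_kernel(t1, t2):
    """Similarity of two trees given as (word, [children])."""
    (w1, kids1), (w2, kids2) = t1, t2
    s = lex_sim(w1, w2)
    if s == 0.0:
        return 0.0
    # Best pairwise match over children; real convolution kernels sum
    # over all common fragments instead of matching greedily.
    child_score = sum(
        max((tree_kernel(c1, c2) for c2 in kids2), default=0.0)
        for c1 in kids1
    )
    return s * (1.0 + LAMBDA * child_score)

t_a = ("buy", [("car", [])])
t_b = ("purchase", [("automobile", [])])
print(round(tree_kernel(t_a, t_b), 2))  # 1.26
```

    Note how the two trees score highly despite sharing no surface forms, which is exactly what a hard-matching kernel would miss.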

    A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project ‘Seismic HAzard haRmonization in Europe’ (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model for long-term forecasting on timescales of years to decades for the European region.
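    The core spatial recipe described above can be sketched in a few lines: mix a seismicity-based kernel density with a fault-based density and pick the weight by a retrospective likelihood test. All data, bandwidths, and the weight grid below are synthetic placeholders, not the SHARE model's actual inputs.

```python
# Sketch of a weighted two-density spatial forecast: combine a kernel
# density over past epicentres with one over fault points, and choose
# the mixing weight that maximizes the likelihood of held-out events.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
past_quakes = rng.normal(0.0, 1.0, size=(2, 200))   # catalogue epicentres (x, y)
fault_points = rng.normal(2.0, 0.5, size=(2, 200))  # points sampled along faults
test_events = rng.normal(0.5, 1.0, size=(2, 50))    # held-out "future" events

d_quake = gaussian_kde(past_quakes)   # bandwidth here from Scott's rule
d_fault = gaussian_kde(fault_points)

def log_likelihood(w):
    """Joint log-likelihood of the test events under the mixed density."""
    mix = w * d_quake(test_events) + (1.0 - w) * d_fault(test_events)
    return float(np.sum(np.log(mix)))

weights = np.linspace(0.0, 1.0, 11)
best = max(weights, key=log_likelihood)
print(f"optimal weight on the seismicity density: {best:.1f}")
```

    In the real model the bandwidths are optimized jointly with the weight, and the weight is allowed to depend on the magnitude range; this sketch fixes both for brevity.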

    Fourier transform for quantum D-modules via the punctured torus mapping class group

    We construct a certain cross product of two copies of the braided dual $\tilde H$ of a quasitriangular Hopf algebra $H$, which we call the elliptic double $E_H$, and which we use to construct representations of the punctured elliptic braid group extending the well-known representations of the planar braid group attached to $H$. We show that the elliptic double is the universal source of such representations. We recover the representations of the punctured torus braid group obtained in arXiv:0805.2766, and hence construct a homomorphism to the Heisenberg double $D_H$, which is an isomorphism if $H$ is factorizable. The universal property of $E_H$ endows it with an action by algebra automorphisms of the mapping class group $\widetilde{SL_2(\mathbb{Z})}$ of the punctured torus. One such automorphism we call the quantum Fourier transform; we show that when $H = U_q(\mathfrak{g})$, the quantum Fourier transform degenerates to the classical Fourier transform on $D(\mathfrak{g})$ as $q \to 1$.
    Comment: 12 pages, 1 figure. Final version, to appear in Quantum Topology

    Platelet isoprostane overproduction in diabetic patients treated with aspirin

    Aspirin modestly influences cardiovascular events in patients with type 2 diabetes mellitus (T2DM), but the reason is unclear. The aim of the study was to determine whether in T2DM patients aspirin enhances platelet isoprostanes, which are eicosanoids with proaggregating properties derived from arachidonic acid oxidation by platelet NOX2, the catalytic subunit of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. A cross-sectional study was performed comparing T2DM patients, treated (n = 50) or not treated (n = 50) with 100 mg/day aspirin, with 100 nondiabetic patients, matched for age, sex, atherosclerosis risk factors, and aspirin treatment. A short-term (7-day) treatment with 100 mg/day aspirin also was performed in 36 aspirin-free diabetic and nondiabetic patients. Higher platelet recruitment, platelet isoprostane levels, and NOX2 activation were found in diabetic versus nondiabetic patients and in aspirin-treated versus untreated diabetic patients (P < 0.001). Platelet thromboxane A2 (TxA2) was inhibited in all aspirin-treated patients (P < 0.001). In the interventional study, aspirin similarly inhibited platelet TxA2 in diabetic and nondiabetic patients (P < 0.001). Platelet recruitment, isoprostane levels, and NOX2 activation showed a parallel increase in diabetic patients (P < 0.001) and no changes in nondiabetic patients. These findings suggest that in aspirin-treated diabetic patients, oxidative stress-mediated platelet isoprostane overproduction is associated with enhanced platelet recruitment, an effect that mitigates aspirin-mediated TxA2 inhibition.

    Risk, Unexpected Uncertainty, and Estimation Uncertainty: Bayesian Learning in Unstable Settings

    Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
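    The three notions of uncertainty can be made concrete for a single Bernoulli bandit arm tracked with a Beta posterior. This is a minimal sketch, not the authors' model; in particular, exponential forgetting toward the prior is an assumed, simple stand-in for coping with unexpected uncertainty (sudden jumps in the arm's payoff probability).

```python
# One Bernoulli arm under a Beta(alpha, beta) posterior:
#   risk             = outcome variance given the current mean estimate
#   estimation unc.  = posterior variance of the payoff probability
#   unexpected unc.  = handled crudely by discounting old evidence

DECAY = 0.95  # forgetting factor: discounts old evidence (assumption)

def update(alpha, beta, reward):
    """One Bayesian update of Beta(alpha, beta) with forgetting."""
    alpha = 1.0 + DECAY * (alpha - 1.0) + reward
    beta = 1.0 + DECAY * (beta - 1.0) + (1 - reward)
    return alpha, beta

def uncertainties(alpha, beta):
    p = alpha / (alpha + beta)          # posterior mean payoff probability
    risk = p * (1.0 - p)                # irreducible Bernoulli outcome variance
    est = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return p, risk, est                 # est = posterior variance of p

a, b = 1.0, 1.0  # uniform prior: maximal estimation uncertainty
for r in [1, 1, 0, 1, 1]:
    a, b = update(a, b, r)

p, risk, est = uncertainties(a, b)
print(f"mean={p:.2f}  risk={risk:.2f}  estimation_unc={est:.3f}")
```

    Observations shrink the estimation uncertainty toward zero while the risk term stays bounded away from zero whenever the estimated payoff probability is not 0 or 1, which is the distinction the abstract draws between reducible and irreducible uncertainty.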

    Some Findings Concerning Requirements in Agile Methodologies

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce the time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases within the frame of conventional methods. However, conclusions drawn for conventional methodologies are not necessarily valid for Agile; on some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and a better understanding of this is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and the management of requirements dependencies.

    Reviewing, indicating, and counting books for modern research evaluation systems

    In this chapter, we focus on the specialists who have helped to improve the conditions for book assessments in research evaluation exercises, with empirically based data and insights supporting their greater integration. Our review highlights the research carried out by four types of expert communities, referred to as the monitors, the subject classifiers, the indexers, and the indicator constructionists. Many challenges lie ahead for scholars affiliated with these communities, particularly the latter three. By acknowledging their unique, yet interrelated roles, we show where the greatest potential is for both quantitative and qualitative indicator advancements in book-inclusive evaluation systems.
    Comment: Forthcoming in Glanzel, W., Moed, H.F., Schmoch, U., Thelwall, M. (2018). Springer Handbook of Science and Technology Indicators. Springer. Some corrections made in subsection 'Publisher prestige or quality'.