Some, and Possibly All, Scalar Inferences Are Not Delayed: Evidence for Immediate Pragmatic Enrichment
Scalar inferences are commonly generated when a speaker uses a weaker expression rather than a stronger alternative, e.g., John ate some of the apples implies that he did not eat them all. This article describes a visual-world study investigating how and when perceivers compute these inferences. Participants followed spoken instructions containing the scalar quantifier some directing them to interact with one of several referential targets (e.g., Click on the girl who has some of the balloons). Participants fixated on the target compatible with the implicated meaning of some and avoided a competitor compatible with the literal meaning prior to a disambiguating noun. Further, convergence on the target was as fast for some as for the non-scalar quantifiers none and all. These findings indicate that the scalar inference is computed immediately and is not delayed relative to the literal interpretation of some. It is argued that previous demonstrations that scalar inferences increase processing time are not necessarily due to delays in generating the inference itself, but rather arise because integrating the interpretation of the inference with relevant information in the context may require additional time. With sufficient contextual support, processing delays disappear.
Raising argument strength using negative evidence: A constraint on models of induction
Both intuitively, and according to similarity-based theories of induction, relevant evidence raises argument strength when it is positive and lowers it when it is negative. In three experiments, we tested the hypothesis that argument strength can actually increase when negative evidence is introduced. Two kinds of argument were compared through forced choice or sequential evaluation: single positive arguments (e.g., “Shostakovich’s music causes alpha waves in the brain; therefore, Bach’s music causes alpha waves in the brain”) and double mixed arguments (e.g., “Shostakovich’s music causes alpha waves in the brain, X’s music DOES NOT; therefore, Bach’s music causes alpha waves in the brain”). Negative evidence in the second premise lowered credence when it applied to an item X from the same subcategory (e.g., Haydn) and raised it when it applied to a different subcategory (e.g., AC/DC). The results constitute a new constraint on models of induction.
Currencies and Prices in 3rd and 4th Century Palestine and Their Implications for Roman Economic History.
The following study is an attempt to throw further light on Roman economic history of the III and IV cents. by drawing upon the Palestinian source-material of the period. Clearly, it is no more than a beginning in this direction, and makes no claims to being exhaustive either in the collection of material or in the analysis thereof. In the first section lists of Palestinian prices of different commodities are set out in chronological order, and compared with their Egyptian parallels. Babylonian material (analysed in Appendix C) is likewise presented. There follows a discussion of the monetary terminology of the period, in which certain semantic changes are noted and inferences of economic significance drawn. With the clarification of these terms, some observations are made on the patterns of III cent. monetary developments, and the nature of its price-levels. A series of legal texts are next analysed and it is shown that they reflect the change from a silver to a gold standard, via a transitional period of economic instability and confusion. Thereafter follows an analysis of IV cent. Palestinian price-levels, and these are compared with the Egyptian evidence. It is suggested that internal discrepancies and apparent differences are to be explained on terminological grounds. In the final section, certain questions are raised concerning the chronological pattern of the III cent. economic developments, and some pointers to the answers are hazarded. To end, a very brief and concentrated description of the social conditions of the times (viewed partially as implications of the economic development) is given, primarily to indicate the possible range of the sources, their ability to illumine dark periods, and the embryonic state of these studies.
The metabolome regulates the epigenetic landscape during naive-to-primed human embryonic stem cell transition.
For nearly a century, developmental biologists have recognized that cells from embryos can differ in their potential to differentiate into distinct cell types. Recently, it has been recognized that embryonic stem cells derived from both mice and humans exhibit two stable yet epigenetically distinct states of pluripotency: naive and primed. We now show that nicotinamide N-methyltransferase (NNMT) and the metabolic state regulate pluripotency in human embryonic stem cells (hESCs). Specifically, in naive hESCs, NNMT and its enzymatic product 1-methylnicotinamide are highly upregulated, and NNMT is required for low S-adenosyl methionine (SAM) levels and the H3K27me3 repressive state. NNMT consumes SAM in naive cells, making it unavailable for histone methylation that represses Wnt and activates the HIF pathway in primed hESCs. These data support the hypothesis that the metabolome regulates the epigenetic landscape of the earliest steps in human development.
Stroke lesion size: still a useful biomarker for stroke severity and outcome in times of high-dimensional models
BACKGROUND
The volumetric size of a brain lesion is a frequently used stroke biomarker. It stands out among most imaging biomarkers for being a one-dimensional variable that can be used in simple statistical models. In the era of machine learning algorithms, the question arises whether such a simple variable is still useful, or whether high-dimensional models built on spatial lesion information are superior.
METHODS
We included 753 first-ever anterior circulation ischemic stroke patients (age 68.4±15.2 years; NIHSS at 24 h 4.4±5.1; modified Rankin Scale (mRS) at 3 months, median [IQR] 1 [0.75; 3]) and traced lesions on diffusion-weighted MRI. In an out-of-sample model validation scheme, we predicted stroke severity as measured by the NIHSS at 24 h and functional stroke outcome as measured by the mRS at 3 months, either from spatial lesion features or from lesion size.
RESULTS
For stroke severity, the best regression model based on lesion size performed significantly above chance (p < 0.0001) with R2 = 0.322, but models with spatial lesion features performed significantly better with R2 = 0.363 (t(752) = 2.889; p = 0.004). For stroke outcome, the best classification model based on lesion size again performed significantly above chance (p < 0.0001) with an accuracy of 62.8%, which was not different from the best model with spatial lesion features (62.6%, p = 0.80). With smaller training data sets of only 150 or 50 patients, the performance of high-dimensional models with spatial lesion features decreased up to the point of being equivalent or even inferior to models trained on lesion size. The combination of lesion size and spatial lesion features in one model did not improve predictions.
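To illustrate the kind of comparison reported here, the following is a minimal sketch, in Python, of an out-of-sample evaluation of a one-dimensional lesion-size predictor against high-dimensional spatial lesion features. It is not the authors' pipeline: the data are synthetic, and the model choices (ridge regression, five-fold cross-validation) are assumptions for illustration.

```python
# Minimal sketch (not the study's pipeline): out-of-sample comparison of a
# one-dimensional lesion-size model vs. a high-dimensional spatial-feature
# model for predicting stroke severity. All data below are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_voxels = 753, 2000            # sizes loosely mirroring the study

# Synthetic stand-ins: binary voxel-wise lesion maps and a severity score
# that depends on both overall lesion size and lesion location.
lesion_maps = (rng.random((n_patients, n_voxels)) < 0.05).astype(float)
lesion_size = lesion_maps.sum(axis=1, keepdims=True)   # one-dimensional biomarker
spatial_weights = rng.normal(size=n_voxels)
nihss = (0.05 * lesion_size.ravel()
         + 0.02 * (lesion_maps @ spatial_weights)
         + rng.normal(size=n_patients))

# Out-of-sample R^2 via cross-validation for both feature sets.
r2_size = cross_val_score(Ridge(alpha=1.0), lesion_size, nihss,
                          cv=5, scoring="r2").mean()
r2_spatial = cross_val_score(Ridge(alpha=10.0), lesion_maps, nihss,
                             cv=5, scoring="r2").mean()
print(f"lesion size R^2: {r2_size:.3f}   spatial features R^2: {r2_spatial:.3f}")
```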
CONCLUSIONS
Lesion size is a decent biomarker for stroke outcome and severity that is slightly inferior to spatial lesion features but particularly suited to studies with small samples. When low-dimensional models are desired, lesion size provides a viable proxy for spatial lesion features, whereas high-precision prediction models in personalised prognostic medicine should operate on high-dimensional spatial imaging features in large samples.
Comparative effectiveness of less commonly used systemic monotherapies and common combination therapies for moderate to severe psoriasis in the clinical setting.
BACKGROUND: The effectiveness of psoriasis therapies in real-world settings remains relatively unknown.
OBJECTIVE: We sought to compare the effectiveness of less commonly used systemic therapies and commonly used combination therapies for psoriasis.
METHODS: This was a multicenter cross-sectional study of 203 patients with plaque psoriasis receiving less common systemic monotherapy (acitretin, cyclosporine, or infliximab) or common combination therapies (adalimumab, etanercept, or infliximab and methotrexate) compared with 168 patients receiving methotrexate evaluated at 1 of 10 US outpatient dermatology sites participating in the Dermatology Clinical Effectiveness Research Network.
RESULTS: In adjusted analyses, patients on acitretin (relative response rate 2.01; 95% confidence interval [CI] 1.18-3.41), infliximab (relative response rate 1.93; 95% CI 1.26-2.98), adalimumab and methotrexate (relative response rate 3.04; 95% CI 2.12-4.36), etanercept and methotrexate (relative response rate 2.22; 95% CI 1.25-3.94), and infliximab and methotrexate (relative response rate 1.72; 95% CI 1.10-2.70) were more likely to have clear or almost clear skin compared with patients on methotrexate. There were no differences among treatments when response rate was defined by health-related quality of life.
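As a reading aid for the interval estimates above, here is a minimal sketch of how an unadjusted relative response rate and a Wald-type 95% CI on the log scale can be computed. The counts are invented and the published estimates were covariate-adjusted, so this is illustrative only.

```python
# Illustrative only: unadjusted relative response rate with a Wald-type 95% CI
# on the log scale. Counts below are invented, not the study's data.
import math

def relative_response_rate(resp_a, n_a, resp_b, n_b):
    """Ratio of response proportions in group A vs. group B, with 95% CI."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR) for two independent binomial proportions.
    se = math.sqrt((1 - p_a) / resp_a + (1 - p_b) / resp_b)
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts: 30/60 clear or almost clear on one therapy vs. 40/160
# on the methotrexate comparator.
rr, (lo, hi) = relative_response_rate(30, 60, 40, 160)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```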
LIMITATIONS: Single time point assessment may result in overestimation of effectiveness.
CONCLUSIONS: The efficacy of therapies in clinical trials may overestimate their effectiveness as used in clinical practice. Although physician-reported relative response rates were different among therapies, absolute differences were small and did not correspond to differences in patient-reported outcomes.
Automated System Identification for Satellite Attitude Control
A novel approach to on-orbit system identification of satellite attitude control dynamics is presented. The approach is fully automated and will thus enable a variety of satellite applications, including high-performance proliferated constellations and modular payloads. The key enabling feature of the approach is the ability to estimate the uncertainty in the model and then perform additional data collections specifically to reduce that uncertainty. A prototype software implementation of the algorithm accurately estimated multiple structural modes in a CubeSat simulation and on a CubeSat reaction wheel testbed, in preparation for an on-orbit demonstration as part of The Aerospace Corporation's Slingshot 1 mission.
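The abstract does not spell out the identification algorithm, but one core ingredient of this kind of system identification is fitting a lightly damped structural mode to response data and quantifying the uncertainty of the fitted parameters. The following is a minimal sketch under that assumption; the mode parameters, noise level, and AR(2) formulation are illustrative choices, not the paper's method.

```python
# Sketch of one ingredient of automated system identification (not the
# article's algorithm): least-squares fit of a lightly damped structural
# mode from simulated response data, with a covariance-based uncertainty
# estimate of the kind that could drive further data collection.
import numpy as np

rng = np.random.default_rng(1)
dt, f_true, zeta_true = 0.01, 2.0, 0.02        # assumed mode: 2 Hz, 2% damping
wn = 2 * np.pi * f_true
t = np.arange(0, 20, dt)
y = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)
y += 0.01 * rng.normal(size=t.size)            # measurement noise

# Fit a second-order autoregression y[k] = a1*y[k-1] + a2*y[k-2].
X = np.column_stack([y[1:-1], y[:-2]])
b = y[2:]
coef, res, *_ = np.linalg.lstsq(X, b, rcond=None)
a1, a2 = coef

# Recover frequency/damping from the discrete poles; estimate parameter
# covariance as sigma^2 * (X^T X)^-1 to quantify model uncertainty.
poles = np.roots([1.0, -a1, -a2])
s = np.log(poles[0]) / dt                      # continuous-time pole
f_hat = abs(s) / (2 * np.pi)
zeta_hat = -s.real / abs(s)
sigma2 = res[0] / (len(b) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
print(f"f = {f_hat:.3f} Hz, zeta = {zeta_hat:.4f}, var(a1) = {cov[0, 0]:.2e}")
```

Collecting more data where this covariance is largest is one simple way to realize the "collect specifically to reduce uncertainty" idea described above.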
Optimal Relevance in Imperfect Information Games
To help incorporate natural language into economic theory, this paper does two things. First, the paper extends to imperfect information games an equilibrium concept developed for incomplete information games, so natural language can be formalized as a vehicle to convey information about actions as well as types. This equilibrium concept is specific to language games, because information is conveyed by the sender through the message's literal meaning. Second, the paper proposes an equilibrium refinement which selects the sender's most preferred equilibrium. The refinement captures the notion that the speaker seeks to improve its status quo, aiming at optimal relevance. Explicit coordination through verbal communication parallels the idea of implicit coordination through focal points.
En4U - Development Pathways of a Decentralised Energy System Shaped by the Interacting Decisions of Private and Commercial Energy Actors under Uncertainty
The En4U project aims to understand and quantify uncertainties in the German energy system by coupling several models. A stochastic optimisation model (ESOM) determines the power plant portfolio for a scenario year, while an agent-based simulation (ABMS) computes operational dispatch and the electricity price. Photovoltaics with storage (PVS), electric vehicles, and heat pumps are abstracted as optimised household-level micro-models via machine learning, which makes it possible to project household investment decisions up to 2045.
Five central results emerge:
1. Machine learning: algorithms such as LSTMs predict the aggregated electricity consumption of photovoltaics, heat pumps, and electric vehicles relatively accurately, with a mean error of around 680 euros per year (a minimal sketch of such a forecaster follows this list).
2. Model coupling: combining the ESOM and the ABMS (AMIRIS) yields robust results despite high uncertainties. Many scenarios show a reduction in oil-fired capacity and an increase in wind (81-87%) and natural gas capacities (15-18%). The total capacity of the energy portfolio reaches 293-295 GW in year 10.
3. Diffusion model: the diffusion model projects the uptake of PVS, EVs, and heat pumps up to 2045. By 2045, a cumulative PV capacity of 105.3 GW and a storage capacity of 70.6 GWh are expected.
4. Extending AMIRIS: external ML models can be integrated into AMIRIS as agents in order to additionally represent flexibilities such as heat pumps, electric vehicles, and PVS.
5. Response to electricity price signals: models for electric vehicles, heat pumps, and PV systems react to electricity price signals. EV charging patterns depend on household type, and under real-time pricing (RTP) heat pumps can reduce the residual load by 3 to 5 GW by 2040, with electricity cost savings of 6% to 27% that are further increased by PV self-consumption.
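To make result 1 concrete, here is a minimal sketch of an LSTM one-step-ahead forecaster for an aggregated load series, written in PyTorch. The synthetic signal, window length, and hyperparameters are assumptions for illustration, not the project's configuration.

```python
# Minimal sketch, not the project's model: an LSTM that forecasts an
# aggregated household load series one step ahead. Data and hyperparameters
# are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic daily-cycle "aggregated load" signal, cut into sliding windows.
t = torch.arange(0, 2000, dtype=torch.float32)
load = torch.sin(2 * torch.pi * t / 96) + 0.1 * torch.randn(t.size(0))
window = 48
X = torch.stack([load[i:i + window] for i in range(len(load) - window)])
y = load[window:]

class LoadForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, window)
        out, _ = self.lstm(x.unsqueeze(-1))        # add a feature dimension
        return self.head(out[:, -1]).squeeze(-1)   # last hidden state -> scalar

model = LoadForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):                            # short full-batch loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```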
