The associations of adipokines with selected markers of the renin-angiotensinogen-aldosterone system: the multi-ethnic study of atherosclerosis.
Among obese individuals, increased sympathetic nervous system (SNS) activity results in increased renin and aldosterone production, as well as renal tubular sodium reabsorption. This study determined the associations between adipokines and selected measures of the renin-angiotensinogen-aldosterone system (RAAS). The sample consisted of 1970 men and women from the Multi-Ethnic Study of Atherosclerosis who were free of clinical cardiovascular disease at baseline and had blood assayed for adiponectin, leptin, plasma renin activity (PRA) and aldosterone. The mean age was 64.7 years and 50% were female. The mean (s.d.) PRA and aldosterone were 1.45 (0.56) ng ml⁻¹ and 150.1 (130.5) pg ml⁻¹, respectively. After multivariable adjustment, a 1-s.d. increment of leptin was associated with a 0.55 ng ml⁻¹ higher PRA and an 8.4 pg ml⁻¹ higher aldosterone (P<0.01 for both). Although adiponectin was not significantly associated with PRA levels, the same increment in this adipokine was associated with lower aldosterone levels (-5.5 pg ml⁻¹, P=0.01). Notably, the associations between aldosterone and both leptin and adiponectin were not materially changed by additional adjustment for PRA. Exclusion of those taking antihypertensive medications modestly attenuated the associations. The associations between leptin and both PRA and aldosterone did not differ by gender but were significantly stronger among non-Hispanic Whites and Chinese Americans than among African and Hispanic Americans (P<0.01). The findings suggest that both adiponectin and leptin may be relevant to blood pressure regulation via the RAAS, in that the associations appear to be robust to antihypertensive medication use and likely differ by ethnicity.
The J-triplet Cooper pairing with magnetic dipolar interactions
Recently, cold atomic Fermi gases with large magnetic dipolar interactions
have been laser cooled down to quantum degeneracy. Unlike electric dipoles,
which are classical vectors, atomic magnetic dipoles are quantum-mechanical
matrix operators proportional to the hyperfine spin of the atoms, and thus
provide rich opportunities to investigate exotic many-body physics.
Furthermore, unlike anisotropic electric dipolar gases, unpolarized magnetic
dipolar systems are isotropic under simultaneous spin-orbit rotation. These
features give rise to a robust mechanism for a novel pairing symmetry: orbital
p-wave (L=1) spin-triplet (S=1) pairing with total angular momentum of the
Cooper pair J=1. This pairing is markedly different from both the ³He-B
phase, in which J=0, and the ³He-A phase, in which J is not conserved. It
is also different from the p-wave pairing in single-component electric
dipolar systems, in which the spin degree of freedom is frozen.
Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays
During sleep and awake rest, the hippocampus replays sequences of place cells
that have been activated during prior experiences. These have been interpreted
as a memory consolidation process, but recent results suggest a possible
interpretation in terms of reinforcement learning. The Dyna reinforcement
learning algorithms use off-line replays to improve learning. Under limited
replay budget, a prioritized sweeping approach, which requires a model of the
transitions to the predecessors, can be used to improve performance. We
investigate whether such algorithms can explain the experimentally observed
replays. We propose a neural network version of prioritized sweeping
Q-learning, for which we developed a growing multiple expert algorithm, able to
cope with multiple predecessors. The resulting architecture is able to improve
the learning of simulated agents confronted with a navigation task. We predict
that, in animals, learning the world model should occur during rest periods,
and that the corresponding replays should be shuffled. Comment: Living Machines 2018 (Paris, France).
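The core idea above, namely replaying the transitions whose expected value change is largest and propagating updates backwards through a model of predecessors, can be illustrated in a minimal tabular form. This is only a sketch of classic prioritized sweeping on a known deterministic model, not the neural growing multiple-expert architecture the paper develops; all names and parameters below are invented for illustration.

```python
import heapq
from collections import defaultdict

def prioritized_sweeping(transitions, rewards, goal, n_actions,
                         gamma=0.95, alpha=1.0, theta=1e-4, budget=100):
    """Tabular prioritized-sweeping Q-learning on a known deterministic model.

    transitions[(s, a)] -> next state; rewards[(s, a)] -> reward.
    A predecessor model (possibly multiple predecessors per state) is built
    from the transition table, and states are replayed in order of the
    magnitude of their expected Q-value change.
    """
    Q = defaultdict(float)
    # Multiple predecessors: every (s, a) pair that leads into state s2.
    predecessors = defaultdict(set)
    for (s, a), s2 in transitions.items():
        predecessors[s2].add((s, a))

    # Seed the queue with the transitions entering the goal state.
    pq = []  # min-heap of (negative priority, state, action)
    for (s, a) in predecessors[goal]:
        heapq.heappush(pq, (-abs(rewards[(s, a)]), s, a))

    for _ in range(budget):
        if not pq:
            break
        _, s, a = heapq.heappop(pq)
        s2 = transitions[(s, a)]
        target = rewards[(s, a)] + gamma * max(
            Q[(s2, b)] for b in range(n_actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        # Re-prioritize all predecessors of s whose update would be large.
        for (sp, ap) in predecessors[s]:
            p = abs(rewards[(sp, ap)] + gamma * max(
                Q[(s, b)] for b in range(n_actions)) - Q[(sp, ap)])
            if p > theta:
                heapq.heappush(pq, (-p, sp, ap))
    return Q
```

Under a limited replay budget, the priority queue makes value information spread backwards from the rewarded state first, which is the property the paper compares against experimentally observed hippocampal replay order.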
Letter processing and font information during reading: beyond distinctiveness, where vision meets design
Letter identification is a critical front end of the
reading process. In general, conceptualizations of the identification process have emphasized arbitrary sets of distinctive features. However, a richer view of letter processing incorporates principles from the field of type design, including an emphasis on uniformities across letters within a font. The importance of uniformities is supported by a small body of research indicating that consistency of font increases letter identification efficiency. We review design concepts and the relevant literature, with the goal of stimulating further thinking about letter processing during reading.
Stochastic accumulation of feature information in perception and memory
It is now well established that the time course of perceptual processing influences the first second or so of performance in a wide variety of cognitive tasks. Over the last 20 years, there has been a shift from modeling the speed at which a display is processed, to modeling the speed at which different features of the display are perceived and formalizing how this perceptual information is used in decision making. The first of these models (Lamberts, 1995) was implemented to fit the time course of performance in a speeded perceptual categorization task and assumed a simple stochastic accumulation of feature information. Subsequently, similar approaches have been used to model performance in a range of cognitive tasks including identification, absolute identification, perceptual matching, recognition, visual search, and word processing, again assuming a simple stochastic accumulation of feature information from both the stimulus and representations held in memory. These models are typically fit to data from signal-to-respond experiments whereby the effects of stimulus exposure duration on performance are examined, but response times (RTs) and RT distributions have also been modeled. In this article, we review this approach and explore the insights it has provided about the interplay between perceptual processing, memory retrieval, and decision making in a variety of tasks. In so doing, we highlight how such approaches can continue to usefully contribute to our understanding of cognition.
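The feature-sampling assumption described above, that each feature of a display becomes available stochastically over time so that accuracy grows with exposure duration, can be sketched as a toy simulation. This is a loose illustration of the general idea only, not an implementation of Lamberts' model; the stimuli, prototypes, and parameters are invented.

```python
import math
import random

def p_feature_sampled(t, rate):
    """Probability that a feature has been perceived by time t, assuming
    feature availability follows an exponential race (a common assumption
    in feature-sampling models)."""
    return 1.0 - math.exp(-rate * t)

def simulate_accuracy(stimulus, prototypes, t, rate, n_trials=2000, seed=0):
    """Classify a binary-feature stimulus using only the features that
    happen to have been sampled by exposure duration t; unsampled features
    carry no evidence. Returns proportion of trials on which the choice
    matches the prototype closest to the stimulus on all features."""
    rng = random.Random(seed)
    truth = min(range(len(prototypes)),
                key=lambda c: sum(f != g for f, g in zip(stimulus, prototypes[c])))
    correct = 0
    for _ in range(n_trials):
        sampled = [i for i in range(len(stimulus))
                   if rng.random() < p_feature_sampled(t, rate)]
        # Score each category by matches on the sampled features only.
        scores = [sum(stimulus[i] == proto[i] for i in sampled)
                  for proto in prototypes]
        best = max(scores)
        choice = rng.choice([c for c, s in enumerate(scores) if s == best])
        correct += (choice == truth)
    return correct / n_trials
```

With short exposures almost no features are available and choices are near chance; with long exposures nearly all features are sampled and accuracy saturates, reproducing the qualitative signal-to-respond pattern the abstract describes.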
Semantic diversity: A measure of contextual variation in word meaning based on latent semantic analysis
Semantic ambiguity is typically measured by summing the number of senses or dictionary definitions that a word has. Such measures are somewhat subjective and may not adequately capture the full extent of variation in word meaning, particularly for polysemous words that can be used in many different ways, with subtle shifts in meaning. Here, we describe an alternative, computationally derived measure of ambiguity based on the proposal that the meanings of words vary continuously as a function of their contexts. On this view, words that appear in a wide range of contexts on diverse topics are more variable in meaning than those that appear in a restricted set of similar contexts. To quantify this variation, we performed latent semantic analysis on a large text corpus to estimate the semantic similarities of different linguistic contexts. From these estimates, we calculated the degree to which the different contexts associated with a given word vary in their meanings. We term this quantity a word's semantic diversity (SemD). We suggest that this approach provides an objective way of quantifying the subtle, context-dependent variations in word meaning that are often present in language. We demonstrate that SemD is correlated with other measures of ambiguity and contextual variability, as well as with frequency and imageability. We also show that SemD is a strong predictor of performance in semantic judgments in healthy individuals and in patients with semantic deficits, accounting for unique variance beyond that of other predictors. SemD values for over 30,000 English words are provided as supplementary materials. © 2012 Psychonomic Society, Inc.
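The pipeline described above can be sketched with a small truncated-SVD computation: reduce a term-by-context count matrix (the LSA step), represent each context by its low-dimensional vector, then score a word by how dissimilar the contexts containing it are to one another. This is an illustrative simplification, not the published SemD procedure; the tiny corpus, the lack of term weighting, and the exact diversity formula here are all invented for the example.

```python
import numpy as np

def semantic_diversity(term_context, word_index, dims=2):
    """Illustrative SemD-style score: higher values mean the word occurs
    in more semantically varied contexts.

    term_context: rows are words, columns are contexts (raw counts).
    """
    X = np.asarray(term_context, dtype=float)
    # LSA step: truncated SVD of the term-by-context matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    ctx = Vt[:dims].T * s[:dims]          # one low-dim vector per context
    # Keep only the contexts in which the word actually occurs.
    idx = np.nonzero(X[word_index] > 0)[0]
    vecs = ctx[idx]
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    # Mean cosine similarity over distinct context pairs.
    n = len(idx)
    mean_sim = (sims.sum() - n) / (n * (n - 1))
    # Low mean similarity across contexts -> high diversity.
    return -np.log(np.clip(mean_sim, 1e-9, None))
```

A word confined to one topic's contexts gets a score near zero, while a word spread across unrelated contexts gets a large score, which is the contrast the measure is designed to capture.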
Constraining primordial non-Gaussianity with future galaxy surveys
We study the constraining power on primordial non-Gaussianity of future
surveys of the large-scale structure of the Universe for both near-term surveys
(such as the Dark Energy Survey - DES) as well as longer term projects such as
Euclid and WFIRST. Specifically we perform a Fisher matrix analysis forecast
for such surveys, using DES-like and Euclid-like configurations as examples,
and take account of any expected photometric and spectroscopic data. We focus
on two-point statistics and we consider three observables: the 3D galaxy power
spectrum in redshift space, the angular galaxy power spectrum, and the
projected weak-lensing shear power spectrum. We study the effects of adding a
few extra parameters to the basic LCDM set. We include the two standard
parameters to model the current value for the dark energy equation of state and
its time derivative, w_0, w_a, and we account for the possibility of primordial
non-Gaussianity of the local, equilateral and orthogonal types, of parameter
fNL and, optionally, of spectral index n_fNL. We present forecasted constraints
on these parameters using the different observational probes. We show that
accounting for models that include primordial non-Gaussianity does not degrade
the constraint on the standard LCDM set nor on the dark-energy equation of
state. By combining the weak lensing data and the information on projected
galaxy clustering, consistently including all two-point functions and their
covariance, we find forecasted marginalised errors sigma (fNL) ~ 3, sigma
(n_fNL) ~ 0.12 from a Euclid-like survey for the local shape of primordial
non-Gaussianity, while the orthogonal and equilateral constraints are weakened
for the galaxy clustering case, due to the weaker scale-dependence of the bias.
In the lensing case, the constraints remain instead similar in all
configurations. Comment: 20 pages, 10 figures. Minor modifications; accepted by MNRAS.
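The forecasting machinery used above, a Gaussian Fisher matrix whose inverse gives the marginalised parameter errors, can be illustrated on a toy observable. This is a generic sketch of the method, not the paper's survey configurations or probes; the power-law model and all numbers are invented.

```python
import numpy as np

def fisher_matrix(model, theta0, k, sigma, eps=1e-6):
    """Gaussian Fisher forecast for an observable measured at points k with
    independent errors sigma: F_ij = sum_k (dO/dtheta_i)(dO/dtheta_j)/sigma^2.
    Derivatives are taken numerically about the fiducial parameters theta0."""
    theta0 = np.asarray(theta0, dtype=float)
    n = len(theta0)
    derivs = []
    for i in range(n):
        h = eps * max(abs(theta0[i]), 1.0)
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += h
        tm[i] -= h
        derivs.append((model(tp, k) - model(tm, k)) / (2 * h))
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = np.sum(derivs[i] * derivs[j] / sigma**2)
    return F

def marginalised_errors(F):
    """1-sigma marginalised errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

def powerlaw(theta, k):
    """Toy two-parameter observable, e.g. an amplitude and a tilt."""
    A, ns = theta
    return A * k ** ns
```

Marginalising (inverting the full matrix before taking the diagonal) always gives errors at least as large as the conditional errors 1/sqrt(F_ii), which is why joint forecasts for fNL, n_fNL, w_0 and w_a must account for parameter degeneracies rather than quoting each parameter in isolation.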
On staying grounded and avoiding Quixotic dead ends
The 15 articles in this special issue on The Representation of Concepts illustrate the rich variety of theoretical positions and supporting research that characterize the area. Although much agreement exists among contributors, much disagreement exists as well, especially about the roles of grounding and abstraction in conceptual processing. I first review theoretical approaches raised in these articles that I believe are Quixotic dead ends, namely, approaches that are principled and inspired but likely to fail. In the process, I review various theories of amodal symbols, their distortions of grounded theories, and fallacies in the evidence used to support them. Incorporating further contributions across articles, I then sketch a theoretical approach that I believe is likely to be successful, which includes grounding, abstraction, flexibility, explaining classic conceptual phenomena, and making contact with real-world situations. This account further proposes that (1) a key element of grounding is neural reuse, (2) abstraction takes the forms of multimodal compression, distilled abstraction, and distributed linguistic representation (but not amodal symbols), and (3) flexible context-dependent representations are a hallmark of conceptual processing.
Combining perturbation theories with halo models for the matter bispectrum
We investigate how unified models should be built to be able to predict the
matter-density bispectrum (and power spectrum) from very large to small scales
and that are at the same time consistent with perturbation theory at low k
and with halo models at high k. We use a Lagrangian framework to decompose
the bispectrum into "3-halo", "2-halo", and "1-halo" contributions, related to
"perturbative" and "non-perturbative" terms. We describe a simple
implementation of this approach and present a detailed comparison with
numerical simulations. We show that the 1-halo and 2-halo contributions contain
counterterms that ensure their decay at low k, as required by physical
constraints, and allow a better match to simulations. Contrary to the power
spectrum, the standard 1-loop perturbation theory can be used for the
perturbative 3-halo contribution because it does not grow too fast at high k.
Moreover, it is much simpler and more accurate than two resummation schemes
investigated in this paper. We obtain a good agreement with numerical
simulations on both large and small scales, but the transition scales are
poorly described by the simplest implementation. This cannot be amended by
simple modifications to the halo parameters, but we show how it can be
corrected for the power spectrum and the bispectrum through a simple
interpolation scheme that is restricted to this intermediate regime. Then, we
reach an accuracy on the order of 10% on mildly and highly nonlinear scales,
while an accuracy on the order of 1% is obtained on larger weakly nonlinear
scales. This also holds for the real-space two-point correlation function. Comment: 25 pages.
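The decomposition described in the abstract can be written schematically as follows (only the structural split is shown; the precise kernels and the exact form of the counterterms are in the paper):

```latex
B(k_1,k_2,k_3) \;=\;
\underbrace{B_{3h}(k_1,k_2,k_3)}_{\text{perturbative: 1-loop PT at low }k}
\;+\;
\underbrace{B_{2h}(k_1,k_2,k_3)}_{\text{non-perturbative, with counterterm}}
\;+\;
\underbrace{B_{1h}(k_1,k_2,k_3)}_{\text{non-perturbative, with counterterm}}
```

The counterterms in the 1-halo and 2-halo pieces enforce the required decay at low k, while the 3-halo piece reduces to 1-loop perturbation theory on large scales, so each term dominates in its own regime.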
