Early signaling defects in human T cells anergized by T cell presentation of autoantigen.
Major histocompatibility complex class II-positive human T cell clones are nontraditional antigen-presenting cells (APCs) that are able to simultaneously present and respond to peptide or degraded antigen, but are unable to process intact protein. Although T cell presentation of peptide antigen resulted in a primary proliferative response, T cells that had previously been stimulated by T cells presenting antigen were completely unresponsive to antigen, but not to interleukin 2 (IL-2). In contrast, peptide antigen presented by B cells or DR2+ L cell transfectants resulted in T cell activation and responsiveness to restimulation. The anergy induced by T cell presentation of peptide could not be prevented by the addition of either autologous or allogeneic B cells or B7+ DR2+ L cell transfectants, suggesting that the induction of anergy can occur even in the presence of costimulation. T cell anergy was induced within 24 h of T cell presentation of antigen and was long lasting. Anergized T cells expressed normal levels of T cell receptor/CD3 but were defective in their ability to mobilize [Ca2+]i in response to both anti-CD3 and APCs. Moreover, anergized T cells did not proliferate to anti-CD2 monoclonal antibodies or anti-CD3 plus phorbol myristate acetate (PMA), nor did they synthesize IL-2, IL-4, or interferon gamma mRNA in response to either peptide or peptide plus PMA. In contrast, ionomycin plus PMA induced both normal proliferative responses and synthesis of cytokine mRNA, suggesting that the signaling defect in anergized cells occurs upstream of protein kinase C activation and [Ca2+]i mobilization.
Complex Independent Component Analysis of Frequency-Domain Electroencephalographic Data
Independent component analysis (ICA) has proven useful for modeling brain and
electroencephalographic (EEG) data. Here, we present a new, generalized method
to better capture the dynamics of brain signals than previous ICA algorithms.
We regard EEG sources as eliciting spatio-temporal activity patterns,
corresponding to, e.g., trajectories of activation propagating across cortex.
This leads to a model of convolutive signal superposition, in contrast with the
commonly used instantaneous mixing model. In the frequency-domain, convolutive
mixing is equivalent to multiplicative mixing of complex signal sources within
distinct spectral bands. We decompose the recorded spectral-domain signals into
independent components by a complex infomax ICA algorithm. First results from a
visual attention EEG experiment exhibit (1) sources of spatio-temporal dynamics
in the data, (2) links to subject behavior, (3) sources with a limited spectral
extent, and (4) a higher degree of independence compared to sources derived by
standard ICA.
Comment: 21 pages, 11 figures. Added final journal reference, fixed minor typo.
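The central modeling claim, that time-domain convolutive mixing becomes bin-wise multiplicative mixing of complex spectra, can be checked numerically. A minimal sketch follows (illustrative names and sizes, not the authors' code); in the actual method the complex spectra come from a short-time Fourier transform of the EEG channels, and a complex infomax ICA is then run within each frequency bin.

```python
# Numerical check of the mixing model (illustrative, not the paper's code):
# the DFT of a linear convolutive mixture factorizes, bin by bin, into a
# complex mixing matrix applied to the source spectra.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_ch, n_tap, n_t = 2, 3, 8, 4096

S = rng.standard_normal((n_src, n_t))          # time-domain sources
A = rng.standard_normal((n_ch, n_src, n_tap))  # FIR mixing filters

# Time domain: x_c(t) = sum_s (a_cs * s_s)(t), linear convolution
n_out = n_t + n_tap - 1
X = np.zeros((n_ch, n_out))
for c in range(n_ch):
    for s in range(n_src):
        X[c] += np.convolve(S[s], A[c, s])

# Frequency domain: X(f) = A(f) @ S(f) independently in every bin f
S_f = np.fft.rfft(S, n_out, axis=1)            # (n_src, n_bins)
A_f = np.fft.rfft(A, n_out, axis=2)            # (n_ch, n_src, n_bins)
X_f = np.einsum("csf,sf->cf", A_f, S_f)        # per-bin complex mixing

assert np.allclose(X_f, np.fft.rfft(X, n_out, axis=1))
```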
On the origin of the stellar halo and multiple stellar populations in the globular cluster NGC 1851
We propose that the observed stellar halo around the globular cluster (GC)
NGC 1851 is evidence for its formation in the central region of its defunct
host dwarf galaxy. We numerically investigate the long-term dynamical evolution
of a nucleated dwarf galaxy embedded in a massive dark matter halo under the
strong tidal field of the Galaxy. The dwarf galaxy is assumed to have a stellar
nucleus (or a nuclear star cluster) that could be the progenitor for NGC 1851.
We find that although the dark matter halo and the stellar envelope of the host
dwarf of NGC 1851 can be almost completely stripped during its orbital
evolution around the Galaxy, a minor fraction of stars in the dwarf can remain
trapped by the gravitational field of the nucleus. The stripped nucleus can be
observed as NGC 1851, with little or no dark matter, whereas stars around the
nucleus can be observed as a diffuse stellar halo around NGC 1851. The
simulated stellar halo has a symmetric distribution with a power-law density
slope of ~ -2 and shows no tidal tails within ~200 pc of NGC 1851. We show
that two GCs can merge with each other to form a new nuclear GC embedded in
field stars owing to the low stellar velocity dispersion of the host dwarf.
This result requires no assumptions about the ages or chemical abundances of
the two merging GCs. Thus the observed stellar halo and characteristic
multiple stellar populations suggest that NGC 1851 could have formed
initially in the central region of an ancient dwarf galaxy.
Comment: 15 pages, 12 figures, accepted in MNRAS.
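A power-law slope such as the quoted ~ -2 is typically measured by binning the simulated star particles in radius and fitting log density against log radius. A minimal sketch, assuming a hypothetical (N, 3) array pos of particle positions in parsecs; all names are illustrative:

```python
# Hedged sketch: power-law slope of a radial density profile from an N-body
# snapshot. `pos` and the bin limits are illustrative assumptions.
import numpy as np

def density_slope(pos, r_min=10.0, r_max=200.0, n_bins=20):
    r = np.linalg.norm(pos, axis=1)                     # particle radii
    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho = counts / shells                               # number density
    mid = np.sqrt(edges[1:] * edges[:-1])               # geometric centres
    ok = counts > 0
    slope, _ = np.polyfit(np.log10(mid[ok]), np.log10(rho[ok]), 1)
    return slope                                        # ~ -2 for this halo
```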
On Computing Upper Limits to Source Intensities
A common problem in astrophysics is determining how bright a source could be
and still not be detected. Despite the simplicity with which the problem can be
stated, the solution involves complex statistical issues that require careful
analysis. Unlike the confidence bound, the upper limit has never been
formally analyzed, which has led to a great variety of often ad hoc solutions. Here
we formulate and describe the problem in a self-consistent manner. Detection
significance is usually defined by the acceptable proportion of false positives
(the Type I error), and we invoke the complementary concept of false negatives
(the Type II error), based on the statistical power of a test, to compute an
upper limit to the detectable source intensity. To determine the minimum
intensity that a source must have for it to be detected, we first define a
detection threshold, and then compute the probabilities of detecting sources of
various intensities at the given threshold. The intensity that corresponds to
the specified Type II error probability defines that minimum intensity, and is
identified as the upper limit. Thus, an upper limit is a characteristic of the
detection procedure rather than the strength of any particular source and
should not be confused with confidence intervals or other estimates of source
intensity. This is particularly important given the large number of catalogs
that are being generated from increasingly sensitive surveys. We discuss the
differences between these upper limits and confidence bounds. Both measures are
useful quantities that should be reported in order to extract the most science
from catalogs, though they answer different statistical questions: an upper
bound describes an inference range on the source intensity, while an upper
limit calibrates the detection process. We provide a recipe for computing upper
limits that applies to all detection algorithms.
Comment: 30 pages, 12 figures, accepted in ApJ.
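For the simplest case, a single Poisson counting cell with known background, the recipe reduces to a few lines. The sketch below illustrates the general procedure of the abstract rather than the paper's own code; the background and error rates are made-up values.

```python
# Hedged sketch of the upper-limit recipe for Poisson counts with known
# background b: fix a detection threshold from the Type I error, then find
# the intensity detected with probability 1 - beta (the Type II error).
import numpy as np
from scipy.stats import poisson

def detection_threshold(b, alpha):
    """Smallest count n* with P(N >= n* | background b) <= alpha."""
    return int(poisson.ppf(1.0 - alpha, b)) + 1

def upper_limit(b, alpha, beta, s_grid=np.linspace(0.0, 50.0, 5001)):
    """Smallest intensity s detected with probability >= 1 - beta."""
    n_star = detection_threshold(b, alpha)
    for s in s_grid:
        if poisson.sf(n_star - 1, b + s) >= 1.0 - beta:  # detection power
            return s, n_star
    return None, n_star

s_ul, n_star = upper_limit(b=3.0, alpha=0.01, beta=0.5)
print(f"threshold n* = {n_star}, upper limit ~ {s_ul:.2f} counts")
```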
M22: A [Fe/H] Abundance Range Revealed
Intermediate resolution spectra at the Ca II triplet have been obtained for
55 candidate red giants in the field of the globular cluster M22 with the
VLT/FORS instrument. Spectra were also obtained for a number of red giants in
standard globular clusters to provide a calibration of the observed line
strengths with overall abundance [Fe/H]. For the 41 M22 member stars that lie
within the V-V_HB bounds of the calibration, we find an abundance distribution
that is substantially broader than that expected from the observed errors
alone. We argue that this broad distribution cannot be the result of
differential reddening. Instead we conclude that, as has long been suspected,
M22 is similar to omega Cen in having an intrinsic dispersion in heavy element
abundance. The observed M22 abundance distribution rises sharply to a peak at
[Fe/H] = -1.9 with a broad tail to higher abundances: the highest abundance
star in our sample has [Fe/H] = -1.45 dex. If the unusual properties of omega
Cen have their origin in a scenario in which the cluster is the remnant nucleus
of a disrupted dwarf galaxy, then such a scenario likely applies also to M22.Comment: 29 pages, 9 figures, accepted for publication in the Astrophysical
Journa
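The claim that the abundance spread exceeds the measurement errors is, at heart, a dispersion test. A minimal sketch of one standard form, a chi-square test about the error-weighted mean; feh and err are hypothetical arrays, not the paper's measurements:

```python
# Hedged sketch: does the scatter in [Fe/H] exceed the quoted errors?
import numpy as np
from scipy.stats import chi2

def dispersion_test(feh, err):
    mean = np.average(feh, weights=1.0 / err**2)    # error-weighted mean
    stat = np.sum(((feh - mean) / err) ** 2)        # chi-square statistic
    p = chi2.sf(stat, len(feh) - 1)
    return stat, p   # small p => intrinsic abundance dispersion is present
```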
Node-node distance distribution for growing networks
We present simulations of the time evolution of the distance matrix, yielding
the node-node distance distribution for various kinds of networks.
For the exponential trees, analytical formulas are derived for the moments of
the distance distribution.
Comment: presented during the 37th Polish Physicists' Meeting, Gdansk, Poland, 15-19 Sep. 2003, 6 pages, 3 figures.
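For a tree grown by uniform attachment (a random recursive or "exponential" tree), the distance matrix can be updated incrementally as the network grows: a new node's distances are its parent's distances plus one. A minimal sketch (illustrative, not the authors' code):

```python
# Hedged sketch: time evolution of the distance matrix for a growing tree.
import numpy as np

def grow_tree_distances(n, rng):
    d = np.zeros((n, n), dtype=int)
    for i in range(1, n):
        parent = rng.integers(i)        # uniform ('exponential') attachment
        d[i, :i] = d[parent, :i] + 1    # new node's distances via its parent
        d[:i, i] = d[i, :i]             # keep the matrix symmetric
    return d

d = grow_tree_distances(2000, np.random.default_rng(1))
iu = np.triu_indices_from(d, k=1)
dist, counts = np.unique(d[iu], return_counts=True)  # distance distribution
```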
Critical animal and media studies: Expanding the understanding of oppression in communication research
Critical and communication studies have traditionally neglected the oppression inflicted by humans on other animals. However, our (mis)treatment of other animals is the result of public consent supported by a morally speciesist-anthropocentric system of values. Speciesism, or anthroparchy, like other mainstream ideologies, feeds the media and is at the same time perpetuated by them. The goal of this article is to remedy this neglect by introducing the subdiscipline of Critical Animal and Media Studies. Critical Animal and Media Studies takes inspiration both from critical animal studies – which is so far the most consolidated critical field of research in the social sciences addressing our exploitation of other animals – and from the normative-moral stance rooted in the cornerstones of traditional critical media studies. The authors argue that the Critical Animal and Media Studies approach is an unavoidable step forward for critical media and communication studies to engage with the expanded circle of concerns of contemporary ethical thinking.
Guidelines for the recording and evaluation of pharmaco-EEG data in man: the International Pharmaco-EEG Society (IPEG)
The International Pharmaco-EEG Society (IPEG) presents updated guidelines summarising the requirements for the recording and computerised evaluation of pharmaco-EEG data in man. Since the publication of the first pharmaco-EEG guidelines in 1982, technical and data processing methods have advanced steadily, thus enhancing data quality and expanding the palette of tools available to investigate the action of drugs on the central nervous system (CNS), determine the pharmacokinetic and pharmacodynamic properties of novel therapeutics and evaluate the CNS penetration or toxicity of compounds. However, a review of the literature reveals inconsistent operating procedures from one study to another. While this fact does not invalidate results per se, the lack of standardisation constitutes a regrettable shortcoming, especially in the context of drug development programmes. Moreover, this shortcoming hampers reliable comparisons between outcomes of studies from different laboratories and hence also prevents pooling of data, which is a requirement for sufficiently powering the validation of novel analytical algorithms and EEG-based biomarkers. The present updated guidelines reflect the consensus of a global panel of EEG experts and are intended to assist investigators using pharmaco-EEG in clinical research, by providing clear and concise recommendations and thereby enabling standardisation of methodology and facilitating comparability of data across laboratories.
Prediction of lethal and synthetically lethal knock-outs in regulatory networks
The complex interactions involved in regulation of a cell's function are
captured by its interaction graph. More often than not, detailed knowledge
about enhancing or suppressive regulatory influences and cooperative effects is
lacking and merely the presence or absence of directed interactions is known.
Here we investigate to what extent such reduced information allows one to
forecast the effect of a knock-out or a combination of knock-outs.
Specifically, we ask how well the lethality of eliminating nodes can be
predicted by their network centrality, such as degree and betweenness,
without knowing the function of the
system. The function is taken as the ability to reproduce a fixed point under a
discrete Boolean dynamics. We investigate two types of stochastically generated
networks: fully random networks and structures grown with a mechanism of node
duplication and subsequent divergence of interactions. On all networks we find
that the out-degree is a good predictor of the lethality of a single node
knock-out. For knock-outs of node pairs, the fraction of successors shared
between the two knocked-out nodes (out-overlap) is a good predictor of
synthetic lethality. Out-degree and out-overlap are locally defined and
computationally simple centrality measures that provide a predictive power
close to the optimal predictor.
Comment: published version, 10 pages, 6 figures, 2 tables; supplement at http://www.bioinf.uni-leipzig.de/publications/supplements/11-01
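Both predictors are locally computable from the interaction graph alone. A minimal sketch follows; note that the exact normalisation of out-overlap is not spelled out in the abstract, so the Jaccard index of successor sets below is an assumption:

```python
# Hedged sketch of the two centrality predictors named above.
import networkx as nx

def out_overlap(g: nx.DiGraph, u, v) -> float:
    """Fraction of successors shared by u and v (Jaccard; an assumption)."""
    su, sv = set(g.successors(u)), set(g.successors(v))
    union = su | sv
    return len(su & sv) / len(union) if union else 0.0

g = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)
print(g.out_degree(0))          # single-knock-out lethality predictor
print(out_overlap(g, 0, 1))     # synthetic-lethality predictor for a pair
```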
Characteristics of Real Futures Trading Networks
Futures trading is the core of the futures business, and it is regarded as a
typical complex system. To investigate the complexity of futures trading, we
employ the analytical methods of complex networks. First, we use
real trading records from the Shanghai Futures Exchange to construct futures
trading networks, in which nodes are trading participants, and two nodes have a
common edge if the two corresponding investors appear simultaneously in at
least one trading record as a purchaser and a seller respectively. Then, we
conduct a comprehensive statistical analysis on the constructed futures trading
networks. Empirical results show that the futures trading networks exhibit
features such as scale-free behavior with interesting odd-even-degree
divergence in low-degree regions, small-world effect, hierarchical
organization, power-law betweenness distribution, disassortative mixing, and
shrinkage of both the average path length and the diameter as network size
increases. To the best of our knowledge, this is the first work that uses real
data to study futures trading networks, and we argue that the research results
can shed light on the nature of the real futures business.
Comment: 18 pages, 9 figures. Final version published in Physica A.
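The network construction described here is direct to reproduce: investors are nodes, and each buyer-seller pair appearing in at least one trade record shares an edge. A minimal sketch with made-up records (the real data are exchange trading records):

```python
# Hedged sketch of the trading-network construction; records are placeholders.
import networkx as nx

records = [("inv1", "inv2"), ("inv1", "inv3"), ("inv2", "inv3"),
           ("inv1", "inv2"), ("inv4", "inv1")]   # (buyer, seller) per trade

g = nx.Graph()
for buyer, seller in records:
    g.add_edge(buyer, seller)   # repeated pairs collapse to a single edge

print(g.number_of_nodes(), g.number_of_edges())
print(nx.average_shortest_path_length(g))   # one small-world diagnostic
```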
