Influence of mollusk species on marine ΔR determinations
Radiocarbon ages were measured on replicate samples of burnt grain and 5 mollusk species collected from a
single sealed layer at an archaeological site (Hornish Point) on the west coast of South Uist, Scotland. The aim was to examine
the impact of using different mollusk species on ΔR determinations that are calculated using the paired terrestrial/marine sample
approach. The mollusk species examined inhabit a range of environments and utilize a variety of food sources within the
intertidal zone. Several authors have suggested that these factors may be responsible for observed variations in the <sup>14</sup>C activity
of mollusk shells that were contemporaneous at a single location. This study found no significant variation in the <sup>14</sup>C ages of
the mollusk species and, consequently, no significant variation in the calculated values of ΔR. The implication is that, in an area
where there are no carboniferous rocks or significant local inputs of freshwater to the surface ocean, any of a range of marine
mollusk species can be used in combination with short-lived terrestrial material from the same secure archaeological context
to accurately determine a ΔR value for a particular geographic location and period in time.
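The paired terrestrial/marine arithmetic can be sketched in a few lines. This is a minimal illustration of the general approach; the function names and all numbers are hypothetical, not the Hornish Point measurements.

```python
# A minimal sketch of the paired terrestrial/marine approach to a local
# reservoir offset (Delta-R). All names and numbers are hypothetical
# illustrations, not the measurements reported in the study.
import math

def delta_r(measured_marine_age, marine_model_age):
    """Delta-R: the measured marine 14C age minus the age the global marine
    model curve predicts for the same calendar date, which is fixed by the
    paired short-lived terrestrial sample from the same sealed context."""
    return measured_marine_age - marine_model_age

def delta_r_error(sigma_measured, sigma_model):
    """Uncertainties on a difference add in quadrature."""
    return math.sqrt(sigma_measured**2 + sigma_model**2)

# Hypothetical layer: shells measure 2350 +/- 35 BP, while the marine model
# curve predicts 2400 +/- 30 BP at the calendar date given by the burnt grain.
print(delta_r(2350, 2400))           # -50 (local surface water "younger" than the model)
print(round(delta_r_error(35, 30)))  # 46
```

A negative ΔR, as in this made-up example, would indicate local surface water less depleted in <sup>14</sup>C than the global marine model assumes.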
<sup>14</sup>C AMS at SUERC: improving QA data from the 5 MV tandem AMS and 250 kV SSAMS
In 2003, a National Electrostatics Corporation (NEC) 5 MV tandem accelerator mass spectrometer was installed at SUERC, providing the radiocarbon laboratory with <sup>14</sup>C measurements at 4–5‰ repeatability. In 2007, a 250 kV single-stage accelerator mass spectrometer (SSAMS) was added to provide additional <sup>14</sup>C capability and is now the preferred system for <sup>14</sup>C analysis. Changes to the technology and to our operations are evident in our copious quality assurance data: typically, we now use the 134-position MC-SNICS source, which is filled to capacity. Measurement of standards shows that running the spectrometer without the complication of on-line δ<sup>13</sup>C evaluation is a good operational compromise. Currently, 3‰ <sup>14</sup>C/<sup>13</sup>C measurements are routinely achieved for samples up to nearly 3 half-lives old through consistent sample preparation and an automated data acquisition algorithm with random access to samples for measurement repeats. Background and known-age standard data are presented for the period 2003–2008 for the 5 MV system and 2007–2008 for the SSAMS, to demonstrate the improvements in data quality.
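The quoted precisions translate directly into age uncertainties under the standard radiocarbon conventions. A short sketch, assuming the conventional Libby mean life of 8033 yr (the function names are illustrative, not SUERC software):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon mean life

def conventional_age(f14c):
    """Conventional radiocarbon age (yr BP) from fraction modern, F14C."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def age_sigma(relative_precision):
    """1-sigma age uncertainty implied by a relative precision on the
    isotope ratio, since dt = LIBBY_MEAN_LIFE * dF/F."""
    return LIBBY_MEAN_LIFE * relative_precision

# A 3 per-mil ratio measurement corresponds to roughly 24 yr on the age scale:
print(round(age_sigma(0.003)))         # 24

# "Nearly 3 half-lives old" means F14C near 1/8:
print(round(conventional_age(0.125)))  # 16704
```

So 3‰ repeatability is roughly a ±24 yr contribution to the age error, independent of the sample's age, which is why it remains useful out to samples several half-lives old.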
Targeting lentiviral vectors to antigen-specific immunoglobulins
Gene transfer into B cells by lentivectors can provide an alternative approach to managing B lymphocyte malignancies and autoreactive B cell-mediated autoimmune diseases. These pathogenic B cell populations can be distinguished by their surface expression of monospecific immunoglobulin. Development of a novel vector system to deliver genes to these specific B cells could improve the safety and efficacy of gene therapy. We have developed an efficient method to target lentivectors to monospecific immunoglobulin-expressing cells in vitro and in vivo. We were able to incorporate a model antigen, CD20, and a fusogenic protein derived from the Sindbis virus as two distinct molecules into the lentiviral surface. This engineered vector could specifically bind to cells expressing surface immunoglobulin recognizing CD20 (αCD20), resulting in efficient transduction of target cells in a cognate antigen-dependent manner in vitro, and in vivo in a xenografted tumor model. Tumor suppression was observed in vivo when the engineered lentivector was used to deliver a suicide gene to a xenografted tumor expressing αCD20. These results show the feasibility of engineering lentivectors to target immunoglobulin-specific cells to deliver a therapeutic effect. Such targeting lentivectors could also potentially be used to genetically mark antigen-specific B cells in vivo to study their B cell biology.
A web of stakeholders and strategies: A case of broadband diffusion in South Korea
When a new technology is launched, its diffusion becomes an issue of importance. Various stakeholders influence diffusion; the question that remains is to identify these stakeholders and their roles. This paper outlines how the strategies pursued by a government acting as the key stakeholder affected the diffusion of a new technology. The analysis is based on a theoretical framework derived from innovation diffusion and stakeholder theories. The empirical evidence comes from a study of broadband development in South Korea. A web of stakeholders and strategies is drawn in order to identify the major stakeholders involved and highlight their relations. The case of South Korea offers implications for other countries that are pursuing broadband diffusion strategies.
Review of progress in Fast Ignition
Copyright 2005 American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics. The following article appeared in Physics of Plasmas, 12(5), 057305, 2005 and may be found at http://dx.doi.org/10.1063/1.187124
Encoded Recoupling and Decoupling: An Alternative to Quantum Error Correcting Codes, Applied to Trapped Ion Quantum Computation
A recently developed theory for eliminating decoherence and design constraints in quantum computers, "encoded recoupling and decoupling", is shown to be fully compatible with a promising proposal for an architecture enabling scalable ion-trap quantum computation [D. Kielpinski et al., Nature 417, 709 (2002)]. Logical qubits are encoded into pairs of ions. Logic gates are implemented using the Sorensen-Molmer (SM) scheme applied to pairs of ions at a time. The encoding offers continuous protection against collective dephasing. Decoupling pulses, which are also implemented by applying the SM scheme directly to the encoded qubits, can further reduce various other sources of qubit decoherence, such as differential dephasing and decohered vibrational modes. The feasibility of using the relatively slow SM pulses in a decoupling scheme that quenches the latter source of decoherence follows from the observed 1/f spectrum of the vibrational bath.
Comment: 12 pages, no figures
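The continuous protection against collective dephasing can be seen with a few lines of arithmetic. A minimal sketch, assuming the standard two-ion encoding |0_L> = |01>, |1_L> = |10> (variable names are illustrative):

```python
import cmath

# Logical qubits encoded in pairs of ions: |0_L> = |01>, |1_L> = |10>.
# Collective dephasing rotates both ions identically:
# |0> -> exp(-i*phi/2)|0>, |1> -> exp(+i*phi/2)|1>, with the same phi on each ion.
phi = 0.7
p0 = cmath.exp(-1j * phi / 2)
p1 = cmath.exp(1j * phi / 2)

# Each encoded basis state picks up the product of its two ions' phases:
phase_zero_L = p0 * p1  # |0_L> = |01>
phase_one_L = p1 * p0   # |1_L> = |10>

# Both logical states acquire the same phase, so any encoded superposition
# is untouched by collective dephasing; only differential dephasing
# (different phi on the two ions) acts on the code space.
print(abs(phase_zero_L - phase_one_L) < 1e-12)  # True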
Attentive Learning of Sequential Handwriting Movements: A Neural Network Model
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309); National Science Foundation (IRI-97-20333); National Institutes of Health (I-R29-DC02952-01)
HI in the Outskirts of Nearby Galaxies
The HI in disk galaxies frequently extends beyond the optical image, and can
trace the dark matter there. I briefly highlight the history of high spatial
resolution HI imaging, the contribution it made to the dark matter problem, and
the current tension between several dynamical methods to break the disk-halo
degeneracy. I then turn to the flaring problem, which could in principle probe
the shape of the dark halo. Instead, however, a lot of attention is now devoted
to understanding the role of gas accretion via galactic fountains. The current
cold dark matter theory has problems on galactic scales, such as
the core-cusp problem, which can be addressed with HI observations of dwarf
galaxies. For a similar range in rotation velocities, galaxies of type Sd have
thin disks, while those of type Im are much thicker. After a few comments on
modified Newtonian dynamics and on irregular galaxies, I close with statistics
on the HI extent of galaxies.
Comment: 38 pages, 17 figures, invited review, book chapter in "Outskirts of Galaxies", Eds. J. H. Knapen, J. C. Lee and A. Gil de Paz, Astrophysics and Space Science Library, Springer, in press
On Quantum Control via Encoded Dynamical Decoupling
I revisit the ideas underlying dynamical decoupling methods within the
framework of quantum information processing, and examine their potential for
direct implementations in terms of encoded rather than physical degrees of
freedom. The usefulness of encoded decoupling schemes as a tool for engineering
both closed- and open-system encoded evolutions is investigated based on simple
examples.
Comment: 12 pages, no figures; REVTeX style. This note collects various theoretical considerations complementing/motivated by the experimental demonstration of encoded control by Fortunato et al.
The Milky Way Bulge: Observed properties and a comparison to external galaxies
The Milky Way bulge offers a unique opportunity to investigate in detail the
role that different processes such as dynamical instabilities, hierarchical
merging, and dissipational collapse may have played in the formation and
evolution history of the Galaxy, based on its resolved stellar population
properties. Large observation programmes and surveys of the bulge are providing
for the first time a look into the global view of the Milky Way bulge that can
be compared with the bulges of other galaxies, and be used as a template for
detailed comparison with models. The Milky Way has been shown to have a
box/peanut (B/P) bulge and recent evidence seems to suggest the presence of an
additional spheroidal component. In this review we summarise the global
chemical abundances, kinematics and structural properties that allow us to
disentangle these multiple components and provide constraints to understand
their origin. The investigation of both detailed and global properties of the
bulge now provide us with the opportunity to characterise the bulge as observed
in models, and to place the mixed component bulge scenario in the general
context of external galaxies. When writing this review, we considered the
perspectives of researchers working with the Milky Way and researchers working
with external galaxies. It is an attempt to approach both communities for a
fruitful exchange of ideas.
Comment: Review article to appear in "Galactic Bulges", Editors: Laurikainen E., Peletier R., Gadotti D., Springer Publishing. 36 pages, 10 figures
