
    Are forwards and backwards digit recall the same? A dual task study of digit recall

    There is some debate surrounding the cognitive resources underlying backwards digit recall. Some researchers consider it to differ from forwards digit recall due to the involvement of executive control, while others suggest that backwards recall involves visuo-spatial resources. Five experiments therefore investigated the role of executive-attentional and visuo-spatial resources in both forwards and backwards digit recall. In the first experiment, participants completed visuo-spatial 0-back and 2-back tasks during the encoding of information to be remembered. The concurrent tasks did not differentially disrupt performance on backwards digit recall relative to forwards digit recall. Experiment 2 shifted the concurrent load to the recall phase instead, and in this case revealed a larger effect of both tasks on backwards recall relative to forwards recall, suggesting that backwards recall may draw on additional resources during the recall phase and that these resources are visuo-spatial in nature. Experiments 3 and 4 then further investigated the role of visual processes in forwards and backwards recall using dynamic visual noise (DVN). In Experiment 3, DVN was presented during encoding of information to be remembered, and had no effect upon performance. However, in Experiment 4 it was presented during the recall phase, and the results provided evidence of a role for visual imagery in backwards digit recall. These results were replicated in Experiment 5, in which the same list length was used for the forwards and backwards recall tasks. The findings are discussed in terms of both theoretical and practical implications.

    Cloning of terminal transferase cDNA by antibody screening

    A cDNA library was prepared from a terminal deoxynucleotidyltransferase-containing thymoma in the phage vector λgt11. By screening plaques with anti-terminal transferase antibody, positive clones were identified, some of which produced β-galactosidase-cDNA fusion proteins identifiable after electrophoretic fractionation by immunoblotting with anti-terminal transferase antibody. The predominant class of cross-hybridizing clones was determined to represent cDNA for terminal transferase by showing that one representative clone hybridized to a 2200-nucleotide mRNA in closely matched enzyme-positive cells but not in enzyme-negative cells, and that the cDNA selected an mRNA that translated to give a protein of the size and antigenic characteristics of terminal transferase. Only a small amount of genomic DNA hybridized to the longest available clone, indicating that the sequence is virtually unique in the mouse genome.

    Systematic Renormalization in Hamiltonian Light-Front Field Theory: The Massive Generalization

    Hamiltonian light-front field theory can be used to solve for hadron states in QCD. To this end, a method has been developed for systematic renormalization of Hamiltonian light-front field theories, with the hope of applying the method to QCD. That method assumed massless particles, so its immediate application to QCD is limited to gluon states or states where quark masses can be neglected. This paper builds on the previous work by including particle masses non-perturbatively, which is necessary for a full treatment of QCD. We show that several subtle new issues are encountered when masses are included non-perturbatively. The method with masses is algebraically and conceptually more difficult; here we focus on how the two methods differ. We demonstrate the method using massive phi^3 theory in 5+1 dimensions, which has important similarities to QCD. Comment: 7 pages, 2 figures. Corrected error in Eq. (11); v3: added extra disclaimer after Eq. (2) and some clarification at end of Sec. 3.3. Final published version.

    Relationship between seismicity and geologic structure in the Southern California region

    Data from 10,126 earthquakes that occurred in the southern California region between 1934 and 1963 have been synthesized in the attempt to understand better their relationship to regional geologic structure, which is here dominated by a system of faults related mainly to the San Andreas system. Most of these faults have been considered “active” from physiographic evidence, but both geologic and short-term seismic criteria for “active” versus “inactive” faults are generally inadequate. Of the large historic earthquakes that have been associated with surficial fault displacements, most and perhaps all were on major throughgoing faults having a previous history of extensive Quaternary displacements. The same relationship holds for most earthquakes down to magnitude 6.0, but smaller shocks are much more randomly spread throughout the region, and most are not clearly associated with any mappable surficial faults. Virtually all areas of high seismicity in this region fall within areas having numerous Quaternary fault scarps, but not all intensely faulted areas have been active during this particular 29-year period. Strain-release maps show high activity in the Salton trough, the Agua Blanca-San Miguel fault region of Baja California, most of the Transverse Ranges, the central Mojave Desert, and the Owens Valley-southern Sierra Nevada region. Areas of low activity include the San Diego region, the western and easternmost Mojave Desert, and the southern San Joaquin Valley. Because these areas also generally lack Quaternary faults, they probably represent truly stable blocks. In contrast, regions of low seismicity during this period that show widespread Quaternary faulting include the San Andreas fault within and north of the Transverse Ranges, the Garlock fault, and several quiescent zones along major faults within otherwise very active regions. We suspect that seismic quiescence in large areas may be temporary and that these areas represent likely candidates for future large earthquakes. Without more adequate geodetic control, however, it is not known whether strain is necessarily accumulating in all of these areas. Even in areas of demonstrated regional shearing, the relative importance of elastic strain accumulation versus fault slippage is unknown, although slippage is clearly not taking place everywhere along major “active” faults of the region. Recurrence curves of earthquake magnitude versus frequency are presented for six tectonically distinct 8500-km^2 areas within the region. They suggest that either an area of this small size or a sample period of only 29 years is insufficient for establishing valid recurrence expectancies; on this basis the San Andreas fault would be the least hazardous zone of the region, because only a few small earthquakes have occurred here during this particular period. Although recurrence expectancies apparently break down for these smaller areas, historic records suggest that the calculated recurrence rate of 52 years for M = 8.0 earthquakes for the entire region may well be valid. Neither a fault map nor the 29-year seismic record provides sufficient information for detailed seismic zoning maps; not only are many other geologic factors important in determining seismic risk, but the strain-release or epicenter map by itself may give a partially reversed picture of future seismic expectancy. Seismic and structural relationships suggest that the fault theory still provides the most satisfactory explanation of earthquakes in this region.
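
    The recurrence curves referred to above are conventionally summarized by the Gutenberg-Richter relation log10 N(M) = a - b*M, where N(M) is the annual number of shocks of magnitude M or greater. The sketch below is not from the paper: the a and b values are illustrative placeholders chosen only so that the computed interval lands near the quoted 52-year figure for M = 8.0, showing how such an interval follows from fitted parameters.

        # Gutenberg-Richter recurrence sketch (illustrative parameters, not the study's fit).
        def annual_rate(a, b, magnitude):
            """Annual number of earthquakes with magnitude >= `magnitude`,
            from log10 N = a - b * magnitude."""
            return 10.0 ** (a - b * magnitude)

        def recurrence_interval_years(a, b, magnitude):
            """Mean recurrence interval in years for events of at least `magnitude`."""
            return 1.0 / annual_rate(a, b, magnitude)

        # Placeholder regional parameters, for illustration only:
        a, b = 5.16, 0.86
        print(round(recurrence_interval_years(a, b, 8.0)))  # -> 52 (years) with these inputs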

    An anomalous extinction law in the Cep OB3b young cluster: Evidence for dust processing during gas dispersal

    We determine the extinction law through Cep OB3b, a young cluster of 3000 stars undergoing gas dispersal. The extinction is measured toward 76 background K giants identified with MMT/Hectospec spectra. Color excess ratios were determined toward each of the giants using V and R photometry from the literature, g, r, i, and z photometry from the Sloan Digital Sky Survey, and J, H, and K_s photometry from the Two Micron All Sky Survey. These color excess ratios were then used to construct the extinction law through the dusty material associated with Cep OB3b. The extinction law through Cep OB3b is intermediate between the R_V = 3.1 and R_V = 5 laws commonly used for the diffuse atomic interstellar medium and dense molecular clouds, respectively. The dependence of the extinction law on line-of-sight A_V is investigated, and we find that the extinction law becomes shallower for regions with A_V > 2.5 mag. We speculate that the intermediate dust law results from dust processing during the dispersal of the molecular cloud by the cluster.
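
    As a rough illustration of the color-excess-ratio measurement described above (a sketch, not the authors' pipeline; the function names and magnitudes are hypothetical): a color excess is the observed color of a background giant minus its intrinsic color, and ratios of excesses between band pairs trace the shape of the extinction law, which is what separates an R_V = 3.1-like curve from a flatter R_V = 5-like one.

        def color_excess(observed_color, intrinsic_color):
            """E(color): observed minus intrinsic color of a background star, in magnitudes."""
            return observed_color - intrinsic_color

        def color_excess_ratio(excess_1, excess_2):
            """Ratio of two color excesses, e.g. E(r - Ks) / E(J - Ks)."""
            return excess_1 / excess_2

        # Hypothetical K-giant photometry (magnitudes), for illustration only:
        e_r_ks = color_excess(observed_color=3.95, intrinsic_color=2.60)  # E(r - Ks)
        e_j_ks = color_excess(observed_color=1.15, intrinsic_color=0.70)  # E(J - Ks)
        print(color_excess_ratio(e_r_ks, e_j_ks))  # about 3.0 with these made-up numbers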

    Performance Evaluation of Adaptive Scientific Applications using TAU

    Fueled by increasing processor speeds and high speed interconnection networks, advances in high performance computer architectures have allowed the development of increasingly complex large scale parallel systems. For computational scientists, programming these systems efficiently is a challenging task. Understanding the performance of their parallel applications i

    Systematic Renormalization in Hamiltonian Light-Front Field Theory

    We develop a systematic method for computing a renormalized light-front field theory Hamiltonian that can lead to bound states that rapidly converge in an expansion in free-particle Fock-space sectors. To accomplish this without dropping any Fock sectors from the theory, and to regulate the Hamiltonian, we suppress the matrix elements of the Hamiltonian between free-particle Fock-space states that differ in free mass by more than a cutoff. The cutoff violates a number of physical principles of the theory, and thus the Hamiltonian is not just the canonical Hamiltonian with masses and couplings redefined by renormalization. Instead, the Hamiltonian must be allowed to contain all operators that are consistent with the unviolated physical principles of the theory. We show that if we require the Hamiltonian to produce cutoff-independent physical quantities and we require it to respect the unviolated physical principles of the theory, then its matrix elements are uniquely determined in terms of the fundamental parameters of the theory. This method is designed to be applied to QCD, but for simplicity, we illustrate our method by computing and analyzing second- and third-order matrix elements of the Hamiltonian in massless phi-cubed theory in six dimensions. Comment: 47 pages, 6 figures; improved referencing, minor presentation changes.
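
    For readers unfamiliar with the "free mass" that the cutoff acts on, the definition below is standard light-front kinematics rather than anything taken from this paper, and the smooth suppression factor is only a schematic form assumed here for illustration; the paper's actual regulator may differ.

        % Free invariant mass of an n-particle Fock state in light-front coordinates
        % (total transverse momentum set to zero, x_i = p_i^+ / P^+; m_i = 0 in the
        % massless theory treated here):
        M_{\mathrm{free}}^{2} \;=\; \sum_{i=1}^{n} \frac{p_{\perp i}^{2} + m_i^{2}}{x_i}

        % Schematic (assumed) suppression of matrix elements between Fock states a and b
        % whose free masses differ by more than a cutoff \Lambda:
        \langle a | H | b \rangle \;\to\;
        f\!\left( \frac{\left| M_{\mathrm{free},a}^{2} - M_{\mathrm{free},b}^{2} \right|}{\Lambda^{2}} \right)
        \langle a | H | b \rangle ,
        \qquad f(u) \to 0 \ \text{for} \ u \gg 1 .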

    Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses

    In this paper, we systematically investigate both synfire propagation and firing-rate propagation in feedforward neuronal networks coupled in an all-to-all fashion. In contrast to most earlier work, which considered only reliable synaptic connections, we mainly examine the effects of unreliable synapses on both types of neural activity propagation. We first study networks composed of purely excitatory neurons. Our results show that both the successful transmission probability and the excitatory synaptic strength largely influence the propagation of these two types of neural activity, and that suitable tuning of these synaptic parameters allows the network to support stable signal propagation. We also find that noise has significant but different impacts on the two types of propagation: additive Gaussian white noise tends to reduce the precision of synfire activity, whereas noise of appropriate intensity can enhance the performance of firing-rate propagation. Further simulations indicate that the propagation dynamics of the network is not simply determined by the average amount of neurotransmitter each neuron receives at a given instant, but is also largely influenced by the stochastic nature of neurotransmitter release. Second, we compare our results with those obtained in corresponding feedforward networks connected with reliable synapses but in a random coupling fashion, and confirm that the two network models show some differences. Finally, we study signal propagation in feedforward networks consisting of both excitatory and inhibitory neurons, and demonstrate that inhibition also plays an important role in signal propagation in these networks. Comment: 33 pages, 16 figures; Journal of Computational Neuroscience (published).
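
    As a minimal sketch of what "unreliable synapses" means computationally, the snippet below propagates spikes through one all-to-all feedforward layer with a Bernoulli release per presynaptic spike. It is an assumption-laden illustration, not the authors' model: the neuron rule (simple thresholding rather than a spiking neuron model), the layer size, and every parameter value are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        def propagate_layer(pre_spikes, weight, p_release, threshold):
            """Propagate a binary spike vector through an all-to-all feedforward
            connection whose synapses transmit each spike only with probability p_release.

            pre_spikes : (n_pre,) array of 0/1 spikes in the current time bin
            weight     : excitatory synaptic strength (identical for every connection)
            p_release  : probability that a synapse transmits a given presynaptic spike
            threshold  : summed input required for a postsynaptic neuron to fire
            """
            n_pre = pre_spikes.size
            n_post = n_pre  # layers of equal size, coupled all-to-all
            # Bernoulli release: each synapse transmits each spike independently.
            release = rng.random((n_post, n_pre)) < p_release
            drive = (release * pre_spikes).sum(axis=1) * weight
            return (drive >= threshold).astype(int)

        # Illustrative run: 100 neurons per layer, 10 layers.
        spikes = (rng.random(100) < 0.5).astype(int)  # roughly half of layer 1 fires
        for layer in range(10):
            spikes = propagate_layer(spikes, weight=0.1, p_release=0.5, threshold=2.0)
        print(spikes.sum())  # number of active neurons in the final layer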

    The Isotope Effect in d-Wave Superconductors

    Based on recently proposed anti-ferromagnetic spin fluctuation exchange models for d_{x^2-y^2} superconductors, we show that coupling to harmonic phonons cannot account for the observed isotope effect in the cuprate high-T_c materials, whereas coupling to strongly anharmonic multiple-well lattice tunneling modes can. Our results thus point towards a strongly enhanced effective electron-phonon coupling and a possible break-down of Migdal-Eliashberg theory in the cuprates. Comment: 12 pages + 2 figures, PostScript files, all uuencoded. Phys. Rev. Lett. (1995, to be published).
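
    The isotope effect at issue here is conventionally quantified by the isotope exponent alpha; the definition below is the textbook one, not anything specific to this paper.

        % Isotope-effect exponent: how T_c scales with the ionic mass M of the
        % substituted isotope (e.g. 16-O -> 18-O in the cuprates).
        T_c \;\propto\; M^{-\alpha},
        \qquad
        \alpha \;=\; -\,\frac{d \ln T_c}{d \ln M}.

        % Weak-coupling BCS pairing by harmonic phonons gives \alpha \simeq 1/2;
        % departures from this value are what "the observed isotope effect" tests.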