1,186 research outputs found

    Quantum Holographic Encoding in a Two-dimensional Electron Gas

    The advent of bottom-up atomic manipulation heralded a new horizon for attainable information density, as it allowed a bit of information to be represented by a single atom. The discrete spacing between atoms in condensed matter has thus set a rigid limit on the maximum possible information density. While modern technologies are still far from this scale, all theoretical downscaling of devices terminates at this spatial limit. Here, however, we break this barrier with electronic quantum encoding scaled to subatomic densities. We use atomic manipulation to first construct open nanostructures--"molecular holograms"--which in turn concentrate information into a medium free of lattice constraints: the quantum states of a two-dimensional degenerate Fermi gas of electrons. The information embedded in the holograms is transcoded at even smaller length scales into an atomically uniform area of a copper surface, where it is densely projected into both two spatial degrees of freedom and a third holographic dimension mapped to energy. In analogy to optical volume holography, this requires precise amplitude and phase engineering of electron wavefunctions to assemble pages of information volumetrically. This data is read out by mapping the energy-resolved electron density of states with a scanning tunnelling microscope. As the projection and readout are both extremely near-field, and because we use native quantum states rather than an external beam, we are not limited by lensing or collimation and can create electronically projected objects with features as small as ~0.3 nm. These techniques reach unprecedented densities exceeding 20 bits/nm² and place tens of bits into a single fermionic state. Comment: Published online 25 January 2009 in Nature Nanotechnology; 12 page manuscript (including 4 figures) + 2 page supplement (including 1 figure); supplementary movie available at http://mota.stanford.ed
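    A quick back-of-envelope check of the quoted density (illustrative arithmetic only; the ~0.256 nm Cu(111) nearest-neighbour spacing is a textbook value, not a figure from the abstract):

    $$\frac{1}{20\ \mathrm{bits/nm^2}} = 0.05\ \mathrm{nm^2\ per\ bit}, \qquad A_{\mathrm{Cu(111)\ atom}} \approx \frac{\sqrt{3}}{2}\,(0.256\ \mathrm{nm})^2 \approx 0.057\ \mathrm{nm^2}$$

    That is roughly 17.6 surface atoms per nm², so a density above 20 bits/nm² corresponds to slightly more than one bit per surface atom site, which is what makes the encoding "subatomic".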

    The Hubble Constant

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc and typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods. Comment: Extensively revised and updated since the 2007 version: accepted by Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 200
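    One way to quantify the "mild discrepancy" is to combine the quoted uncertainties in quadrature; taking representative central values from the ranges above (an illustration, not numbers from the review):

    $$\frac{|73 - 67.5|}{\sqrt{2.5^2 + 1.5^2}} \approx \frac{5.5\ \mathrm{km/s/Mpc}}{2.9\ \mathrm{km/s/Mpc}} \approx 1.9\sigma$$

    With the larger object-based errors the tension stays below 2σ; it only becomes compelling if the object-based systematics can be shown to be smaller than quoted.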

    The role of Comprehension in Requirements and Implications for Use Case Descriptions

    Within requirements engineering it is generally accepted that in writing specifications (or indeed any requirements phase document), one attempts to produce an artefact which will be simple to comprehend for the user. That is, whether the document is intended for customers to validate requirements, or engineers to understand what the design must deliver, comprehension is an important goal for the author. Indeed, advice on producing ‘readable’ or ‘understandable’ documents is often included in courses on requirements engineering. However, few researchers, particularly within the software engineering domain, have attempted either to define or to understand the nature of comprehension and its implications for guidance on the production of quality requirements. Therefore, this paper thoroughly examines the nature of textual comprehension, drawing heavily on research in discourse processing, and suggests some implications for requirements (and other) software documentation. In essence, we find that the guidance on writing requirements, often prevalent within software engineering, may be based upon assumptions which oversimplify the nature of comprehension. Hence, the paper examines guidelines which have been proposed, in this case for use case descriptions, and the extent to which they agree with discourse process theory, before suggesting refinements to the guidelines which attempt to utilise lessons learned from our richer understanding of the underlying discourse process theory. For example, we suggest subtly different sets of writing guidelines for the different tasks of requirements, specification and design.

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table, final version published in European Physical Journal
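    For reference, the dijet angular variable chi used in such measurements is conventionally built from the rapidities of the two leading b-jets (standard definitions, assumed here rather than quoted from the abstract):

    $$\chi = e^{|y_1 - y_2|}, \qquad m_{jj} = \sqrt{(E_1 + E_2)^2 - |\vec{p}_1 + \vec{p}_2|^2}$$

    The dσ/dχ distribution is roughly flat for t-channel (Rutherford-like) QCD scattering, which is what makes it a useful shape variable for comparing with the NLO predictions in the two m_jj regions.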

    Observation of associated near-side and away-side long-range correlations in √sNN=5.02 TeV proton-lead collisions with the ATLAS detector

    Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) are measured in √s_NN = 5.02 TeV p+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 μb^-1 of data as a function of transverse momentum (pT) and the transverse energy (ΣE_T^Pb) summed over 3.1 < η < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < |Δη| < 5) “near-side” (Δϕ ∼ 0) correlation that grows rapidly with increasing ΣE_T^Pb. A long-range “away-side” (Δϕ ∼ π) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small ΣE_T^Pb, is found to match the near-side correlation in magnitude, shape (in Δη and Δϕ) and ΣE_T^Pb dependence. The resultant Δϕ correlation is approximately symmetric about π/2, and is consistent with a dominant cos(2Δϕ) modulation for all ΣE_T^Pb ranges and particle pT.
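    The "dominant cos(2Δϕ) modulation" refers to the standard Fourier decomposition of the long-range correlation function (conventional notation, assumed here rather than taken from the abstract):

    $$C(\Delta\phi) \propto 1 + 2\sum_{n} v_{n,n}\cos(n\Delta\phi), \qquad v_n = \sqrt{v_{n,n}}$$

    where the second relation holds under the usual factorization assumption that the two-particle harmonics are products of single-particle anisotropies.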

    Functional redundancy between Apc and Apc2 regulates tissue homeostasis and prevents tumorigenesis in murine mammary epithelium

    Get PDF
    Aberrant Wnt signaling within breast cancer is associated with poor prognosis, but regulation of this pathway in breast tissue remains poorly understood and the consequences of immediate or long-term dysregulation remain elusive. The exact contributions of the Wnt-regulating proteins adenomatous polyposis coli (APC) and APC2 to the pathogenesis of human breast cancer are ill-defined, but our analysis of publicly available array data sets indicates that tumors with concomitant low expression of both proteins occur more frequently in the ‘triple negative’ phenotype, a subtype of breast cancer with particularly poor prognosis. We have used mouse transgenics to delete Apc and/or Apc2 from mouse mammary epithelium to elucidate the significance of these proteins in mammary homeostasis and delineate their influences on Wnt signaling and tumorigenesis. Loss of either protein alone failed to affect Wnt signaling levels or tissue homeostasis. Strikingly, concomitant loss led to local disruption of β-catenin status, disruption of epithelial integrity, cohesion and polarity, increased cell division and a distinctive form of ductal hyperplasia with ‘squamoid’ ghost cell nodules in young animals. Upon aging, the development of Wnt-activated mammary carcinomas with squamous differentiation was accompanied by significantly reduced survival. This novel Wnt-driven mammary tumor model highlights the importance of functional redundancies existing between the Apc proteins both in normal homeostasis and in tumorigenesis.

    Search for R-parity-violating supersymmetry in events with four or more leptons in sqrt(s) =7 TeV pp collisions with the ATLAS detector

    A search for new phenomena in final states with four or more leptons (electrons or muons) is presented. The analysis is based on 4.7 fb^-1 of sqrt(s) = 7 TeV proton-proton collisions delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in two signal regions: one that requires moderate values of missing transverse momentum and another that requires large effective mass. The results are interpreted in a simplified model of R-parity-violating supersymmetry in which a 95% CL exclusion region is set for charged wino masses up to 540 GeV. In an R-parity-violating MSUGRA/CMSSM model, values of m_1/2 up to 820 GeV are excluded for 10 < tan β < 40.
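    The "large effective mass" selection mentioned above conventionally refers to the scalar sum of the hard objects in the event (schematic definition; the exact ATLAS object and threshold choices are specified in the paper, not here):

    $$m_{\mathrm{eff}} = \sum_{\ell} p_T^{\ell} + \sum_{\mathrm{jets}} p_T^{\mathrm{jet}} + E_T^{\mathrm{miss}}$$

    Requiring large m_eff suppresses Standard Model backgrounds, since pair production of heavy sparticles typically deposits more total transverse momentum in the detector than the dominant diboson and top processes.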