
    Computational Complexity of interacting electrons and fundamental limitations of Density Functional Theory

    One of the central problems in quantum mechanics is to determine the ground-state properties of a system of electrons interacting via the Coulomb potential. Since its introduction by Hohenberg, Kohn, and Sham, Density Functional Theory (DFT) has become the most widely used and successful method for simulating systems of interacting electrons, making their original work one of the most cited in physics. In this letter, we show that the field of computational complexity imposes fundamental limitations on DFT, as an efficient description of the associated universal functional would allow one to solve any problem in the class QMA (the quantum version of NP), and thus in particular any problem in NP, in polynomial time. This follows from the fact that finding the ground-state energy of the Hubbard model in an external magnetic field is a hard problem even for a quantum computer, while given the universal functional it can be computed efficiently using DFT. This provides a clear illustration of how the field of quantum computing is useful even if quantum computers are never built.
    Comment: 8 pages, 3 figures. v2: Version accepted at Nature Physics; differs significantly from v1 (including new title). Includes an extra appendix (not contained in the journal version) on the NP-completeness of Hartree-Fock, which is taken from v

    Parameterized Complexity of Asynchronous Border Minimization

    Microarrays are research tools used in gene discovery as well as disease and cancer diagnostics. Two prominent but challenging problems related to microarrays are the Border Minimization Problem (BMP) and the Border Minimization Problem with given placement (P-BMP). In this paper we investigate the parameterized complexity of natural variants of BMP and P-BMP under several natural parameters. We show that BMP and P-BMP are in FPT under the following two combinations of parameters: 1) the size of the alphabet (c), the maximum length of a sequence (string) in the input (l), and the number of rows of the microarray (r); and 2) the size of the alphabet and the border length (o). Furthermore, P-BMP is in FPT when parameterized by c and l. We complement our tractability results with corresponding hardness results.

    Neo-Aristotelian Naturalism and the Evolutionary Objection: Rethinking the Relevance of Empirical Science

    Neo-Aristotelian metaethical naturalism is a modern attempt at naturalizing ethics using ideas from Aristotle’s teleological metaphysics. Proponents of this view argue that moral virtue in human beings is an instance of natural goodness, a kind of goodness supposedly also found in the realm of non-human living things. Many critics question whether neo-Aristotelian naturalism is tenable in light of modern evolutionary biology. Two influential lines of objection have appealed to an evolutionary understanding of human nature and natural teleology to argue against this view. In this paper, I offer a reconstruction of these two seemingly different lines of objection as raising instances of the same dilemma, giving neo-Aristotelians a choice between contradicting our considered moral judgment and abandoning metaethical naturalism. I argue that resolving the dilemma requires showing a particular kind of continuity between the norms of moral virtue and norms that are necessary for understanding non-human living things. I also argue that in order to show such a continuity, neo-Aristotelians need to revise the relationship they adopt with empirical science and acknowledge that the latter is relevant to assessing their central commitments regarding living things. Finally, I argue that to move this debate forward, both neo-Aristotelians and their critics should pay attention to recent work on the concept of organism in evolutionary and developmental biology.

    A Multivariate Approach for Checking Resiliency in Access Control

    In recent years, several combinatorial problems were introduced in the area of access control. Typically, such problems deal with an authorization policy, seen as a relation UR ⊆ U × R, where (u, r) ∈ UR means that user u is authorized to access resource r. Li, Tripunitara and Wang (2009) introduced the Resiliency Checking Problem (RCP), in which we are given an authorization policy, a subset of resources P ⊆ R, as well as integers s ≥ 0, d ≥ 1 and t ≥ 1. It asks whether, upon removal of any set of at most s users, there still exist d pairwise disjoint sets of at most t users such that each set collectively has access to all resources in P. This problem possesses several parameters which appear to take small values in practice. We thus analyze the parameterized complexity of RCP with respect to these parameters, by considering all possible combinations of |P|, s, d, t. In all but one case, we are able to settle whether the problem is in FPT, XP, W[2]-hard, para-NP-hard or para-coNP-hard. We also consider the restricted case where s = 0, for which we determine the complexity for all possible combinations of the parameters.
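    The RCP definition above can be made concrete with a naive brute-force sketch (exponential time, for intuition only; all function and variable names here are our own illustration, not from the paper): it enumerates every removal of at most s users and then searches for d pairwise disjoint teams of at most t users that together cover P.

```python
from itertools import combinations

def has_d_disjoint_teams(users, P, authorized, d, t):
    """Check whether d pairwise disjoint sets of at most t users
    collectively cover all resources in P (naive backtracking)."""
    def search(remaining, teams_needed):
        if teams_needed == 0:
            return True
        for size in range(1, t + 1):
            for team in combinations(remaining, size):
                covered = set().union(*(authorized[u] for u in team))
                if P <= covered and search(
                        [u for u in remaining if u not in team],
                        teams_needed - 1):
                    return True
        return False
    return search(list(users), d)

def is_resilient(users, P, authorized, s, d, t):
    """RCP: after removing ANY set of at most s users, d disjoint
    teams of size <= t must still cover P."""
    users = list(users)
    for k in range(s + 1):
        for removed in combinations(users, k):
            rest = [u for u in users if u not in removed]
            if not has_d_disjoint_teams(rest, P, authorized, d, t):
                return False
    return True
```

    The doubly exponential enumeration is exactly why the paper's question of which parameter combinations among |P|, s, d, t admit FPT algorithms is interesting.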

    Consumer credit in comparative perspective

    We review the literature in sociology and related fields on the fast global growth of consumer credit and debt and the possible explanations for this expansion. We describe the ways people interact with the strongly segmented consumer credit system around the world—more specifically, the way they access credit and the way they are held accountable for their debt. We then report on research on two areas in which consumer credit is consequential: its effects on social relations and on physical and mental health. Throughout the article, we point out national variations and discuss explanations for these differences. We conclude with a brief discussion of the future tasks and challenges of comparative research on consumer credit.

    Distribution of Capillary Transit Times in Isolated Lungs of Oxygen-Tolerant Rats

    Rats pre-exposed to 85% O2 for 5–7 days tolerate the otherwise lethal effects of 100% O2. The objective was to evaluate the effect of rat exposure to 85% O2 for 7 days on lung capillary mean transit time (t̄c) and on the distribution of capillary transit times (hc(t)). This information is important for subsequent evaluation of the effect of this hyperoxia model on the redox metabolic functions of the pulmonary capillary endothelium. The venous concentration vs. time outflow curves of fluorescein isothiocyanate-labeled dextran (FITC-dex), an intravascular indicator, and coenzyme Q1 hydroquinone (CoQ1H2), a compound which rapidly equilibrates between blood and tissue on passage through the pulmonary circulation, were measured following their bolus injection into the pulmonary artery of isolated perfused lungs from rats exposed to room air (normoxic) or 85% O2 for 7 days (hyperoxic). The moments (mean transit time and variance) of the measured FITC-dex and CoQ1H2 outflow curves were determined for each lung, and were then used in a mathematical model [Audi et al. J. Appl. Physiol. 77: 332–351, 1994] to estimate t̄c and the relative dispersion (RDc) of hc(t). Data analysis reveals that exposure to hyperoxia decreases lung t̄c by 42% and increases RDc, a measure of hc(t) heterogeneity, by 40%.
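    The moment analysis described above can be sketched for a sampled outflow curve. This is the standard indicator-dilution moment calculation (mean transit time as the normalized first moment, relative dispersion as the coefficient of variation), not the full model of Audi et al.; function names are our own:

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal integration, written out to avoid NumPy version differences
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def transit_time_moments(t, c):
    """Mean transit time and relative dispersion (RD) of an
    indicator outflow curve c(t):
        mean = [integral of t*c dt] / [integral of c dt]
        var  = [integral of (t - mean)^2 * c dt] / [integral of c dt]
        RD   = sqrt(var) / mean
    """
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    area = _trapz(c, t)                      # zeroth moment (area under curve)
    mean = _trapz(t * c, t) / area           # normalized first moment
    var = _trapz((t - mean) ** 2 * c, t) / area
    return mean, np.sqrt(var) / mean
```

    Applied to the measured FITC-dex and CoQ1H2 curves, such moments are the inputs from which model-based estimates like t̄c and RDc are derived.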

    Search for the standard model Higgs boson at LEP


    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. Coordinate resolution was measured for all types of chambers and falls in the range 47 microns to 243 microns. The efficiencies for local charged-track triggers, and for hit and segment reconstruction, were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Organizational factors and depression management in community-based primary care settings

    Background: Evidence-based quality improvement models for depression have not been fully implemented in routine primary care settings. To date, few studies have examined the organizational factors associated with depression management in real-world primary care practice. To successfully implement quality improvement models for depression, there must be a better understanding of the relevant organizational structure and processes of the primary care setting. The objective of this study is to describe these organizational features of routine primary care practice, and the organization of depression care, using survey questions derived from an evidence-based framework. Methods: We used this framework to implement a survey of 27 practices comprising 49 unique offices within a large primary care practice network in western Pennsylvania. Survey questions addressed practice structure (e.g., human resources, leadership, information technology (IT) infrastructure, and external incentives) and process features (e.g., staff performance, degree of integrated depression care, and IT performance). Results: The survey demonstrated substantial variation across the practice network in organizational factors pertinent to the implementation of evidence-based depression management. Notably, quality improvement capability and IT infrastructure were widespread, but their specific application to depression care differed between practices, as did coordination and communication tasks surrounding depression treatment. Conclusions: The primary care practices in the network that we surveyed are at differing stages in their organization and implementation of evidence-based depression management. Practical surveys such as this may serve to better direct implementation of these quality improvement strategies for depression by improving understanding of the organizational barriers and facilitators that exist within both practices and practice networks. In addition, survey information can inform efforts of individual primary care practices in customizing intervention strategies to improve depression management.

    How the oxygen tolerance of a [NiFe]-hydrogenase depends on quaternary structure

    ‘Oxygen-tolerant’ [NiFe]-hydrogenases can catalyze H(2) oxidation under aerobic conditions, avoiding oxygenation and destruction of the active site. In one mechanism accounting for this special property, membrane-bound [NiFe]-hydrogenases accommodate a pool of electrons that allows an O(2) molecule attacking the active site to be converted rapidly to harmless water. An important advantage may stem from having a dimeric or higher-order quaternary structure in which the electron-transfer relay chain of one partner is electronically coupled to that in the other. Hydrogenase-1 from E. coli has a dimeric structure in which the distal [4Fe-4S] clusters in each monomer are located approximately 12 Å apart, a distance conducive to fast electron tunneling. Such an arrangement can ensure that electrons from H(2) oxidation released at the active site of one partner are immediately transferred to its counterpart when an O(2) molecule attacks. This paper addresses the role of long-range, inter-domain electron transfer in the mechanism of O(2)-tolerance by comparing the properties of monomeric and dimeric forms of Hydrogenase-1. The results reveal a further interesting advantage that quaternary structure affords to proteins.