
    Combinatorial species and graph enumeration

    In enumerative combinatorics, it is often a goal to enumerate both labeled and unlabeled structures of a given type. The theory of combinatorial species is a novel toolset which provides a rigorous foundation for dealing with the distinction between labeled and unlabeled structures. The cycle index series of a species encodes the labeled and unlabeled enumerative data of that species. Moreover, by using species operations, we are able to solve for the cycle index series of one species in terms of the known cycle index series of other species. Section 3 is an exposition of species theory and Section 4 is an enumeration of point-determining bipartite graphs using this toolset. In Section 5, we extend a result about point-determining graphs to a similar result for point-determining $\Phi$-graphs, where $\Phi$ is a class of graphs with certain properties. Finally, Appendix A is an exposition of species computation using the software Sage [9], and Appendix B uses Sage to calculate the cycle index series of point-determining bipartite graphs. Comment: 39 pages, 16 figures, senior comprehensive project at Carleton College
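    For reference, a minimal statement of the standard definitions behind this abstract (textbook species theory, not reproduced from the paper itself): for a species $F$, the cycle index series is
    \[
    Z_F(p_1, p_2, p_3, \ldots) \;=\; \sum_{n \ge 0} \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma]\; p_1^{\sigma_1} p_2^{\sigma_2} \cdots p_n^{\sigma_n},
    \]
    where $\operatorname{fix} F[\sigma]$ counts the $F$-structures on $\{1,\ldots,n\}$ fixed by the permutation $\sigma$ and $\sigma_i$ is the number of $i$-cycles of $\sigma$. The labeled and unlabeled counting series mentioned above are recovered as the specializations $F(x) = Z_F(x, 0, 0, \ldots)$ and $\widetilde{F}(x) = Z_F(x, x^2, x^3, \ldots)$.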

    The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis

    The Paulsen problem is a basic open problem in operator theory: given vectors $u_1, \ldots, u_n \in \mathbb{R}^d$ that $\epsilon$-nearly satisfy the Parseval condition and the equal norm condition, are they close to a set of vectors $v_1, \ldots, v_n \in \mathbb{R}^d$ that exactly satisfy the Parseval condition and the equal norm condition? Given $u_1, \ldots, u_n$, the squared distance (to the set of exact solutions) is defined as $\inf_{v} \sum_{i=1}^n \| u_i - v_i \|_2^2$, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any $\epsilon$-nearly solution is at most $O(\mathrm{poly}(d,n,\epsilon))$ and that there are $\epsilon$-nearly solutions with squared distance at least $\Omega(d\epsilon)$. The fundamental open question is whether the squared distance can be independent of the number of vectors $n$. We answer this question affirmatively by proving that the squared distance of any $\epsilon$-nearly solution is $O(d^{13/2} \epsilon)$. Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any $\epsilon$-nearly solution is $O(d^2 n \epsilon)$. Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster and the squared distance of an $\epsilon$-nearly solution is $O(d^{5/2} \epsilon)$ when $n$ is large enough and $\epsilon$ is small enough. To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. Comment: Added Subsection 1.4; incorporated comments and fixed typos; minor changes in various places
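    For context, the two conditions referenced in the abstract are commonly stated as follows (one standard formalization; the precise meaning of "$\epsilon$-nearly" varies slightly between papers): the vectors $u_1, \ldots, u_n \in \mathbb{R}^d$ satisfy the Parseval condition and the equal norm condition exactly when
    \[
    \sum_{i=1}^n u_i u_i^{\top} = I_d \qquad \text{and} \qquad \| u_i \|_2^2 = \frac{d}{n} \quad \text{for all } i,
    \]
    and an $\epsilon$-nearly solution satisfies both conditions only up to multiplicative $(1 \pm \epsilon)$ errors.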

    Grid services for the MAGIC experiment

    Exploring signals from outer space has become an observational science under fast expansion. On the basis of its advanced technology, the MAGIC telescope is the natural building block for the first large-scale ground-based high energy gamma-ray observatory. The low energy threshold for gamma-rays, together with different background sources, leads to a considerable amount of data. The analysis will be done in different institutes spread over Europe. MAGIC therefore offers the opportunity to use Grid technology to set up a distributed computational and data-intensive analysis system with currently available technology. Benefits of Grid computing for the MAGIC telescope are presented. Comment: 5 pages, 1 figure, to be published in the Proceedings of the 6th International Symposium ''Frontiers of Fundamental and Computational Physics'' (FFP6), Udine (Italy), Sep. 26-29, 200

    Private Multiplicative Weights Beyond Linear Queries

    A wide variety of fundamental data analyses in machine learning, such as linear and logistic regression, require minimizing a convex function defined by the data. Since the data may contain sensitive information about individuals, and these analyses can leak that sensitive information, it is important to be able to solve convex minimization in a privacy-preserving way. A series of recent results shows how to accurately solve a single convex minimization problem in a differentially private manner. However, the same data is often analyzed repeatedly, and little is known about solving multiple convex minimization problems with differential privacy. For simpler data analyses, such as linear queries, there are remarkable differentially private algorithms such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS 2010) that accurately answer exponentially many distinct queries. In this work, we extend these results to the case of convex minimization and show how to give accurate and differentially private solutions to *exponentially many* convex minimization problems on a sensitive dataset.
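    To make the linear-query baseline concrete, here is a minimal sketch of the private multiplicative weights idea of Hardt and Rothblum for linear (counting) queries over a finite data universe. It is not taken from the paper above; the noise scale, threshold rule, and learning rate are illustrative assumptions rather than a calibrated privacy analysis.

```python
import numpy as np

def private_multiplicative_weights(hist, queries, eps, threshold, max_updates, rng=None):
    """Toy sketch of the private multiplicative weights mechanism for
    linear (counting) queries over a finite data universe.

    `hist` is the true dataset as a normalized histogram (sums to 1) and
    `queries` is an iterable of 0/1 NumPy vectors over the same universe.
    Noise scale, threshold, and learning rate are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.full(hist.shape[0], 1.0 / hist.shape[0])   # synthetic distribution
    answers, updates = [], 0

    for q in queries:
        noisy = q @ hist + rng.laplace(scale=1.0 / eps)   # noisy true answer
        synth = q @ x                                     # answer from synthetic data
        if abs(noisy - synth) <= threshold or updates >= max_updates:
            answers.append(synth)        # "lazy" round: synthetic data already agrees
            continue
        # Update round: multiplicatively reweight the universe elements the
        # query counts, in the direction indicated by the noisy answer.
        sign = 1.0 if noisy > synth else -1.0
        x *= np.exp(0.5 * sign * q)      # learning rate 0.5 chosen for illustration
        x /= x.sum()                     # renormalize to a distribution
        answers.append(noisy)
        updates += 1
    return np.array(answers)
```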

    Paradoxes in Fair Computer-Aided Decision Making

    Computer-aided decision making--where a human decision-maker is aided by a computational classifier in making a decision--is becoming increasingly prevalent. For instance, judges in at least nine states make use of algorithmic tools meant to determine "recidivism risk scores" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups (e.g., races) of people. Our main result shows that for "non-trivial" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. We further provide a complete characterization of situations where fair computer-aided decision making is possible.

    Embedding Principal Component Analysis for Data Reduction in Structural Health Monitoring on Low-Cost IoT Gateways

    Principal component analysis (PCA) is a powerful data reduction method for Structural Health Monitoring. However, its computational cost and data memory footprint pose a significant challenge when PCA has to run on limited-capability embedded platforms in low-cost IoT gateways. This paper presents a memory-efficient parallel implementation of the streaming History PCA algorithm. On our dataset, it achieves a 10x compression factor and 59x memory reduction with less than 0.15 dB degradation in the reconstructed signal-to-noise ratio (RSNR) compared to standard PCA. Moreover, the algorithm benefits from parallelization on multiple cores, achieving a maximum speedup of 4.8x on the Samsung ARTIK 710.
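    As a point of reference for the compression factor and RSNR numbers quoted above, the following NumPy sketch shows ordinary batch PCA-based data reduction and the reconstructed signal-to-noise ratio in dB. It is not the paper's streaming History PCA implementation; the array shapes, the retained component count, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def pca_compress(X, k):
    """Project samples in X (n_samples x n_features) onto the top-k
    principal components; return the reduced coefficients together with
    everything needed to reconstruct the original signal."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                # k x n_features
    coeffs = Xc @ components.T         # n_samples x k (the compressed data)
    return coeffs, components, mean

def rsnr_db(X, X_rec):
    """Reconstructed signal-to-noise ratio in dB."""
    return 10.0 * np.log10(np.sum(X ** 2) / np.sum((X - X_rec) ** 2))

# Toy usage: 1000 "sensor frames" of 256 samples each, compressed roughly
# 10x by retaining k = 25 components.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 256)) @ rng.standard_normal((256, 256)) * 0.1
coeffs, components, mean = pca_compress(X, k=25)
X_rec = coeffs @ components + mean     # approximate reconstruction
print(f"RSNR: {rsnr_db(X, X_rec):.2f} dB")
```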

    No measure for culture? Value in the new economy

    This paper explores articulations of the value of investment in culture and the arts through a critical discourse analysis of policy documents, reports and academic commentary since 1997. It argues that in this period, discourses around the value of culture have moved from a focus on the direct economic contributions of the culture industries to their indirect economic benefits. These indirect benefits are discussed here under three main headings: creativity and innovation, employability, and social inclusion. These are in turn analysed in terms of three forms of capital: human, social and cultural. The paper concludes with an analysis of this discursive shift through the lens of autonomist Marxist concerns with the labour of social reproduction. It is our argument that, in contemporary policy discourses on culture and the arts, the government in the UK is increasingly concerned with the use of culture to form the social in the image of capital. As such, we must turn our attention beyond the walls of the factory in order to understand the contemporary capitalist production of value and resistance to it.

    Foundation and empire: a critique of Hardt and Negri

    In this article, Thompson complements recent critiques of Hardt and Negri's Empire (see Finn Bowring in Capital and Class, no. 83), using the tools of labour process theory to critique the political economy of Empire and to note its unfortunate similarities to conventional theories of the knowledge economy.