
    On large primitive subsets of {1,2,...,2n}

    A subset of {1,2,...,2n} is said to be primitive if it does not contain any pair of elements (u, v) such that u is a divisor of v. Let D(n) denote the number of primitive subsets of {1,2,...,2n} with n elements. Numerical evidence suggests that D(n) is roughly (1.32)^n. We show that for sufficiently large n, (1.303...)^n < D(n) < (1.408...)^n. Comment: 5 pages
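
    For small n, the quantity D(n) can be computed directly from the definition. Below is a minimal brute-force sketch in Python (the helper names is_primitive and D are ours, not the paper's); it enumerates all n-element subsets of {1,...,2n}, so it is feasible only for very small n.

        from itertools import combinations

        def is_primitive(subset):
            # Primitive: no element divides another (distinct) element of the subset.
            s = sorted(subset)
            return not any(s[j] % s[i] == 0
                           for i in range(len(s)) for j in range(i + 1, len(s)))

        def D(n):
            # Count primitive n-element subsets of {1, 2, ..., 2n} by brute force.
            return sum(1 for c in combinations(range(1, 2 * n + 1), n)
                       if is_primitive(c))

        for n in range(1, 7):
            # n-th root of D(n); the numerical evidence cited above suggests
            # this tends to roughly 1.32 as n grows.
            print(n, D(n), D(n) ** (1 / n))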

    On The Discrepancy of Quasi-progressions

    The 2-colouring discrepancy of arithmetic progressions is a well-known problem in combinatorial discrepancy theory. In 1964, Roth proved that if each integer from 0 to N is coloured red or blue, there is some arithmetic progression in which the number of reds and the number of blues differ by at least (1/20) N^{1/4}. In 1996, Matousek and Spencer showed that this estimate is sharp up to a constant. The analogous question for homogeneous arithmetic progressions (i.e., the ones containing 0) was raised by Erdos in the 1930s, and it is still not known whether the discrepancy is unbounded. However, it is easy to construct partial colourings with density arbitrarily close to 1 such that all homogeneous arithmetic progressions have bounded discrepancy. A related problem concerns the discrepancy of quasi-progressions. A quasi-progression consists of successive multiples of a real number, with each multiple rounded down to the nearest integer. In 1986, Beck showed that given any 2-colouring, the quasi-progressions corresponding to almost all real numbers in (1, \infty) have discrepancy at least log* N, the inverse of the tower function. We improve the lower bound to (log N)^{1/4 - o(1)}, and also show that there is some quasi-progression with discrepancy at least (1/50) N^{1/6}. Our results remain valid even if the 2-colouring is replaced by a partial colouring of positive density. Comment: 15 pages
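
    One way to make the discrepancy of a quasi-progression concrete: given a ±1 colouring chi of {0,...,N} and a real alpha > 1, walk along floor(alpha), floor(2*alpha), ... and track the largest prefix imbalance. The sketch below is one standard formalisation (the names are ours, and the paper's precise setup may differ in details such as prefixes versus windows):

        import math

        def quasi_progression_discrepancy(chi, alpha, N):
            # Max prefix imbalance of the colouring chi (a list of +1/-1 values,
            # indexed 0..N) along floor(alpha), floor(2*alpha), ... up to N.
            total, worst, i = 0, 0, 1
            while True:
                term = math.floor(i * alpha)
                if term > N:
                    break
                total += chi[term]
                worst = max(worst, abs(total))
                i += 1
            return worst

        # Example: colour n red (+1) or blue (-1) by the parity of floor(n / 3).
        N = 10_000
        chi = [1 if (n // 3) % 2 == 0 else -1 for n in range(N + 1)]
        print(quasi_progression_discrepancy(chi, math.sqrt(2), N))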

    Eleven Euclidean Distances are Enough

    The well-known three distance theorem states that there are at most three distinct gaps between consecutive elements in the set of fractional parts of the first n multiples of any real number. We generalise this theorem to higher dimensions under a suitable formulation. The three distance theorem can be thought of as a statement about champions in a tournament. The players in the tournament are edges between pairs of multiples of the given real number, two edges play each other if and only if they overlap, and an edge loses only against edges of shorter length that it plays against. Defeated edges may play (and defeat) other overlapping edges. According to the three distance theorem, there are at most three distinct values for the lengths of undefeated edges. In the plane and in higher dimensions, we consider fractional parts of multiples of a vector of real numbers, two edges play if their projections along any axis overlap, and champions are defined as before. In the plane, there are at most 11 values for the lengths of undefeated edges. Comment: 8 pages, 3 figures
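
    The classical statement is easy to check numerically: the fractional parts of alpha, 2*alpha, ..., n*alpha cut the unit circle into n arcs, and at most three distinct arc lengths occur. A small sketch (names ours; gap lengths are rounded to merge floating-point duplicates):

        import math

        def gap_lengths(alpha, n):
            # Distinct gaps between consecutive points of frac(alpha), ...,
            # frac(n*alpha) on the unit circle, wrap-around gap included.
            pts = sorted(math.modf(i * alpha)[0] for i in range(1, n + 1))
            gaps = [b - a for a, b in zip(pts, pts[1:])] + [1 - pts[-1] + pts[0]]
            return sorted(set(round(g, 12) for g in gaps))

        # Three distance theorem: at most three distinct values, for any alpha and n.
        print(gap_lengths(math.sqrt(2), 10))
        print(gap_lengths(math.pi, 25))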

    ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural Projections

    Deep neural networks have become ubiquitous for applications related to visual recognition and language understanding tasks. However, it is often prohibitive to use typical neural networks on devices like mobile phones or smart watches, since the model sizes are huge and cannot fit in the limited memory available on such devices. While these devices could make use of machine learning models running on high-performance data centers with CPUs or GPUs, this is not feasible for many applications because data can be privacy sensitive and inference needs to be performed directly "on" device. We introduce a new architecture for training compact neural networks using a joint optimization framework. At its core lies a novel objective that jointly trains two different types of networks: a full trainer neural network (using existing architectures like Feed-forward NNs or LSTM RNNs) combined with a simpler "projection" network that leverages random projections to transform inputs or intermediate representations into bits. The simpler network encodes lightweight and efficient-to-compute operations in bit space with a low memory footprint. The two networks are trained jointly using backpropagation, where the projection network learns from the full network, similar to apprenticeship learning. Once trained, the smaller network can be used directly for inference at low memory and computation cost. We demonstrate the effectiveness of the new approach at significantly shrinking the memory requirements of different types of neural networks while preserving good accuracy on visual recognition and text classification tasks. We also study the question "how many neural bits are required to solve a given task?" using the new framework, and show empirical results contrasting model predictive capacity (in bits) versus accuracy on several datasets.
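
    To make the "projection" idea concrete, here is a minimal numpy sketch of the core transformation: fixed random projections followed by a sign threshold yield bit features, on which a very small model operates. All names here are ours, and the joint trainer/projection training loop from the paper is omitted; this only illustrates the representation.

        import numpy as np

        rng = np.random.default_rng(0)

        def project_to_bits(x, R):
            # LSH-style projection: sign pattern of random projections, as 0/1 bits.
            return (x @ R.T > 0).astype(np.float32)

        d, n_bits, n_classes = 128, 64, 10
        R = rng.standard_normal((n_bits, d))   # fixed and never trained; only W below
                                               # needs to be stored on device

        # The "projection network" here is just a softmax layer over the bit
        # features; in joint training it would also learn to mimic the full
        # trainer network's outputs.
        W = 0.01 * rng.standard_normal((n_bits, n_classes))

        def projection_net(x):
            logits = project_to_bits(x, R) @ W
            e = np.exp(logits - logits.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        x = rng.standard_normal((5, d))
        print(projection_net(x).shape)         # (5, 10)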

    HI absorption spectra for Supernova Remnants in the VGPS survey

    We consider the set of supernova remnants (SNRs) from Green's SNR catalog that are found in the VLA Galactic Plane Survey (VGPS). For these SNRs, we extract and analyse HI absorption spectra in a uniform way and construct a catalogue of absorption spectra and distance determinations. Comment: 4 pages, 1 figure, conference proceedings paper for the meeting: Supernova Remnants: An Odyssey in Space after Stellar Death

    Ramsey Functions for Generalized Progressions

    Given positive integers n and k, a k-term semi-progression of scope m is a sequence (x_1, x_2, ..., x_k) such that x_{j+1} - x_j ∈ {d, 2d, ..., md}, 1 ≤ j ≤ k-1, for some positive integer d. Thus an arithmetic progression is a semi-progression of scope 1. Let S_m(k) denote the least integer for which every 2-coloring of {1, 2, ..., S_m(k)} yields a monochromatic k-term semi-progression of scope m. We obtain an exponential lower bound on S_m(k) for all m = O(1). Our approach also yields a marginal improvement on the best known lower bound for the analogous Ramsey function for quasi-progressions, which are sequences whose successive differences lie in a small interval. Comment: 6 pages
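
    The definition is easy to operationalise. A small checker (names ours): a sequence is a semi-progression of scope m exactly when some positive integer d divides every successive difference and no difference exceeds m*d, so it suffices to try the divisors of the smallest difference.

        def is_semi_progression(xs, m):
            # True if all successive differences of xs lie in {d, 2d, ..., m*d}
            # for some single positive integer d.
            diffs = [b - a for a, b in zip(xs, xs[1:])]
            if any(t <= 0 for t in diffs):
                return False
            lo = min(diffs)
            for d in range(1, lo + 1):          # d must divide lo, so d <= lo
                if lo % d == 0 and all(t % d == 0 and t <= m * d for t in diffs):
                    return True
            return False

        print(is_semi_progression([1, 3, 7, 9], 3))   # True, with d = 2
        print(is_semi_progression([1, 2, 9], 3))      # False: no d works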

    Farmers’ Rights in International Law: Multiple Regimes and Implications for Conceptualisation

    There are at least three different but complementary international legal regimes that deal directly or indirectly with farmers’ rights. Among these multilateral treaties, the FAO Treaty expressly addresses farmers’ rights, while treaties such as the Convention on Biological Diversity and the UPOV Convention refer to farmers’ rights indirectly. Against this background, this paper examines these legal regimes.

    On Permutations Avoiding Short Progressions

    We improve the lower bound on the number of permutations of {1,2,...,n} in which no 3-term arithmetic progression occurs as a subsequence, and derive lower bounds on the upper and lower densities of subsets of the positive integers that can be permuted to avoid 3-term and 4-term APs. We also show that any permutation of the positive integers must contain a 3-term AP with odd common difference as a subsequence, and construct a permutation of the positive integers that does not contain any 4-term AP with odd common difference. Comment: 4 pages
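
    For small cases the defining property can be tested directly: a permutation contains a 3-term AP as a subsequence exactly when some positions i < j < k satisfy 2*perm[j] = perm[i] + perm[k] (entries of a permutation are distinct, so the progression is automatically non-trivial). A brute-force sketch (names ours):

        from itertools import combinations

        def has_3term_ap_subsequence(perm):
            # True if some i < j < k has perm[i], perm[j], perm[k] in arithmetic
            # progression (increasing or decreasing common difference).
            return any(2 * perm[j] == perm[i] + perm[k]
                       for i, j, k in combinations(range(len(perm)), 3))

        print(has_3term_ap_subsequence([1, 2, 3, 4]))   # True: (1, 2, 3)
        print(has_3term_ap_subsequence([2, 1, 4, 3]))   # False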

    Neural Graph Machines: Learning Neural Networks Using Graphs

    Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely "Neural Graph Machines", that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, and (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks. Comment: 9 pages
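
    A minimal numpy sketch of the kind of objective described: a supervised loss on the labelled nodes plus a graph term that penalises distance between neighbouring nodes' hidden representations. This is our own simplification (the paper distinguishes labelled-labelled, labelled-unlabelled and unlabelled-unlabelled edge terms, and all names below are ours); note the graph term costs time linear in the number of edges.

        import numpy as np

        def ngm_loss(h, logits, labels, edges, weights, alpha=0.1):
            # h: (n, d) hidden states; logits: (n, c); labels: {node: class};
            # edges: list of (u, v) pairs with a matching list of edge weights.
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            # (a) supervised cross-entropy on the labelled nodes
            sup = -np.mean([np.log(p[u, y]) for u, y in labels.items()])
            # (b) graph regulariser: neighbours get similar hidden representations
            graph = sum(w * np.sum((h[u] - h[v]) ** 2)
                        for (u, v), w in zip(edges, weights)) / len(edges)
            return sup + alpha * graph

        rng = np.random.default_rng(1)
        n, d, c = 4, 8, 3
        h, logits = rng.standard_normal((n, d)), rng.standard_normal((n, c))
        print(ngm_loss(h, logits, {0: 1, 2: 0},
                       [(0, 1), (1, 2), (2, 3)], [1.0, 0.5, 1.0]))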

    Groundwater Legal Regime in India: Towards Ensuring Equity and Human Rights

    This paper examines the existing and evolving groundwater law in India in the context of its capacity to ensure equity, sustainability and the realisation of human rights. A critical evaluation of the existing legal framework is followed by an analysis of key gaps in that framework. The paper also suggests basic principles, norms and approaches that should form the underlying elements of a comprehensive groundwater law capable of ensuring sustainability, equity and human rights.