
    A new heap game

    Given $k\ge 3$ heaps of tokens. The moves of the 2-player game introduced here are either to take a positive number of tokens from at most $k-1$ heaps, or to remove the same positive number of tokens from all $k$ heaps. We analyse this extension of Wythoff's game and provide a polynomial-time strategy for it.
    Comment: To appear in Computer Games 199
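As a concrete illustration only (a brute-force check on small positions, not the paper's polynomial-time strategy; the function names are ours), the move rules above can be coded directly:

```python
from functools import lru_cache
from itertools import combinations, product

def legal_moves(heaps):
    """All positions reachable in one move from `heaps` (a tuple of heap
    sizes): take a positive number of tokens from at most k-1 heaps, or
    remove the same positive number of tokens from all k heaps."""
    k = len(heaps)
    moves = set()
    # (a) take tokens from a nonempty selection of at most k-1 heaps
    for r in range(1, k):
        for idx in combinations(range(k), r):
            if any(heaps[i] == 0 for i in idx):
                continue
            for amounts in product(*(range(1, heaps[i] + 1) for i in idx)):
                nxt = list(heaps)
                for i, a in zip(idx, amounts):
                    nxt[i] -= a
                moves.add(tuple(nxt))
    # (b) take the same positive number of tokens from every heap
    for a in range(1, min(heaps) + 1):
        moves.add(tuple(h - a for h in heaps))
    return moves

@lru_cache(maxsize=None)
def is_p_position(heaps):
    """True iff the player to move loses under optimal play: every move
    leads to an N-position (a win for the player moving there)."""
    return all(not is_p_position(m) for m in legal_moves(heaps))
```

This exhaustive search is exponential in the heap sizes; the point of the paper is precisely that the P-positions admit a polynomial-time characterization.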

    Game saturation of intersecting families

    We consider the following combinatorial game: two players, Fast and Slow, alternately claim $k$-element subsets of $[n]=\{1,2,\dots,n\}$, one at each turn, where every claimed set must intersect all previously claimed sets. The game ends when there does not exist any unclaimed $k$-subset that meets all already claimed sets. The score of the game is the total number of sets claimed by the two players; the aim of Fast is to keep the score as low as possible, while the aim of Slow is to postpone the game's end as long as possible. The game saturation number is the score when both players play according to an optimal strategy. To be precise, we have to distinguish two cases depending on which player takes the first move. Let $gsat_F(\mathbb{I}_{n,k})$ and $gsat_S(\mathbb{I}_{n,k})$ denote the score of the saturation game under optimal play when the game starts with Fast's or Slow's move, respectively. We prove that $\Omega_k(n^{k/3-5}) \le gsat_F(\mathbb{I}_{n,k}), gsat_S(\mathbb{I}_{n,k}) \le O_k(n^{k-\sqrt{k}/2})$ holds.
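For small parameters the end condition of this game can be checked directly. A minimal sketch (our own helper, not from the paper), with claimed sets represented as Python sets over {1,...,n}:

```python
from itertools import combinations

def is_saturated(claimed, n, k):
    """True when the game described above has ended: no unclaimed
    k-element subset of {1,...,n} intersects every claimed set."""
    for cand in combinations(range(1, n + 1), k):
        s = set(cand)
        if s in claimed:
            continue  # already claimed by one of the players
        if all(s & c for c in claimed):
            return False  # `s` would still be a legal claim
    return True
```

The score of a finished game is then simply `len(claimed)`; computing the game saturation number additionally requires optimal play by both sides.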

    Ten Misconceptions from the History of Analysis and Their Debunking

    The widespread idea that infinitesimals were "eliminated" by the "great triumvirate" of Cantor, Dedekind, and Weierstrass is refuted by an uninterrupted chain of work on infinitesimal-enriched number systems. The elimination claim is an oversimplification created by triumvirate followers, who tend to view the history of analysis as a pre-ordained march toward the radiant future of Weierstrassian epsilontics. In the present text, we document distortions of the history of analysis stemming from the triumvirate ideology of ontological minimalism, which identified the continuum with a single number system. Such anachronistic distortions characterize the received interpretation of Stevin, Leibniz, d'Alembert, Cauchy, and others.
    Comment: 46 pages, 4 figures; Foundations of Science (2012). arXiv admin note: text overlap with arXiv:1108.2885 and arXiv:1110.545

    A Cauchy-Dirac delta function

    The Dirac delta function has solid roots in 19th century work in Fourier analysis and singular integrals by Cauchy and others, anticipating Dirac's discovery by over a century, and illuminating the nature of Cauchy's infinitesimals and his infinitesimal definition of delta.
    Comment: 24 pages, 2 figures; Foundations of Science, 201

    Semi-Hard Scattering Unraveled from Collective Dynamics by Two-Pion Azimuthal Correlations in 158 A GeV/c Pb + Au Collisions

    Elliptic flow and two-particle azimuthal correlations of charged hadrons and high-$p_T$ pions ($p_T > 1$ GeV/$c$) have been measured close to mid-rapidity in 158$A$ GeV/$c$ Pb+Au collisions by the CERES experiment. Elliptic flow ($v_2$) rises linearly with $p_T$ to a value of about 10% at 2 GeV/$c$. Beyond $p_T \approx 1.5$ GeV/$c$, the slope decreases considerably, possibly indicating a saturation of $v_2$ at high $p_T$. Two-pion azimuthal anisotropies for $p_T > 1.2$ GeV/$c$ exceed the elliptic flow values by about 60% in mid-central collisions. These non-flow contributions are attributed to near-side and back-to-back jet-like correlations, the latter exhibiting centrality-dependent broadening.
    Comment: Submitted to Phys. Rev. Letters, 4 pages, 5 figure
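For reference, $v_2$ is the second Fourier coefficient of the particle azimuthal distribution relative to the reaction plane. A textbook event-plane estimate (a generic sketch, not the CERES analysis code; the reaction-plane angle is assumed known here) is simply the event average of $\cos n(\phi-\Psi)$:

```python
import math

def v_n(phis, psi, n=2):
    """Estimate the n-th azimuthal anisotropy coefficient as the event
    average <cos n(phi - Psi)> over particle angles `phis`, where `psi`
    is the (here assumed known) reaction-plane angle."""
    return sum(math.cos(n * (phi - psi)) for phi in phis) / len(phis)
```

In a real analysis the reaction plane must itself be estimated from the data, and the raw coefficient corrected for the event-plane resolution.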

    Leibniz's Infinitesimals: Their Fictionality, Their Modern Implementations, And Their Foes From Berkeley To Russell And Beyond

    Many historians of the calculus deny significant continuity between the infinitesimal calculus of the 17th century and 20th century developments such as Robinson's theory. Robinson's hyperreals, while providing a consistent theory of infinitesimals, require the resources of modern logic; thus many commentators are comfortable denying a historical continuity. A notable exception is Robinson himself, whose identification with the Leibnizian tradition inspired Lakatos, Laugwitz, and others to consider the history of the infinitesimal in a more favorable light. In spite of his Leibnizian sympathies, Robinson regards Berkeley's criticisms of the infinitesimal calculus as aptly demonstrating the inconsistency of reasoning with historical infinitesimal magnitudes. We argue that Robinson, among others, overestimates the force of Berkeley's criticisms by underestimating the mathematical and philosophical resources available to Leibniz. Leibniz's infinitesimals are fictions: not logical fictions, as Ishiguro proposed, but rather pure fictions, like imaginaries, which are not eliminable by some syncategorematic paraphrase. We argue that Leibniz's defense of infinitesimals is more firmly grounded than Berkeley's criticism thereof. We show, moreover, that Leibniz's system for differential calculus was free of logical fallacies. Our argument strengthens the conception of modern infinitesimals as a development of Leibniz's strategy of relating inassignable to assignable quantities by means of his transcendental law of homogeneity.
    Comment: 69 pages, 3 figure

    Azimuthal anisotropy of pi^0 and eta mesons in Au+Au collisions at sqrt(s_NN)=200 GeV

    The azimuthal anisotropy coefficients v_2 and v_4 of pi^0 and eta mesons are measured in Au+Au collisions at sqrt(s_NN)=200 GeV, as a function of transverse momentum p_T (1-14 GeV/c) and centrality. The extracted v_2 coefficients are found to be consistent between the two meson species over the measured p_T range. The ratio of v_4/v_2^2 for pi^0 mesons is found to be independent of p_T for 1-9 GeV/c, implying a lack of sensitivity of the ratio to the change of underlying physics with p_T. Furthermore, the ratio of v_4/v_2^2 is systematically larger in central collisions, which may reflect the combined effects of fluctuations in the initial collision geometry and finite viscosity in the evolving medium.
    Comment: 384 authors, 71 institutions, 11 pages, 9 figures, and 2 tables. Submitted to Physical Review C. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm

    Centrality categorization for R_{p(d)+A} in high-energy collisions

    High-energy proton- and deuteron-nucleus collisions provide an excellent tool for studying a wide array of physics effects, including modifications of parton distribution functions in nuclei, gluon saturation, and color neutralization and hadronization in a nuclear environment, among others. All of these effects are expected to have a significant dependence on the size of the nuclear target and the impact parameter of the collision, also known as the collision centrality. In this article, we detail a method for determining centrality classes in p(d)+A collisions via cuts on the multiplicity at backward rapidity (i.e., the nucleus-going direction) and for determining systematic uncertainties in this procedure. For d+Au collisions at sqrt(s_NN)=200 GeV we find that the connection to geometry is confirmed by measuring the fraction of events in which a neutron from the deuteron does not interact with the nucleus. As an application, we consider the nuclear modification factors R_{p(d)+A}, for which there is a potential bias in the measured centrality-dependent yields due to auto-correlations between the process of interest and the backward-rapidity multiplicity. We determine the bias correction factor within this framework. This method is further tested using the HIJING Monte Carlo generator. We find that for d+Au collisions at sqrt(s_NN)=200 GeV, these bias corrections are small and vary by less than 5% (10%) up to p_T = 10 (20) GeV. In contrast, for p+Pb collisions at sqrt(s_NN)=5.02 TeV we find these bias factors are an order of magnitude larger and strongly p_T dependent, likely due to the larger effect of multi-parton interactions.
    Comment: 375 authors, 18 pages, 16 figures, 4 tables. Submitted to Phys. Rev. C. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
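The nuclear modification factor discussed above is, in essence, the per-binary-collision yield in p(d)+A divided by the p+p yield. A minimal sketch of the ratio and its statistical error, assuming uncorrelated yield errors added in quadrature (our simplification; the paper's treatment of systematic uncertainties and centrality bias is far more involved):

```python
import math

def r_pa(yield_pa, yield_pp, ncoll):
    """Nuclear modification factor R_{p(d)+A}: the p(d)+A yield per
    binary nucleon-nucleon collision divided by the p+p yield."""
    return yield_pa / (ncoll * yield_pp)

def r_pa_stat_err(yield_pa, err_pa, yield_pp, err_pp, ncoll):
    """Statistical error on R, propagating the two yield errors in
    quadrature (assumes they are uncorrelated)."""
    r = r_pa(yield_pa, yield_pp, ncoll)
    return r * math.hypot(err_pa / yield_pa, err_pp / yield_pp)
```

Without nuclear effects R is unity by construction; departures from unity, after correcting for the centrality bias described above, signal cold-nuclear-matter effects.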

    Upsilon (1S+2S+3S) production in d+Au and p+p collisions at sqrt(s_NN)=200 GeV and cold-nuclear matter effects

    The three Upsilon states, Upsilon(1S+2S+3S), are measured in d+Au and p+p collisions at sqrt(s_NN)=200 GeV and rapidities 1.2<|y|<2.2 by the PHENIX experiment at the Relativistic Heavy-Ion Collider. Cross sections for the inclusive Upsilon(1S+2S+3S) production are obtained. The inclusive yields per binary collision for d+Au collisions relative to those in p+p collisions (R_dAu) are found to be 0.62 +/- 0.26 (stat) +/- 0.13 (syst) in the gold-going direction and 0.91 +/- 0.33 (stat) +/- 0.16 (syst) in the deuteron-going direction. The measured results are compared to a nuclear-shadowing model, EPS09 [JHEP 04, 065 (2009)], combined with a final-state breakup cross section, sigma_br, and compared to lower energy p+A results. We also compare the results to the PHENIX J/psi results [Phys. Rev. Lett. 107, 142301 (2011)]. The rapidity dependence of the observed Upsilon suppression is consistent with lower energy p+A measurements.
    Comment: 495 authors, 11 pages, 9 figures, 5 tables. Submitted to Phys. Rev. C. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm